2 Getting Started with Oracle GoldenGate

Oracle GoldenGate supports two architectures, the Classic Architecture and the Microservices Architecture (MA).

Oracle GoldenGate can be configured for the following purposes:
  • A static extraction of data records from one database and the loading of those records to another database.

  • Continuous extraction and replication of transactional Data Manipulation Language (DML) operations and data definition language (DDL) changes (for supported databases) to keep source and target data consistent.

  • Data extraction from supported database sources and replication to Big Data and file targets using Oracle GoldenGate for Big Data.

Oracle GoldenGate Architectures Overview

The following comparison describes the two Oracle GoldenGate architectures, Classic Architecture and Microservices Architecture, and when you should use each of them.

What is it?

Oracle GoldenGate Classic Architecture is the original architecture for enterprise replication. It provides the processes and files required to effectively transfer transactional data across a variety of topologies. These processes and files form the main components of the Classic Architecture, which was the main product installation method until the Oracle GoldenGate 12c (12.3.0.1) release.

Oracle GoldenGate Microservices Architecture is a microservices architecture that enables REST services as part of the Oracle GoldenGate environment. The REST-enabled services provide API end-points that can be leveraged for remote configuration, administration, and monitoring through web-based consoles, an enhanced command line interface, PL/SQL and scripting languages.

When should I use it?

Oracle GoldenGate can be installed and configured to use the Classic Architecture only if an MA release is not available for that platform. The Classic Architecture can be used in the following scenarios:
  • A static extraction of data records from one database and the loading of those records to another database.

  • Continuous extraction and replication of transactional Data Manipulation Language (DML) operations and Data Definition Language (DDL) changes (for supported databases) to keep source and target data consistent.

  • Extraction from a database and replication to a file outside the database.

  • Capture from heterogeneous database sources.

Oracle GoldenGate can be installed and configured to use the Oracle GoldenGate Microservices Architecture for the following purposes:
  • Large scale and cloud deployments with fully-secure HTTPS interfaces and Secure WebSockets for streaming data.

  • Simpler management of multiple Oracle GoldenGate environments and control of user access for the different aspects of Oracle GoldenGate setup and monitoring.

  • Support for system-managed database sharding to deliver fine-grained, multi-master replication where all shards are writable, and each shard can be partially replicated to other shards within a shardgroup.

  • Support for the following features:

    • Thin and browser-based clients

    • Network security

    • User Authorization

    • Distributed deployments

    • Remote administration

    • Performance monitoring and orchestration

    • Coordination with other systems and services in an Oracle Database environment.

    • Custom embedding of Oracle GoldenGate into applications or to use secure, remote HTML5 applications.

Which databases are supported?

Classic Architecture supports all databases listed in the Oracle GoldenGate certification matrix.

MA supports only Oracle Database for an end-to-end MA-only topology. However, a source Oracle GoldenGate Classic Architecture installation associated with a heterogeneous database can replicate to a target Oracle GoldenGate MA with Oracle Database, and a source Oracle GoldenGate MA with Oracle Database can replicate to a target Oracle GoldenGate Classic Architecture installation with a heterogeneous database.


Oracle GoldenGate Supported Processing Methods and Databases

Oracle GoldenGate enables the exchange and manipulation of data at the transaction level among multiple, heterogeneous platforms across the enterprise. It moves committed transactions with transaction integrity and minimal overhead on your existing infrastructure. Its modular architecture gives you the flexibility to extract and replicate selected data records, transactional changes, and changes to DDL (data definition language) across a variety of topologies.

Note:

Support for DDL, certain topologies, and capture or delivery configurations varies by the database type. See Using Oracle GoldenGate for Oracle Database and Using Oracle GoldenGate for Heterogeneous Databases for detailed information about supported features and configurations.

Here is a list of the supported processing methods.

Database           Log-Based Extraction (capture)   Non-Log-Based Extraction (1) (capture)   Replication (delivery)
DB2 for i          N/A                              N/A                                      X
DB2 LUW            X                                N/A                                      X
DB2 z/OS           X                                N/A                                      X
Oracle Database    X                                N/A                                      X
MySQL              X                                N/A                                      X
SQL Server         N/A                              X                                        X
Teradata           N/A                              N/A                                      X

Footnote 1 Non-Log-Based Extraction uses a capture module that communicates with the Oracle GoldenGate API to send change data to Oracle GoldenGate.

Components of Oracle GoldenGate Microservices Architecture

You can use Oracle GoldenGate Microservices Architecture to configure and manage your data replication using an HTML user interface.

There are five main components of the Oracle GoldenGate MA. The following diagram illustrates how replication processes operate within a secure REST API environment.

(Illustration: ggcon_dt_005a_servarch.jpg)

The Oracle GoldenGate MA provides all the tools you need to configure, monitor, and administer deployments and security. It is designed with the industry-standard HTTPS communication protocol and the JavaScript Object Notation (JSON) data interchange format. In addition, the architecture provides you with the ability to verify the identity of clients with basic authentication or Secure Sockets Layer client certificates.

The following diagram shows a variety of clients (Oracle products, command line, browsers, and programmatic REST API interfaces) that you can use to administer your deployments using the service interfaces.

(Illustration: ggcon_dt_004a_clients.png)


What is a Service Manager?

A Service Manager acts as a watchdog for other services available with Microservices Architecture.

A Service Manager allows you to manage one or multiple Oracle GoldenGate deployments on a local host.

Optionally, the Service Manager can run as a system service. It maintains inventory and configuration information about your deployments and allows you to maintain multiple local deployments. Using the Service Manager, you can start and stop instances, and query deployments and the other services.
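For example, the Admin Client reaches a deployment through its Service Manager. The following is a minimal sketch, assuming a hypothetical Service Manager listening at https://ogghost.example.com:9001, a deployment named depl01, and an administrator account named oggadmin:

    CONNECT https://ogghost.example.com:9001 DEPLOYMENT depl01 AS oggadmin

The Admin Client prompts for the password and then directs subsequent commands to the services of that deployment.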

What is an Administration Server?

The Administration Server supervises, administers, manages, and monitors processes within an Oracle GoldenGate deployment.

The Administration Server operates as the central control entity for managing the replication components in your Oracle GoldenGate deployments. You use it to create and manage your local Extract and Replicat processes without requiring access to the server where Oracle GoldenGate is installed. The key feature of the Administration Server is its REST API service interface, which can be accessed from any HTTP or HTTPS client, such as the Microservices Architecture service interfaces, or from other clients such as Perl and Python.

In addition, the Admin Client can be used to make REST API calls that communicate directly with the Administration Server. See What is the Admin Client?

The Administration Server is responsible for coordinating and orchestrating Extracts, Replicats, and paths to support greater automation and operational management. Its operation and behavior are controlled through published query and service interfaces. These interfaces allow clients to issue commands and control instructions to the Administration Server using REST API invocations that carry JSON payloads.

The Administration Server includes an embedded web application that you can use directly with any web browser and does not require any client software installation.
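For example, the process-management tasks listed below can be issued as Admin Client commands, which are translated into REST calls to the Administration Server. The following is a minimal sketch for an Oracle source, assuming a hypothetical credential alias ogg_admin, an Extract group exta, and a trail aa; the exact options vary by database type and capture mode:

    DBLOGIN USERIDALIAS ogg_admin
    ADD EXTRACT exta, INTEGRATED TRANLOG, BEGIN NOW
    REGISTER EXTRACT exta DATABASE
    ADD EXTTRAIL aa, EXTRACT exta
    START EXTRACT exta
    INFO EXTRACT exta, DETAIL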

Use the Administration Server to create and manage:

  • Extract and Replicat processes

    • Add, alter, and delete

    • Register and unregister

    • Start and stop

    • Review process information, statistics, reports, and status including LAG and checkpoints

    • Retrieve the report and discard files

  • Configuration (parameter) files

  • Checkpoint, trace, and heartbeat tables

  • Supplemental logging for procedural replication, schema, and tables

  • Tasks both custom and standard, such as auto-restart and purge trails

  • Credential stores

  • Encryption keys (MASTERKEY)

  • Add users and assign their roles

What is a Receiver Server?

A Receiver Server is the central control service that handles all incoming trail files. It interoperates with the Distribution Server and provides compatibility with the classic architecture pump for remote classic deployments.

A Receiver Server replaces multiple discrete target-side Collectors with a single instance service.

Use the Receiver Server to:

  • Monitor path events

  • Query the status of incoming paths

  • View the statistics of incoming paths

  • Diagnose path issues

WebSockets, initiated over HTTPS, is the default full-duplex streaming protocol used by the Receiver Server. It enables you to fully secure your data using SSL. The Receiver Server seamlessly traverses HTTP forward and reverse proxy servers.

Additionally, the Receiver Server supports the following protocols:

  • UDT—UDP-based protocol for wide area networks. For more information, see http://udt.sourceforge.net/.

  • Classic Oracle GoldenGate protocol—For classic deployments so that the Distribution Server communicates with the Collector and the Data Pump communicates with the Receiver Server.

Note:

TCP encryption does not work in a mixed environment of Classic and Microservices architecture. The Distribution Server in Microservices Architecture cannot be configured to use the TCP encryption to communicate with the Server Collector in Classic Architecture running in a deployment. Also, the Receiver Server in Microservices Architecture cannot accept a connection request from a data pump in Classic Architecture configured with RMTHOST ... ENCRYPT parameter running in a deployment.

What is a Distribution Server?

A Distribution Server is a service that functions as a networked data distribution agent in support of conveying and processing data and commands in a distributed deployment. It is a high performance application that is able to handle multiple commands and data streams from multiple source trail files, concurrently.

Distribution Server replaces the classic multiple source-side data pumps with a single instance service. This server distributes one or more trails to one or more destinations and provides lightweight filtering only (no transformations).

Multiple communication protocols can be used, which provide you the ability to tune network parameters on a per path basis. These protocols include:

  • Oracle GoldenGate protocol for communication between the Distribution Server and the Collector in a non-services-based (classic) target. It is used for interoperability.

    Note:

    TCP encryption does not work in a mixed environment of Classic and Microservices architecture. The Distribution Server in Microservices Architecture cannot be configured to use the TCP encryption to communicate with the Server Collector in Classic Architecture running in a deployment. Also, the Receiver Server in Microservices Architecture cannot accept a connection request from a data pump in Classic Architecture configured with RMTHOST ... ENCRYPT parameter running in a deployment.

  • WebSockets for HTTPS-based streaming, which relies on SSL security.

  • UDT for wide area networks.

  • Proxy support for cloud environments:

    • SOCKS5 for any network protocol.

    • HTTP for HTTP-type protocols only, including WebSocket.

  • Passive Distribution Server support to initiate path creation from a remote site. (Paths are source-to-destination replication configurations.) This capability is not included in this release.

Note:

There is no content transformation by this service.
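For example, a distribution path that reads a local trail and streams it over secure WebSockets to the Receiver Server of a target deployment can be created and controlled from the Admin Client. The following is a sketch only: the path, host, port, and trail names are hypothetical, and the exact source and target URI formats depend on your release and security configuration:

    ADD DISTPATH path_ab SOURCE trail://localhost:9002/services/v2/sources?trail=aa TARGET wss://tgthost.example.com:9103/services/v2/targets?trail=ab
    START DISTPATH path_ab
    INFO DISTPATH path_ab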

What is a Performance Metrics Server?

To access the Performance Metrics Server APIs, you need the Oracle GoldenGate Management Pack Plug-in.

The Performance Metrics Server uses the metrics service to collect and store instance deployment performance results. This metrics collection and repository is separate from the administration layer information collection. You can monitor performance metrics using other embedded web applications and use the data to tune your deployments for maximum performance. All Oracle GoldenGate processes send metrics to the Performance Metrics Server. You can use the Performance Metrics Server in both Microservices Architecture and Classic Architecture.

Use the Performance Metrics Server to:

  • Query for various metrics and receive responses in the services JSON format or the classic XML format

  • Integrate third party metrics tools

  • View error logs

  • View active process status

  • Monitor system resource utilization

What is the Admin Client?

The Admin Client is a command line utility, similar to the classic GGSCI utility. You can use it to issue the complete range of commands that configure, control, and monitor Oracle GoldenGate.

The Admin Client can be used to create, modify, and remove processes, as an alternative to the MA web interfaces. It is a standalone client and does not run as one of the MA services such as the Administration Server or Distribution Server. For example, you can either use the Admin Client to execute all the commands necessary to create and customize a new Extract, or use the Administration Server available with MA to configure an Extract.

Note:

Ensure that the OGG_HOME, OGG_VAR_HOME, and OGG_ETC_HOME environment variables are set correctly.

For more information on environment variables, see Setting Environment Variables.

The way that you use the Admin Client is similar to the way that you use GGSCI, but it differs in some ways to support the MA design:

GGSCI                                            Admin Client
Connects to local processes                      Connects to any MA deployment
Requires local machine access, typically SSH     Requires HTTP or HTTPS access
Application logic executed locally               Application logic executed remotely
Requires connection to DBMS                      No connection to DBMS required
Uses operating system security                   Uses MA security
Authenticated and authorized once                Authenticated and authorized for each operation
No special connect semantics                     Requires a CONNECT command
Supports USERID, PASSWORD, and USERIDALIAS       Supports USERIDALIAS only
REGISTER EXTRACT before ADD EXTRACT              REGISTER EXTRACT after ADD EXTRACT
Non-secure communications                        Encrypted communications using SSL
Uses pump processes                              Uses the Distribution Server

The Admin Client was designed with GGSCI as the basis. The following table describes the new, deleted, and deprecated commands in the Admin Client:

Table 2-1 Admin Client Commands

New Commands:
  CONNECT
  DISCONNECT
  [START | STATUS | STOP] SERVICE
  [ADD | ALTER | DELETE | INFO | KILL | START | STATS | STOP] DISTPATH
  [EDIT | VIEW] GLOBALS
  CD

Deleted Commands and Processes:
  * MGR
  * JAGENT
  * CREATE DATASTORE
  SUBDIRS
  FC
  DUMPDDL

Deprecated Commands:
  INFO MARKER
  ADD CREDENTIALSTORE
  [CREATE | OPEN] WALLET
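Putting these differences together, a typical Admin Client session might look like the following sketch; the URL, deployment, and group names are hypothetical:

    CONNECT https://ogghost.example.com:9001 DEPLOYMENT depl01 AS oggadmin
    INFO ALL
    START EXTRACT exta
    INFO EXTRACT exta, DETAIL
    DISCONNECT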

Roadmap for Implementing the Microservices Architecture

Microservices Architecture is based on REST APIs. After you install the Microservices Architecture, it is accessible through an HTML5 interface, REST APIs, and the command line.

Before you start using the Microservices Architecture, ensure the following:
  • The database to which Oracle GoldenGate Microservices Architecture connects is accessible.

  • Oracle GoldenGate users are configured.

This topic describes the roadmap for implementing Microservices Architecture components and clients.

Task                                    More Information
Installing MA                           Installing the Microservices Architecture for Oracle Database
Starting the Service Manager            How to Start and Stop the Service Manager
Starting the Microservices              Quick Tour of the Service Manager Home Page
Starting the Processes                  Quick Tour of the Administration Server Home Page
(Optional) Starting the Admin Client    How to Use the Admin Client

Components of Oracle GoldenGate Classic Architecture

You can use the Oracle GoldenGate Classic Architecture to configure and manage your data replications from the command line.

(Illustration: logicalarch2.png)

Note:

This is the basic configuration. Depending on your business needs and use case, you can configure different variations of this model.


What is a Manager?

Manager is the control process of Oracle GoldenGate. Manager must be running on each system in the Oracle GoldenGate configuration before the Extract or Replicat processes can be started.

Manager must also remain running while the Extract and Replicat processes are running so that resource management functions are performed. One Manager process can control many Extract or Replicat processes.

Manager performs the following functions:

  • Starts Oracle GoldenGate processes

  • Starts dynamic processes

  • Maintains port numbers for processes

  • Purges Trail files based on retention rules

  • Creates event, error, and threshold reports

On Windows systems, Manager can run as a service. See https://www.oracle.com/pls/topic/lookup?ctx=en/middleware/goldengate/core/19.1/understanding&id=GWUAD-GUID-5005AF6D-76D2-4C72-80E2-AD33C24F0C26 for more information about the Manager process and configuring TCP/IP connections.
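For example, a minimal Manager parameter file (MGR.prm) covering these functions might look like the following sketch; the port numbers, retention values, and thresholds are illustrative only:

    -- TCP/IP port on which Manager listens
    PORT 7809
    -- Ports that Manager can assign to dynamic processes such as Collector
    DYNAMICPORTLIST 7810-7820
    -- Purge trail files that all processes have finished reading, keeping three days
    PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3
    -- Restart Extract processes that abend
    AUTORESTART EXTRACT *, RETRIES 3, WAITMINUTES 5
    -- Generate lag threshold reports
    LAGREPORTMINUTES 15
    LAGCRITICALMINUTES 30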

What is a Data Pump?

Data pump is a secondary Extract group within the source Oracle GoldenGate configuration.

If you configure a data pump, the primary Extract process writes the captured operations to a trail on the source system; the data pump then reads this trail and sends the data operations over the network to a remote trail on the target. If a data pump is not used, Extract must send the captured operations directly to a remote trail on the target system. Configuring a data pump is highly recommended for most configurations because it adds storage flexibility and isolates the primary Extract process from TCP/IP activity.

In general, a data pump can perform data filtering, mapping, and conversion.

The data pump can be configured in two ways:
  • Perform data manipulation: Data Pump can be configured to perform data filtering, mapping, and conversion.

  • Perform no data manipulation: Data Pump can be configured in pass-through mode, where data is passively transferred as-is, without manipulation. Pass-through mode increases the throughput of the Data Pump, because all of the functionality that looks up object definitions is bypassed.
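For example, a pass-through data pump parameter file might look like the following sketch; the group, host, trail, and schema names are hypothetical:

    EXTRACT pmpa
    -- Pass trail records through without looking up object definitions
    PASSTHRU
    -- Target system and Manager port
    RMTHOST tgthost.example.com, MGRPORT 7809
    -- Remote trail on the target system
    RMTTRAIL ./dirdat/ab
    TABLE hr.*;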

Though configuring a data pump is optional, Oracle recommends it for most configurations. Some reasons for using a data pump include the following:

  • Protection against network and target failures: In a basic Oracle GoldenGate configuration, with only a trail on the target system, there is nowhere on the source system to store the data operations that Extract continuously extracts into memory. If the network or the target system becomes unavailable, Extract could run out of memory and abend. However, with a trail and data pump on the source system, captured data can be moved to disk, preventing the abend of the primary Extract. When connectivity is restored, the data pump captures the data from the source trail and sends it to the target system(s).

  • You are implementing several phases of data filtering or transformation. When using complex filtering or data transformation configurations, you can configure a data pump to perform the first transformation either on the source system or on the target system, or even on an intermediary system, and then use another data pump or the Replicat group to perform the second transformation.

  • Consolidating data from many sources to a central target. When synchronizing multiple source databases with a central target database, you can store extracted data operations on each source system and use data pumps on each of those systems to send the data to a trail on the target system. Dividing the storage load between the source and target systems reduces the need for massive amounts of space on the target system to accommodate data arriving from multiple sources.

  • Synchronizing one source with multiple targets. When sending data to multiple target systems, you can configure data pumps on the source system for each target. If network connectivity to any of the targets fails, data can still be sent to the other targets.

What is a Collector?

Collector is a process that runs in the background on the target system. It is started by the Manager process and reassembles the transactional data into a target trail.

When the Manager receives a connection request from an Extract process, the Collector scans for and binds to an available port and then sends the port number to the Manager for assignment to the requesting Extract process. The Collector also receives the captured data that is sent by the Extract process and writes it to the remote trail file.

Collector is started automatically by the Manager when a network connection is required, so Oracle GoldenGate users do not interact with it. Collector can receive information from only one Extract process, so there is one Collector for each Extract that you use. Collector terminates when the associated Extract process terminates.

Note:

Collector can be run manually, if needed. This is known as a static Collector (as opposed to the regular, dynamic Collector). Several Extract processes can share one static Collector; however, a one-to-one ratio is optimal. A static Collector can be used to ensure that the process runs on a specific port.
By default, Extract initiates TCP/IP connections from the source system to Collector on the target, but Oracle GoldenGate can be configured so that Collector initiates connections from the target. Initiating connections from the target might be required if, for example, the target is in a trusted network zone, but the source is in a less trusted zone.

What is GGSCI?

You can use the Oracle GoldenGate Software Command Interface (GGSCI) commands to create data replications. This is the command interface between you and Oracle GoldenGate functional components.

To start GGSCI, change directories to the Oracle GoldenGate installation directory, and then run the ggsci executable file.

Note:

The environment variable OGG_HOME must be set before GGSCI can be started.
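For example, assuming Oracle GoldenGate is installed in /u01/app/ogg (a hypothetical path):

    $ export OGG_HOME=/u01/app/ogg
    $ cd $OGG_HOME
    $ ./ggsci
    GGSCI> INFO ALL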

Common Data Replication Processes

There are a number of data replication processes that are common to both Oracle GoldenGate architectures.


What is an Extract?

Extract is a process that runs against the source database or, for Oracle Database only, against a downstream mining database that receives the redo data generated by the true source database located elsewhere. This process is the extraction, or data capture, mechanism of Oracle GoldenGate.

You can configure an Extract for the following use cases:
  • Initial Loads: When you set up Oracle GoldenGate for initial loads, the Extract process captures the current, static set of data directly from the source objects.

  • Change Synchronization: When you set up Oracle GoldenGate to keep the source data synchronized with another set of data, the Extract process captures the DML and DDL operations performed on the configured objects after the initial synchronization has taken place. Extracts can run locally on the same server as the database or on another server using the downstream Integrated Extract for reduced overhead. It stores these operations until it receives commit records or rollbacks for the transactions that contain them. If it receives a rollback, it discards the operations for that transaction. If it receives a commit, it persists the transaction to disk in a series of files called a trail, where it is queued for propagation to the target system. All the operations in each transaction are written to the trail as a sequentially organized transaction unit and are in the order in which they were committed to the database (commit sequence order). This design ensures both speed and data integrity.

    Note:

    Extract ignores operations on objects that are not in the Extract configuration, even though a transaction may also include operations on objects that are in the Extract configuration.
The Extract process can be configured to extract data from three types of data sources:
  • Source tables: This source type is used for initial loads.

  • Database recovery logs or transaction logs: While capturing from the logs, the actual method varies depending on the database type. An example of this source type is the Oracle Database redo logs.

  • Third-party capture modules: This method provides a communication layer that passes data and metadata from an external API to the Extract API. The database vendor or a third-party vendor provides the components that extract the data operations and pass them to Extract.
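For example, adding a change-synchronization Extract that captures from the transaction log and writes to a local trail might look like the following sketch. The group, trail, credential alias, and schema names are hypothetical, and the exact ADD EXTRACT options depend on the database type and capture mode:

    -- GGSCI or Admin Client
    ADD EXTRACT exta, TRANLOG, BEGIN NOW
    ADD EXTTRAIL ./dirdat/aa, EXTRACT exta

    -- Extract parameter file (exta.prm)
    EXTRACT exta
    USERIDALIAS ogg_admin
    EXTTRAIL ./dirdat/aa
    TABLE hr.*;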

What is a Trail?

A trail is a series of files on disk where Oracle GoldenGate stores the captured changes to support the continuous extraction and replication of database changes.

A trail can exist on the source system, an intermediary system, the target system, or any combination of those systems, depending on how you configure Oracle GoldenGate. On the local system, it is known as an Extract trail (or local trail). On a remote system, it is known as a remote trail. By using a trail for storage, Oracle GoldenGate supports data accuracy and fault tolerance. The use of a trail also allows extraction and replication activities to occur independently of each other. With these processes separated, you have more choices for how data is processed and delivered. For example, instead of extracting and replicating changes continuously, you could extract changes continuously and store them in the trail for replication to the target later, whenever the target application needs them.

In addition, trails allow Oracle GoldenGate to operate in a heterogeneous environment. The data is stored in a trail file in a consistent format, so it can be read by the Replicat process for all supported databases. For more information, see About the Oracle GoldenGate Trail.

Processes that Write to the Trail File:

In Oracle GoldenGate Classic, the Extract and the data pump processes write to the trail. Only one Extract process can write to a given local trail. All local trails must have different full-path names though you can use the same trail names in different paths.

Multiple data pump processes can each write to a trail of the same name, but the physical trails themselves must reside on different remote systems, such as in a data-distribution topology. For example, a data pump named pumpm and a data pump named pumpn can both reside on sys01 and write to a remote trail named aa. Pumpm can write to trail aa on sys02, while pumpn can write to trail aa on sys03.

In Oracle GoldenGate MA, Distribution Server and distribution paths are used to write the remote trail.

Processes that Read from the Trail File:

The data pump and Replicat processes read from the trail files. The data pump extracts DML and DDL operations from a local trail that is linked to an Extract process, performs further processing if needed, and transfers the data to a trail that is read by the next Oracle GoldenGate process downstream (typically Replicat, but could be another data pump if required).

The Replicat process reads the trail and applies the replicated DML and DDL operations to the target database.

Trail File Creation and Maintenance:

The trail files are created as needed during processing. You specify a two-character name for the trail when you add it to the Oracle GoldenGate configuration with the ADD RMTTRAIL or ADD EXTTRAIL command. By default, trails are stored in the dirdat sub-directory of the Oracle GoldenGate directory. You can specify a six or nine digit sequence number using the TRAIL_SEQLEN_9D | TRAIL_SEQLEN_6D GLOBALS parameter; TRAIL_SEQLEN_9D is set by default.

As each new file is created, it inherits the two-character trail name appended with a unique nine digit sequence number from 000000000 through 999999999 (for example c:\ggs\dirdat\tr000000001). When the sequence number reaches 999999999, the numbering starts over at 000000000, and previous trail files are overwritten. Trail files can be purged on a routine basis by using the Manager parameter PURGEOLDEXTRACTS.

You can create more than one trail to separate the data from different objects or applications. You link the objects that are specified in a TABLE or SEQUENCE parameter to a trail that is specified with an EXTTRAIL or RMTTRAIL parameter in the Extract parameter file. To maximize throughput, and to minimize I/O load on the system, extracted data is sent into and out of a trail in large blocks. Transactional order is preserved.
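For example, the following sketch adds a local trail and a remote trail, each with a two-character name, and optionally switches the configuration to six-digit sequence numbers in the GLOBALS file; the group names, paths, and file size are hypothetical:

    -- GGSCI or Admin Client
    ADD EXTTRAIL ./dirdat/aa, EXTRACT exta, MEGABYTES 500
    ADD RMTTRAIL ./dirdat/ab, EXTRACT pmpa, MEGABYTES 500

    -- GLOBALS (optional): use six-digit trail sequence numbers instead of the default nine
    TRAIL_SEQLEN_6D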

Converting Existing Trails to 9 Digit Sequence Numbers

You can convert the checkpoint record of a named Extract group from 6-digit to 9-digit trail sequence numbers. To upgrade, stop your Extract gracefully, and then use the convchk native command as follows:

convchk extract trail seqlen_9d

Then start your Extract.

You can downgrade from a 9-digit to a 6-digit trail with the same process, using this convchk command:

convchk extract trail seqlen_6d

Note:

Extract Files: You can configure Oracle GoldenGate to store extracted data in an extract file instead of a trail. The extract file can be a single file, or it can be configured to roll over into multiple files in anticipation of limitations on file size that are imposed by the operating system. It is similar to a trail, except that checkpoints are not recorded. The file or files are created automatically during the run. The same versioning features that apply to trails also apply to extract files.

What is a Replicat?

Replicat is a process that delivers data to a target database. It reads the trail file on the target database, reconstructs the DML or DDL operations, and applies them to the target database.

The Replicat process uses dynamic SQL to compile a SQL statement once and then execute it many times with different bind variables.

For the two common use cases of Oracle GoldenGate, the function of the Replicat process is as follows:
  • Initial Loads: When you set up Oracle GoldenGate for initial loads, the Replicat process applies a static data copy to target objects or routes the data to a high-speed bulk-load utility.

  • Change Synchronization: When you set up Oracle GoldenGate to keep the target database synchronized with the source database, the Replicat process applies the source operations to the target objects using a native database interface or ODBC, depending on the database type.

You can configure multiple Replicat processes with one or more Extract processes and data pumps in parallel to increase throughput. To preserve data integrity, each set of processes handles a different set of objects. To differentiate among Replicat processes, you assign each one a group name.

Instead of using multiple Replicat processes, you can configure a single Replicat process to run in parallel, coordinated, or integrated mode.

  • Parallel mode is supported for all databases when using the non-integrated option. Parallel Replicat only supports replicating data from trails with full metadata, which requires the classic trail format. Like integrated Replicat, it takes into account dependencies between transactions. The dependency computation and the parallelism of the mapping and apply are performed outside the database, so they can be off-loaded to another server, and transaction integrity is maintained. In addition, parallel Replicat supports parallel apply of large transactions by splitting a large transaction into chunks and applying the chunks in parallel. See About Parallel Replicat.

  • Coordinated mode is supported on all databases that Oracle GoldenGate supports. In coordinated mode, the Replicat process is threaded. One coordinator thread spawns and coordinates one or more threads that execute replicated SQL operations in parallel. A coordinated Replicat process uses one parameter file and is monitored and managed as one unit. See About Coordinated Replicat Mode for more information.

  • Integrated mode is supported for Oracle Database releases 11.2.0.4 or later. In integrated mode, the Replicat process leverages the apply processing functionality that is available within the Oracle Database. Within a single Replicat configuration, multiple inbound server child processes known as apply servers apply transactions in parallel while preserving the original transaction atomicity. See About Integrated Replicat for more information about integrated mode.

You can delay Replicat so that it waits a specific amount of time before applying the replicated operations to the target database. A delay may be desirable, for example, to prevent the propagation of errant SQL, to control data arrival across different time zones, or to allow time for other planned events to occur. The length of the delay is controlled by the DEFERAPPLYINTERVAL parameter.