1 Introducing Oracle GoldenGate for Big Data

Learn about Oracle GoldenGate for Big Data concepts and features, including how to set up and configure the environment.

The Oracle GoldenGate for Big Data integrations run as pluggable functionality within the Oracle GoldenGate Java Delivery framework, also referred to as the Java Adapters framework. This functionality extends the Java Delivery functionality. Oracle recommends that you review the Java Delivery description in Oracle GoldenGate Java Delivery.

Topics:

1.1 Understanding What’s Supported

Oracle GoldenGate for Big Data supports specific configurations: the handlers are compatible with clearly defined software versions, and there are additional support topics to consider. This section provides the relevant support information.

Topics:

1.1.1 Verifying Certification and System Requirements

Make sure that you install your product on a supported hardware or software configuration. For more information, see the certification document for your release on the Oracle Fusion Middleware Supported System Configurations page.

Oracle has tested and verified the performance of your product on all certified systems and environments; whenever new certifications occur, they are added to the proper certification document right away. New certifications can occur at any time, and for this reason the certification documents are kept outside of the documentation libraries on the Oracle Technology Network.

1.1.2 What are the Additional Support Considerations?

This section describes additional support considerations for the Oracle GoldenGate for Big Data Handlers.

Pluggable Formatters—Support

The handlers support the Pluggable Formatters as described in Using the Pluggable Formatters as follows:

  • The HDFS Handler supports all of the pluggable formatters.

  • Pluggable formatters are not applicable to the HBase Handler. Data is streamed to HBase using the proprietary HBase client interface.

  • The Flume Handler supports all of the pluggable formatters described in Using the Pluggable Formatters.

  • The Kafka Handler supports all of the pluggable formatters described in Using the Pluggable Formatters.

  • The Kafka Connect Handler does not support pluggable formatters. You can convert data to JSON or Avro using Kafka Connect data converters (a converter sketch follows this list).

  • The Kinesis Streams Handler supports all of the pluggable formatters described in Using the Pluggable Formatters.

  • The Cassandra, MongoDB, and JDBC Handlers do not use a pluggable formatter.
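
The Kafka Connect data converters mentioned above are configured through converter properties. The following is a minimal sketch using the JSON converter classes that ship with Apache Kafka; the schemas.enable settings shown are one common choice, not a requirement:

key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false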

Avro Formatter—Improved Support for Binary Source Data

In previous releases, the Avro Formatter did not support the Avro bytes data type. Binary data was instead converted to Base64 and persisted in Avro messages as a field with a string data type. This required an additional conversion step to convert the data from Base64 back to binary.

The Avro Formatter can now identify binary source fields that map to an Avro bytes field, and the original byte stream from the source trail file is propagated to the corresponding Avro messages without conversion to Base64.

Avro Formatter—Generic Wrapper

The schema_hash field was changed to the schema_fingerprint field. The schema_fingerprint is a long and is generated using the parsingFingerprint64(Schema s) method on the org.apache.avro.SchemaNormalization class. This identifier provides better traceability from the Generic Wrapper Message back to the Avro schema that is used to generate the Avro payload message contained in the Generic Wrapper Message.
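
For reference, the schema fingerprint can be reproduced outside of Oracle GoldenGate with a few lines of Java, for example to correlate a Generic Wrapper Message with a schema on file. This is a minimal sketch, and the record schema shown is a placeholder:

import org.apache.avro.Schema;
import org.apache.avro.SchemaNormalization;

public class FingerprintCheck {
    public static void main(String[] args) {
        // Placeholder schema; in practice, parse the schema generated by the Avro Formatter.
        String schemaJson = "{\"type\":\"record\",\"name\":\"Example\","
                + "\"fields\":[{\"name\":\"id\",\"type\":\"long\"}]}";
        Schema schema = new Schema.Parser().parse(schemaJson);
        // The same method used to populate the schema_fingerprint field.
        long fingerprint = SchemaNormalization.parsingFingerprint64(schema);
        System.out.println("schema_fingerprint: " + fingerprint);
    }
}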

JSON Formatter—Row Modeled Data

The JSON formatter supports row modeled data in addition to operation modeled data. Row modeled data includes the after image data for insert operations, the after image data for update operations, the before image data for delete operations, and special handling for primary key updates.

Java Delivery Using Extract

Java Delivery using Extract was deprecated and is not supported in this release. Java Delivery is supported only using the Replicat process. Replicat provides better performance, better support for checkpointing, and better control of transaction grouping.

Kafka Handler—Versions

Support for Kafka versions 0.8.2.2, 0.8.2.1, and 0.8.2.0 was discontinued. This allowed the implementation of the flush call on the Kafka producer, which provides better support for flow control and checkpointing.

HDFS Handler—File Creation

A new feature was added to the HDFS Handler so that you can use Extract, Load, Transform (ELT). The new gg.handler.name.openNextFileAtRoll=true property was added to create new files immediately when the previous file is closed. The new file appears in the HDFS directory immediately after the previous file stream is closed.

This feature does not work when writing HDFS files in Avro Object Container File (OCF) format or sequence file format.
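
For example, assuming a handler instance named hdfs, the property is set in the Java Adapter properties file as follows:

gg.handler.hdfs.openNextFileAtRoll=true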

MongoDB Handler—Support
  • The handler can only replicate unique rows from the source table. If a source table has no primary key defined and has duplicate rows, replicating the duplicate rows to the MongoDB target results in a duplicate key error and the Replicat process abends.

  • Missing updates and deletes are not detected, so they are ignored.

  • Untested with sharded collections.

  • Only date and time data types with millisecond precision are supported. Values from a trail with microsecond or nanosecond precision are truncated to millisecond precision.

  • The datetime data type with timezone in the trail is not supported.

  • MongoDB enforces a maximum BSON document size of 16 MB. If the trail record size exceeds this limit, the handler cannot replicate the record.

  • No DDL propagation.

  • No truncate operation.

JDBC Handler—Support
  • The JDBC handler uses the generic JDBC API, which means any target database with a JDBC driver implementation should be able to use this handler. Many databases support the JDBC API, and Oracle cannot certify the JDBC Handler for every target. Oracle has certified the JDBC Handler for the following RDBMS targets:

    • Oracle
    • MySQL
    • Netezza
    • Redshift
    • Greenplum
  • The handler supports Replicat using the REPERROR and HANDLECOLLISIONS parameters; see Reference for Oracle GoldenGate.

  • The database metadata retrieved through the Redshift JDBC driver has known constraints; see Release Notes for Oracle GoldenGate for Big Data.

    Redshift target table names in the Replicat parameter file must be in lower case and double quoted. For example:

    MAP SourceSchema.SourceTable, TARGET "public"."targetable";
  • DDL operations are ignored by default and are logged with a WARN level.

  • Coordinated Replicat is a multithreaded process that applies transactions in parallel instead of serially. Each thread handles all of the filtering, mapping, conversion, SQL construction, and error handling for its assigned workload. A coordinator thread coordinates transactions across threads to account for dependencies. It ensures that DML is applied in a synchronized manner, preventing certain DMLs from occurring on the same object at the same time due to row locking, block locking, or table locking issues based on database-specific rules. If there are database locking issues, then Coordinated Replicat performance can be extremely slow or can pause.

Delimited Formatter—Limitation

Handlers configured to generate delimited formatter output allow only a single-character field delimiter. If your delimiter field length is greater than one character, then the handler displays an error message similar to the following and Replicat abends.

oracle.goldengate.util.ConfigException: Delimiter length cannot be more than one character. Found delimiter [||]

DDL Event Handling

Only the TRUNCATE TABLE DDL statement is supported. All other DDL statements are ignored.

You can use the TRUNCATE statements in one of these ways:

  • In a DDL statement: TRUNCATE TABLE, ALTER TABLE TRUNCATE PARTITION, and other DDL TRUNCATE statements. This uses the DDL parameter.

  • Standalone TRUNCATE support, which handles only TRUNCATE TABLE. This uses the GETTRUNCATES parameter. An illustrative parameter sketch follows this list.
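
As an illustrative sketch, a Replicat parameter file enables one approach or the other; typically only one of the following lines is used:

DDL INCLUDE ALL
-- Or, for standalone TRUNCATE TABLE support:
GETTRUNCATES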

1.2 Setting Up Oracle GoldenGate for Big Data

This section describes the various tasks that you need to perform to set up Oracle GoldenGate for Big Data integrations with Big Data targets.

Topics:

1.2.1 About Oracle GoldenGate Properties Files

There are two Oracle GoldenGate properties files required to run the Oracle GoldenGate Java Delivery user exit (alternatively called the Oracle GoldenGate Java Adapter). The Oracle GoldenGate Java Delivery hosts Java integrations, including the Big Data integrations. A Replicat properties file is required to run the process. The required naming convention for the Replicat file name is process_name.prm. The exit syntax in the Replicat properties file provides the name and location of the Java Adapter properties file. The Java Adapter properties file contains the configuration properties for the Java Adapter, including the GoldenGate for Big Data integrations. Both the Replicat and Java Adapter properties files are required to run Oracle GoldenGate for Big Data integrations.

Alternatively, the Java Adapter properties file can be resolved using the default naming convention, process_name.properties. If you use the default naming for the Java Adapter properties file, then its name can be omitted from the Replicat properties file.

Samples of the properties files for Oracle GoldenGate for Big Data integrations can be found in the subdirectories of the following directory:

GoldenGate_install_dir/AdapterExamples/big-data

1.2.2 Setting Up the Java Runtime Environment

The Oracle GoldenGate for Big Data integrations create an instance of the Java virtual machine at runtime. Oracle GoldenGate for Big Data requires that you install Oracle Java 8 Java Runtime Environment (JRE) at a minimum.

Oracle recommends that you set the JAVA_HOME environment variable to point to the Java 8 installation directory. Additionally, the Java Delivery process needs to load the libjvm.so and libjsig.so Java shared libraries, which are installed as part of the JRE. The location of these shared libraries must be resolved, and the appropriate environment variable (LD_LIBRARY_PATH, PATH, or LIBPATH, depending on the platform) must be set so that the libraries can be loaded at runtime.
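
For example, on Linux x64 with Java 8 you might set the following environment variables before starting the Oracle GoldenGate processes. The JRE path is hypothetical, and the directory containing libjvm.so varies by platform and JVM:

export JAVA_HOME=/usr/java/jdk1.8.0_191
export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/amd64/server:$LD_LIBRARY_PATH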

1.2.3 Configuring Java Virtual Machine Memory

One of the difficulties of tuning Oracle GoldenGate for Big Data is deciding how much Java virtual machine (JVM) heap memory to allocate to the Replicat process hosting the Java Adapter. The JVM memory must be configured before starting the application; otherwise, the default Java heap sizing is used. Sizing the JVM heap correctly is important because a heap that is too small can cause runtime issues:

  • A Java Out of Memory exception, which causes the Extract or Replicat process to abend.

  • Increased frequency of Java garbage collections, which degrades performance. Java garbage collection de-allocates all unreferenced Java objects, reclaiming the heap memory for reuse.

Alternatively, too much heap memory is inefficient. The JVM reserves the maximum heap memory (-Xmx) when the JVM is launched. This reserved memory is generally not available to other applications even if the JVM is not using all of it. You can set the JVM memory with these two parameters:

  • -Xmx — The maximum JVM heap size. This amount gets reserved.

  • -Xms — The initial JVM heap size. Also controls the sizing of additional allocations.

The -Xmx and -Xms properties are set in the Java Adapter properties file as follows:

javawriter.bootoptions=-Xmx512m -Xms32m -Djava.class.path=ggjava/ggjava.jar

There are no rules or equations for calculating the values of the maximum and initial JVM heap sizes. Java heap usage is variable and depends on a number of factors, many of which vary widely at runtime. The Oracle GoldenGate Java Adapter log file provides metrics on the Java heap when the status call is invoked. The information appears in the Java Adapter log4j log file similar to the following:

INFO 2017-12-21 10:02:02,037 [pool-1-thread-1] Memory at Status : Max: 455.00 MB, Total: 58.00 MB, Free: 47.98 MB, Used: 10.02 MB

You can interpret these values as follows:

  • Max – The value of heap memory reserved (typically the -Xmx setting reduced by approximately 10% due to overhead).

  • Total – The amount currently allocated (typically a multiple of the -Xms setting reduced by approximately 10% due to overhead).

  • Free – The heap memory currently allocated, but free to be used to allocate Java objects.

  • Used – The heap memory currently allocated to Java objects.

You can control the frequency that the status is logged using the gg.report.time=30sec configuration parameter in the Java Adapter properties file.

You should execute test runs of the process with actual data and review the heap usage logging. Analyze your peak memory usage, and then allocate 25% - 30% more memory to accommodate infrequent spikes in memory use and to keep the memory allocation and garbage collection processes efficient.
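
As a hypothetical sizing example, if the logged peak Used value is approximately 400 MB, then allocating 25% - 30% more suggests a maximum heap of about 512 MB, which could be configured as follows:

javawriter.bootoptions=-Xmx512m -Xms512m -Djava.class.path=ggjava/ggjava.jar

Setting -Xms equal to -Xmx avoids incremental heap growth at runtime; this is a common tuning choice, not a requirement.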

The following items can increase the heap memory required by the Replicat process:

  • Operating in tx mode (For example, gg.handler.name.mode=tx.)

  • Setting the Replicat property GROUPTRANSOPS to a large value

  • Wide tables

  • CLOB or BLOB data in the source

  • Very large transactions in the source data

1.2.4 Grouping Transactions

The principal way to improve performance in Oracle GoldenGate for Big Data integrations is using transaction grouping. In transaction grouping, the operations of multiple transactions are grouped together in a single larger transaction. The application of a larger grouped transaction is typically much more efficient than the application of individual smaller transactions. Transaction grouping is possible with the Replicat process discussed in Running with Replicat.

1.3 Configuring Oracle GoldenGate for Big Data

This section describes how to configure Oracle GoldenGate for Big Data Handlers.

Topics:

1.3.1 Running with Replicat

This section explains how to run the Java Adapter with the Oracle GoldenGate Replicat process. It includes the following sections:

Topics:

1.3.1.1 Configuring Replicat

The following is an example of how you can configure a Replicat process properties file for use with the Java Adapter:

REPLICAT hdfs
TARGETDB LIBFILE libggjava.so SET property=dirprm/hdfs.properties 
--SOURCEDEFS ./dirdef/dbo.def 
DDL INCLUDE ALL
GROUPTRANSOPS 1000
MAPEXCLUDE dbo.excludetable
MAP dbo.*, TARGET dbo.*;

The following is an explanation of these Replicat configuration entries:

REPLICAT hdfs - The name of the Replicat process.

TARGETDB LIBFILE libggjava.so SET property=dirprm/hdfs.properties - Sets the target database as the user exit libggjava.so and sets the Java Adapter properties file to dirprm/hdfs.properties.

--SOURCEDEFS ./dirdef/dbo.def - Sets a source database definitions file. It is commented out because Oracle GoldenGate trail files provide metadata in trail.

GROUPTRANSOPS 1000 - Groups 1000 transactions from the source trail files into a single target transaction. This is the default and improves the performance of Big Data integrations.

MAPEXCLUDE dbo.excludetable - Sets the tables to exclude.

MAP dbo.*, TARGET dbo.*; - Sets the mapping of input to output tables.

1.3.1.2 Adding the Replicat Process

The commands to add and start the Replicat process in GGSCI are the following:

ADD REPLICAT hdfs, EXTTRAIL ./dirdat/gg
START hdfs
1.3.1.3 Replicat Grouping

The Replicat process provides the Replicat configuration property, GROUPTRANSOPS, to control transaction grouping. By default, the Replicat process groups 1000 source transactions into a single target transaction. To turn off transaction grouping, set the GROUPTRANSOPS Replicat property to 1.
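
For example, to turn off grouping in the Replicat parameter file:

GROUPTRANSOPS 1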

1.3.1.4 About Replicat Checkpointing

In addition to the Replicat checkpoint file (.cpr), an additional checkpoint file, dirchk/group.cpj, is created that contains information similar to CHECKPOINTTABLE in Replicat for the database.

1.3.1.5 About Initial Load Support

Replicat can read trail files produced by both the online capture and initial load processes that write to a set of trail files. In addition, Replicat can be configured to support the delivery of the special run initial load process using the RMTTASK specification in the Extract parameter file. For more details about configuring the direct load, see Loading Data with an Oracle GoldenGate Direct Load.

Note:

The SOURCEDB or DBLOGIN parameter specifications vary depending on your source database.
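
The following is an illustrative sketch of an Extract parameter file for a direct load task. All process, host, and table names are hypothetical, and the database login specification depends on your source database:

EXTRACT initload
USERIDALIAS ggalias
RMTHOST targethost, MGRPORT 7809
RMTTASK REPLICAT, GROUP hdfs
TABLE dbo.*;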

1.3.1.6 About the Unsupported Replicat Features

The following Replicat features are not supported in this release:

  • BATCHSQL

  • SQLEXEC

  • Stored procedure

  • Conflict resolution and detection (CDR)

1.3.1.7 How the Mapping Functionality Works

The Oracle GoldenGate Replicat process supports mapping functionality to custom target schemas. You must use the Metadata Provider functionality to define a target schema or schemas, and then use the standard Replicat mapping syntax in the Replicat configuration file to define the mapping. For more information about the Replicat mapping syntax in the Replicat configuration file, see Mapping and Manipulating Data.
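
As an illustrative sketch, the following pairs the Avro Metadata Provider with a Replicat MAP statement; the schema file path and table names are hypothetical:

gg.mdp.type=avro
gg.mdp.schemaFilesPath=/home/oracle/ggbd/avroschemas

Then, in the Replicat parameter file:

MAP dbo.customer, TARGET targetschema.customer;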

1.3.2 Overview of Logging

Logging is essential to troubleshooting Oracle GoldenGate for Big Data integrations with Big Data targets. This section covers how the Oracle GoldenGate for Big Data integrations log, and the best practices for logging.

Topics:

1.3.2.1 About Replicat Process Logging

Oracle GoldenGate for Big Data integrations leverage the Java Delivery functionality described in Delivering Java Messages. In this setup, an Oracle GoldenGate Replicat process loads a user exit shared library. This shared library then loads a Java virtual machine to interface with targets providing a Java interface. The flow of data is as follows:

Replicat Process —> User Exit —> Java Layer

It is important that all layers log correctly so that users can review the logs to troubleshoot new installations and integrations. Additionally, if you have a problem that requires contacting Oracle Support, the log files are a key piece of information to be provided to Oracle Support so that the problem can be efficiently resolved.

A running Replicat process creates or appends to log files in the GoldenGate_Home/dirrpt directory that adhere to the following naming convention: process_name.rpt. If a problem is encountered when deploying a new Oracle GoldenGate process, this is likely the first log file to examine for problems. The Java layer is critical for integrations with Big Data applications.

1.3.2.2 About Java Layer Logging

The Oracle GoldenGate for Big Data product provides flexibility for logging from the Java layer. The recommended best practice is to use Log4j logging to log from the Java layer. Enabling simple Log4j logging requires the setting of two configuration values in the Java Adapters configuration file.

gg.log=log4j
gg.log.level=INFO

These gg.log settings result in a Log4j file being created in the GoldenGate_Home/dirrpt directory that adheres to this naming convention: process_name_loglevel_log4j.log. The supported Log4j log levels are in the following list, in order of increasing logging granularity.

  • OFF

  • FATAL

  • ERROR

  • WARN

  • INFO

  • DEBUG

  • TRACE

Selection of a logging level includes all of the coarser logging levels as well (that is, selection of WARN means that log messages of FATAL, ERROR, and WARN are written to the log file). Log4j logging can additionally be controlled by separate Log4j properties files, which can be enabled by editing the bootoptions property in the Java Adapter properties file. These three example Log4j properties files are included with the installation and are on the classpath:

log4j-default.properties
log4j-debug.properties
log4j-trace.properties

You can modify the bootoptions property to use any of these files as follows:

javawriter.bootoptions=-Xmx512m -Xms64m -Djava.class.path=.:ggjava/ggjava.jar -Dlog4j.configuration=samplelog4j.properties

You can use your own customized Log4j properties file to control logging. The customized Log4j properties file must be available in the Java classpath so that it can be located and loaded by the JVM. The following is the content of a sample custom Log4j properties file:

# Root logger option 
log4j.rootLogger=INFO, file 
 
# Direct log messages to a log file 
log4j.appender.file=org.apache.log4j.RollingFileAppender 
 
log4j.appender.file.File=sample.log 
log4j.appender.file.MaxFileSize=1GB 
log4j.appender.file.MaxBackupIndex=10 
log4j.appender.file.layout=org.apache.log4j.PatternLayout 
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n

There are two important requirements when you use a custom Log4j properties file. First, the path to the custom Log4j properties file must be included in the javawriter.bootoptions property; logging initializes immediately when the JVM is initialized, whereas the contents of the gg.classpath property are appended to the class loader only after logging is initialized. Second, the classpath entry used to load the properties file must be the directory containing the file, without wildcards appended.
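
For example, if a custom file named customlog4j.properties resides in the hypothetical directory /home/oracle/ggbd/config, both requirements can be satisfied as follows:

javawriter.bootoptions=-Xmx512m -Xms64m -Djava.class.path=.:ggjava/ggjava.jar:/home/oracle/ggbd/config -Dlog4j.configuration=customlog4j.properties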

1.3.3 About Schema Evolution and Metadata Change Events

The metadata in trail feature allows seamless runtime handling of metadata change events by Oracle GoldenGate for Big Data, including schema evolution and schema propagation to Big Data target applications. NO_OBJECTDEFS is a sub-parameter of the Extract and Replicat EXTTRAIL and RMTTRAIL parameters that lets you suppress the metadata in trail feature and revert to using a static metadata definition.

The Oracle GoldenGate for Big Data Handlers and Formatters provide functionality to take action when a metadata change event is encountered. The ability to take action in the case of metadata change events depends on the metadata change events being available in the source trail file. Oracle GoldenGate supports metadata in trail and the propagation of DDL data from a source Oracle Database. If the source trail file does not have metadata in trail and DDL data (metadata change events), then it is not possible for Oracle GoldenGate for Big Data to provide any metadata change event handling.

1.3.4 About Configuration Property CDATA[] Wrapping

The GoldenGate for Big Data Handlers and Formatters support the configuration of many parameters in the Java properties file whose values may be interpreted as white space. The configuration handling of the Java Adapter trims white space from configuration values in the Java configuration file. This trimming may be desirable for some configuration values and undesirable for others. Alternatively, you can wrap white space values inside of special syntax to preserve the white space for selected configuration variables. GoldenGate for Big Data borrows the XML syntax of CDATA[] to preserve white space: values that would otherwise be trimmed as white space can be wrapped inside of CDATA[].

The following is an example attempting to set a new-line delimiter for the Delimited Text Formatter:

gg.handler.{name}.format.lineDelimiter=\n

This configuration is not successful. The new-line character is interpreted as white space and is trimmed from the configuration value. Therefore, the gg.handler setting effectively results in the line delimiter being set to an empty string.

To preserve the configuration of the new-line character, simply wrap the character in the CDATA[] wrapper as follows:

gg.handler.{name}.format.lineDelimiter=CDATA[\n]

Configuring the property with the CDATA[] wrapping preserves the white space and the line delimiter will then be a new-line character.

1.3.5 Using Regular Expression Search and Replace

You can perform more powerful search and replace operations on both schema data (catalog names, schema names, table names, and column names) and column value data, which are separately configured. Regular expressions (regex) are characters that customize a search string through pattern matching. You can match a string against a pattern or extract parts of the match. Oracle GoldenGate for Big Data uses the standard Oracle Java regular expressions package, java.util.regex; see "Regular Expressions" in The Single UNIX Specification, Version 4.

Topics:

1.3.5.1 Using Schema Data Replace

You can replace schema data using the gg.schemareplaceregex and gg.schemareplacestring properties. Use gg.schemareplaceregex to set a regular expression, and then use it to search catalog names, schema names, table names, and column names for corresponding matches. Matches are then replaced with the content of the gg.schemareplacestring value. The default value of gg.schemareplacestring is an empty string or "".

For example, some system table names start with a dollar sign like $mytable. You may want to replicate these tables even though most Big Data targets do not allow dollar signs in table names. To remove the dollar sign, you could configure the following replace strings:

gg.schemareplaceregex=[$] 
gg.schemareplacestring= 

The resulting searched and replaced table name is mytable. These properties also support CDATA[] wrapping to preserve whitespace in configuration values. The equivalent of the preceding example using CDATA[] wrapping is:

gg.schemareplaceregex=CDATA[[$]]
gg.schemareplacestring=CDATA[]

The schema search and replace functionality supports multiple search regular expressions and replacement strings using the following configuration syntax:

gg.schemareplaceregex=some_regex
gg.schemareplacestring=some_value
gg.schemareplaceregex1=some_regex
gg.schemareplacestring1=some_value
gg.schemareplaceregex2=some_regex
gg.schemareplacestring2=some_value

1.3.5.2 Using Content Data Replace

You can replace content data using the gg.contentreplaceregex and gg.contentreplacestring properties to search the column values using the configured regular expression and replace matches with the replacement string. For example, this is useful to replace line feed characters in column values. If the delimited text formatter is used then line feeds occurring in the data will be incorrectly interpreted as line delimiters by analytic tools.

You can configure any number of content replacement regex search values. The regex searches and replacements are done in the order of configuration. Configured values must follow this ordering:

gg.contentreplaceregex=some_regex 
gg.contentreplacestring=some_value 
gg.contentreplaceregex1=some_regex 
gg.contentreplacestring1=some_value 
gg.contentreplaceregex2=some_regex 
gg.contentreplacestring2=some_value

Configuring a subscript of 3 without a subscript of 2 would cause the subscript 3 configuration to be ignored.

Attention:

 Regular expression searches and replacements require processing and can reduce the performance of the Oracle GoldenGate for Big Data process.

To replace line feeds with a blank character you could use the following property configurations:

gg.contentreplaceregex=[\n] 
gg.contentreplacestring=CDATA[ ]

This changes the column value from:

this is 
me

to:

this is me

Both values support CDATA[] wrapping. The second value must be wrapped in a CDATA[] wrapper because a single blank space will be interpreted as whitespace and trimmed by the Oracle GoldenGate for Big Data configuration layer. In addition, you can configure multiple search and replace strings. For example, you may also want to trim leading and trailing white space out of column values in addition to trimming line feeds:

gg.contentreplaceregex1=^\\s+|\\s+$ 
gg.contentreplacestring1=CDATA[]

1.3.6 Scaling Oracle GoldenGate for Big Data Delivery

Oracle GoldenGate for Big Data supports scaling delivery either by running multiple Replicat processes against the same source trail files or by using Coordinated Delivery to instantiate multiple Java Adapter instances inside a single Replicat process to improve throughput.

There are some cases where the throughput to Oracle GoldenGate for Big Data integration targets is not sufficient to meet your service level agreements even after you have tuned your Handler for maximum performance. When this occurs, you can configure parallel processing and delivery to your targets using one of the following methods:

  • Multiple Replicat processes can be configured to read data from the same source trail files. Each of these Replicat processes is configured to process a subset of the data in the source trail files so that all of the processes collectively process the source trail files in their entirety. There is no coordination between the separate Replicat processes using this solution.

  • Oracle GoldenGate Coordinated Delivery can be used to parallelize processing the data from the source trail files within a single Replicat process. This solution involves breaking the trail files down into logical subsets, each of which is processed by a different delivery thread. For more information about Coordinated Delivery, see https://blogs.oracle.com/dataintegration/entry/goldengate_12c_coordinated_replicat. A configuration sketch follows this list.
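
The following is an illustrative Coordinated Delivery sketch with hypothetical names. The Replicat is added with a maximum of five threads, and the MAP statement distributes the workload across threads one through five:

ADD REPLICAT hdfs, COORDINATED MAXTHREADS 5, EXTTRAIL ./dirdat/gg

Then, in the Replicat parameter file:

MAP dbo.*, TARGET dbo.*, THREADRANGE(1-5);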

With either method, you can split the data into parallel processing for improved throughput. Oracle recommends breaking the data down in one of the following two ways:

  • Splitting Source Data By Source Table – Data is divided into subsections by source table. For example, Replicat process 1 might handle source tables table1 and table2, while Replicat process 2 might handle data for source tables table3 and table4. Data is split by source table, and the individual table data is not subdivided.

  • Splitting Source Table Data into Sub Streams – Data from source tables is split. For example, Replicat process 1 might handle half of the range of data from source table1, while Replicat process 2 might handle the other half of the data from source table1.

Additional limitations:

  • Parallel apply is not supported.

  • The BATCHSQL parameter is not supported.

Example 1-1 Scaling Support for the Oracle GoldenGate for Big Data Handlers

The following describes, for each handler, support for splitting source data by source table and for splitting source table data into sub streams.

  • Cassandra: Splitting by source table is supported. Splitting into sub streams is supported when the required target tables in Cassandra are pre-created and metadata change events do not occur.

  • Elastic Search: Splitting by source table is supported. Splitting into sub streams is supported.

  • Flume: Splitting by source table is supported. Splitting into sub streams is supported for formats that support schema propagation, such as Avro. This is less desirable due to multiple instances feeding the same schema information to the target.

  • HBase: Splitting by source table is supported when all required HBase namespaces are pre-created in HBase. Splitting into sub streams is supported when all required HBase namespaces and target tables are pre-created in HBase and the source data does not contain any truncate operations. Schema evolution is not an issue because HBase tables have no schema definitions, so a source metadata change does not require any schema change in HBase.

  • HDFS: Splitting by source table is supported. Splitting into sub streams is supported with some restrictions. You must select a naming convention for generated HDFS files where the file names do not collide; colliding HDFS file names result in a Replicat abend. When using coordinated apply, it is suggested that you configure ${groupName} as part of the configuration for the gg.handler.name.fileNameMappingTemplate property. The ${groupName} template resolves to the Replicat name concatenated with the Replicat thread number, which provides unique naming per Replicat thread. Schema propagation to HDFS and Hive integration is not currently supported.

  • JDBC: Splitting by source table is supported. Splitting into sub streams is supported.

  • Kafka: Splitting by source table is supported. Splitting into sub streams is supported for formats that support schema propagation, such as Avro. This is less desirable due to multiple instances feeding the same schema information to the target.

  • Kafka Connect: Splitting by source table is supported. Splitting into sub streams is supported.

  • Kinesis Streams: Splitting by source table is supported. Splitting into sub streams is supported.

  • MongoDB: Splitting by source table is supported. Splitting into sub streams is supported.

1.3.7 Using Identities in Oracle GoldenGate Credential Store

The Oracle GoldenGate credential store manages user IDs and their encrypted passwords (together known as credentials) that are used by Oracle GoldenGate processes to interact with the local database. The credential store eliminates the need to specify user names and clear-text passwords in the Oracle GoldenGate parameter files. An optional alias can be used in the parameter file instead of the user ID to map to a userid and password pair in the credential store. The credential store is implemented as an auto login wallet within the Oracle Credential Store Framework (CSF). The use of an LDAP directory is not supported for the Oracle GoldenGate credential store. The auto login wallet supports automated restarts of Oracle GoldenGate processes without requiring human intervention to supply the necessary passwords.

In Oracle GoldenGate for Big Data, you specify the alias and domain in the property file, not the actual user ID or password. User credentials are maintained in secure wallet storage.

Topics:

1.3.7.1 Creating a Credential Store

You can create a credential store for your Big Data environment.

Run the GGSCI ADD CREDENTIALSTORE command to create a file called cwallet.sso in the dircrd/ subdirectory of your Oracle GoldenGate installation directory (the default).

You can change the location of the credential store (the cwallet.sso file) by specifying the desired location with the CREDENTIALSTORELOCATION parameter in the GLOBALS file.
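
For example, an illustrative GLOBALS entry with a hypothetical path:

CREDENTIALSTORELOCATION /home/oracle/gg/credentials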

For more information about credential store commands, see Reference for Oracle GoldenGate.

Note:

Only one credential store can be used for each Oracle GoldenGate instance.

1.3.7.2 Adding Users to a Credential Store

After you create a credential store for your Big Data environment, you can add users to the store.

Run the GGSCI ALTER CREDENTIALSTORE ADD USER userid PASSWORD password [ALIAS alias] [DOMAIN domain] command to create each user, where:

  • userid is the user name. Only one instance of a user name can exist in the credential store unless the ALIAS or DOMAIN option is used.

  • password is the user's password. The password is echoed (not obfuscated) when this option is used. If this option is omitted, the command prompts for the password, which is obfuscated as it is typed (recommended because it is more secure).

  • alias is an alias for the user name. The alias substitutes for the credential in parameters and commands where a login credential is required. If the ALIAS option is omitted, the alias defaults to the user name.

For example:

ALTER CREDENTIALSTORE ADD USER scott PASSWORD tiger ALIAS scsm2 DOMAIN ggadapters

For more information about credential store commands, see Reference for Oracle GoldenGate.

1.3.7.3 Configuring Properties to Access the Credential Store

The Oracle GoldenGate Java Adapter properties file requires specific syntax to resolve user name and password entries in the Credential Store at runtime. For resolving a user name the syntax is the following:

ORACLEWALLETUSERNAME[alias domain_name]

For resolving a password the syntax required is the following:

ORACLEWALLETPASSWORD[alias domain_name]

The following example illustrates how to configure a Credential Store entry with an alias of myalias and a domain of mydomain.

Note:

With the HDFS Hive JDBC integration, the user name and password are encrypted.

Oracle Wallet integration only works for configuration properties which contain the string username or password. For example:

gg.handler.hdfs.hiveJdbcUsername=ORACLEWALLETUSERNAME[myalias mydomain] 
gg.handler.hdfs.hiveJdbcPassword=ORACLEWALLETPASSWORD[myalias mydomain]

Consider the user name and password entries as accessible values in the Credential Store. Any configuration property resolved in the Java Adapter layer (not accessed in the C user exit layer) can be resolved from the Credential Store. This allows you more flexibility to be creative in how you protect sensitive configuration entries.