This chapter explains the Flume Handler and includes examples so that you can understand this functionality.
Topics:
The Flume Handler is designed to stream change capture data from an Oracle GoldenGate trail to a Flume source. Apache Flume is an open source application whose primary purpose is streaming data into Big Data applications. The Flume architecture contains three main components: sources, channels, and sinks, which collectively form a pipeline for data. A Flume source publishes data to a Flume channel. A Flume sink retrieves data from a Flume channel and streams it to different targets. A Flume Agent is a container process that owns and manages a source, channel, and sink. A single Flume installation can host many agent processes. The Flume Handler can stream data from a trail file to Avro or Thrift RPC Flume sources.
To run the Flume Handler, a Flume Agent configured with an Avro or Thrift Flume source must be up and running. Oracle GoldenGate can be collocated with Flume or located on a different machine. If it is located on a different machine, the host and port of the Flume source must be reachable over a network connection. For instructions on how to configure and start a Flume Agent process, see the Flume User Guide at
https://flume.apache.org/releases/content/1.6.0/FlumeUserGuide.pdf
Instructions for configuring the Flume Handler components and running the handler are described in the following sections.
You must configure two things in the gg.classpath configuration variable for the Flume Handler to connect to the Flume source and run: the Flume Agent configuration file and the Flume client JARs. The Flume Handler uses the contents of the Flume Agent configuration file to resolve the host, port, and source type for the connection to the Flume source. The Flume client libraries do not ship with Oracle GoldenGate for Big Data. The Flume client library versions must match the version of Flume to which the Flume Handler is connecting. For a listing of the required Flume client JAR files by version, see Flume Handler Client Dependencies.
The Oracle GoldenGate property, gg.classpath, must be set to include the following default locations:
The default location of the core-site.xml file is Flume_Home/conf.
The default location of the Flume client JARs is Flume_Home/lib/*.
The gg.classpath must be configured exactly as shown in the following example. The path to the Flume Agent configuration file should contain only the path, with no wildcard appended. Including the * wildcard in the path to the Flume Agent configuration file makes the file inaccessible. Conversely, the path to the dependency JARs should include the * wildcard character in order to include all of the JAR files in that directory in the associated classpath. Do not use *.jar. The following is an example of a correctly configured gg.classpath variable:
gg.classpath=dirprm/:/var/lib/flume/lib/*
If the Flume Handler and Flume are not collocated, then the Flume Agent configuration file and the Flume client libraries must be copied to the machine hosting the Flume Handler process.
The following are the configurable values for the Flume Handler. These properties are located in the Java Adapter properties file (not in the Replicat properties file).
| Property Name | Property Value | Mandatory | Description |
|---|---|---|---|
| gg.handlerlist | Choice of a handler name (for example, flumehandler) | Yes | List of handlers. Only one is allowed with grouping properties. |
| gg.handler.name.type | flume | Yes | Type of handler to use. |
| gg.handler.name.format | Formatter class or short code | No. Defaults to delimitedtext. | The formatter to be used, for example avro_op as in the sample configuration below. Alternatively, it is possible to write a custom formatter and include the fully qualified class name here. |
| gg.handler.name.RpcClientPropertiesFile | Any choice of filename | No. Defaults to default-flume-rpc.properties. | Either the default default-flume-rpc.properties file or a specified custom RPC client properties file must exist in the classpath. |
| gg.handler.name.mode | op or tx | No. Defaults to op. | Operation mode or transaction mode. Java Adapter grouping options can be used only in tx mode. |
| gg.handler.name.EventHeaderClass | A custom implementation fully qualified class name | No. Defaults to the built-in event header class. | Class to be used which defines what header properties are to be added to a Flume event. |
| gg.handler.name.EventMapsTo | op or tx | No. Defaults to op. | Defines whether each Flume event represents an operation or a transaction. If the handler mode is op, only op is valid. |
| gg.handler.name.PropagateSchema | true or false | No. Defaults to false. | When set to true, the Flume Handler publishes schema events. |
| gg.handler.name.includeTokens | true or false | No. Defaults to false. | When set to true, token data from the source trail file is included in the output; when set to false, it is excluded. |
gg.handlerlist=flumehandler
gg.handler.flumehandler.type=flume
gg.handler.flumehandler.RpcClientPropertiesFile=custom-flume-rpc.properties
gg.handler.flumehandler.format=avro_op
gg.handler.flumehandler.mode=tx
gg.handler.flumehandler.EventMapsTo=tx
gg.handler.flumehandler.PropagateSchema=true
gg.handler.flumehandler.includeTokens=false
This section explains how operation data from the Oracle GoldenGate trail file is mapped by the Flume Handler into Flume Events based on different configurations. A Flume Event is a unit of data that flows through a Flume agent. The Event flows from source to channel to sink and is represented by an implementation of the Event interface. An Event carries a payload (byte array) that is accompanied by an optional set of headers (string attributes).
The following topics are included:
The configuration for the Flume Handler is the following in the Oracle GoldenGate Java configuration file.
gg.handler.flume_handler_name.mode=op
The data for each individual operation from the Oracle GoldenGate trail file maps into a single Flume Event. Each event is immediately flushed to Flume. Each Flume Event has the following headers:
TABLE_NAME: The table name for the operation.
SCHEMA_NAME: The catalog name (if available) and the schema name of the operation.
SCHEMA_HASH: The hash code of the Avro schema. (Only applicable for the Avro Row and Avro Operation formatters.)
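The operation-to-event mapping described above can be sketched as follows. This is an illustrative model of the documented behavior, not the handler's actual implementation; the operation dictionary shape and the formatter lambda are assumptions made for the example.

```python
# Illustrative sketch: in operation mode, one trail operation becomes one
# Flume-style event (a payload plus string headers), as described above.

def operation_to_event(op, formatter, schema_hash=None):
    """Map a single trail operation to a Flume-style event dict."""
    headers = {
        "TABLE_NAME": op["table"],
        # Catalog name (if available) joined with the schema name.
        "SCHEMA_NAME": ".".join(filter(None, [op.get("catalog"), op["schema"]])),
    }
    if schema_hash is not None:  # only set by the Avro formatters
        headers["SCHEMA_HASH"] = str(schema_hash)
    return {"headers": headers, "body": formatter(op)}

# Hypothetical operation and a trivial formatter for illustration.
op = {"catalog": "dbo", "schema": "HR", "table": "EMP", "after": {"ID": 1}}
event = operation_to_event(op, formatter=lambda o: repr(o["after"]).encode())
```

Each such event would be flushed to the Flume source immediately, one per operation.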
EventMapsTo Operation
The configuration for the Flume Handler is the following in the Oracle GoldenGate Java configuration file.
gg.handler.flume_handler_name.mode=tx
gg.handler.flume_handler_name.EventMapsTo=op
The data for each individual operation from the Oracle GoldenGate trail file maps into a single Flume Event. Events are flushed to Flume at transaction commit. Each Flume Event has the following headers:
TABLE_NAME: The table name for the operation.
SCHEMA_NAME: The catalog name (if available) and the schema name of the operation.
SCHEMA_HASH: The hash code of the Avro schema. (Only applicable for the Avro Row and Avro Operation formatters.)
It is suggested to use this mode when formatting data as Avro or delimited text. It is important to understand that configuring Replicat batching functionality increases the number of operations processed in a transaction.
EventMapsTo Transaction
The configuration for the Flume Handler is the following in the Oracle GoldenGate Java configuration file.
gg.handler.flume_handler_name.mode=tx
gg.handler.flume_handler_name.EventMapsTo=tx
The data for all operations for a transaction from the source trail file are concatenated and mapped into a single Flume Event. The event is flushed at transaction commit. Each Flume Event has the following headers:
GG_TRANID: The transaction ID of the transaction.
OP_COUNT: The number of operations contained in this Flume payload event.
It is suggested to use this mode only when using self-describing formats such as JSON or XML. It is important to understand that configuring Replicat batching functionality increases the number of operations processed in a transaction.
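The transaction-mode mapping can be sketched as follows. This is an illustrative model of the documented behavior, not the handler's actual implementation; the operation list and byte formatter are assumptions made for the example.

```python
# Illustrative sketch: in transaction mode with EventMapsTo=tx, all
# operations in a transaction are concatenated into one Flume-style event,
# flushed at commit, with GG_TRANID and OP_COUNT headers as described above.

def transaction_to_event(tran_id, operations, formatter):
    """Concatenate a transaction's operations into a single event dict."""
    body = b"".join(formatter(op) for op in operations)
    headers = {
        "GG_TRANID": str(tran_id),         # transaction ID
        "OP_COUNT": str(len(operations)),  # operations in this payload
    }
    return {"headers": headers, "body": body}

# Hypothetical three-operation transaction for illustration.
ops = [{"table": "EMP", "after": {"ID": i}} for i in range(3)]
event = transaction_to_event("0000123", ops, lambda o: b"%d;" % o["after"]["ID"])
```

Note how Replicat batching would grow the `operations` list, and therefore the size of the single event payload.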
Replicat-based grouping is recommended to improve performance.
Transaction mode with the gg.handler.flume_handler_name.EventMapsTo=tx setting is recommended for best performance.
The maximum heap size of the Flume Handler may affect performance. Too little heap may result in frequent garbage collections by the JVM. Increasing the maximum heap size of the JVM in the Oracle GoldenGate Java properties file may improve performance.
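As an illustration, the JVM heap settings can be raised in the Oracle GoldenGate Java properties file. The jvm.bootoptions property name and the values below are assumptions based on common Oracle GoldenGate for Big Data configurations and may vary by release; verify against your installation's documentation.

```
jvm.bootoptions=-Xmx1024m -Xms32m -Djava.class.path=ggjava/ggjava.jar
```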
The Flume Handler is adaptive to metadata change events. To handle metadata change events, the source trail files must contain metadata. This functionality also depends on the source replicated database and the upstream Oracle GoldenGate capture process to capture and replicate DDL events. This feature is not available for all database implementations in Oracle GoldenGate. See the Oracle GoldenGate installation and configuration guide for the appropriate database to determine whether DDL replication is supported.
Whenever a metadata change occurs at the source, the Flume Handler notifies the associated formatter of the metadata change event. Any cached schema that the formatter holds for that table is deleted. The next time the associated formatter encounters an operation for that table, the schema is regenerated.
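The cache-invalidation behavior just described can be sketched as follows. This is an illustrative model, not the formatter's actual implementation; the table key and metadata shape are assumptions made for the example.

```python
# Illustrative sketch: a metadata change event evicts the cached schema for
# a table, and the schema is regenerated just in time on the next operation
# for that table, as described above.

class SchemaCache:
    def __init__(self, generate):
        self._generate = generate  # builds a schema from table metadata
        self._cache = {}

    def on_metadata_change(self, table):
        self._cache.pop(table, None)  # drop the stale schema, if any

    def schema_for(self, table, metadata):
        if table not in self._cache:  # regenerate just in time
            self._cache[table] = self._generate(metadata)
        return self._cache[table]

cache = SchemaCache(generate=lambda md: tuple(md["columns"]))
s1 = cache.schema_for("HR.EMP", {"columns": ["ID"]})
cache.on_metadata_change("HR.EMP")  # a DDL event adds a column at the source
s2 = cache.schema_for("HR.EMP", {"columns": ["ID", "NAME"]})
```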
This section contains the following sample Flume source configurations:
The following is sample configuration for an Avro Flume source from the Flume Agent configuration file:
client.type = default
hosts = h1
hosts.h1 = host_ip:host_port
batch-size = 100
connect-timeout = 20000
request-timeout = 20000
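The Flume Agent that hosts the matching Avro source is configured separately on the Flume side. As a hedged sketch (the agent name a1, the memory channel, the logger sink, and the bind address are illustrative assumptions, not taken from this guide), a minimal agent definition exposing an Avro source might look like the following; the port must match the host_port used above.

```
a1.sources = r1
a1.channels = c1
a1.sinks = k1
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = host_port
a1.sources.r1.channels = c1
a1.channels.c1.type = memory
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1
```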
This section contains the following advanced features of the Flume Handler that you may choose to implement:
The Flume Handler can propagate schemas to Flume. This is currently only supported for the Avro Row and Operation formatters. To enable this feature, set the following property:
gg.handler.name.PropagateSchema=true
The Avro Row and Operation Formatters generate Avro schemas on a just-in-time basis. Avro schemas are generated the first time an operation for a table is encountered. A metadata change event results in the cached schema reference for a table being cleared, so a new schema is generated the next time an operation is encountered for that table.
When schema propagation is enabled, the Flume Handler propagates schemas as Flume Events when they are encountered.
Default Flume Schema Event headers for Avro include the following information:
SCHEMA_EVENT: true
GENERIC_WRAPPER: true or false
TABLE_NAME: The table name as seen in the trail.
SCHEMA_NAME: The catalog name (if available) and the schema name.
SCHEMA_HASH: The hash code of the Avro schema.
Kerberos authentication for the Oracle GoldenGate for Big Data Flume Handler connection to the Flume source is possible. This feature is only supported in Flume 1.6.0 and later using the Thrift Flume source. It is enabled by changing the configuration of the Flume source in the Flume Agent configuration file.
The following is an example of the Flume source configuration from the Flume Agent configuration file that shows how to enable Kerberos authentication. The Kerberos principal name of the client and the server must be provided. The path to a Kerberos keytab file must be provided so that the password of the client principal can be resolved at runtime. For information on how to administer Kerberos, Kerberos principals and their associated passwords, and the creation of a Kerberos keytab file, see the Kerberos documentation.
client.type = thrift
hosts = h1
hosts.h1 = host_ip:host_port
kerberos = true
client-principal = flumeclient/client.example.org@EXAMPLE.ORG
client-keytab = /tmp/flumeclient.keytab
server-principal = flume/server.example.org@EXAMPLE.ORG
It is possible to configure the Flume Handler so that it will fail over in the event that the primary Flume source becomes unavailable. This feature is currently only supported in Flume 1.6.0 and later using the Avro Flume source. It is enabled with Flume source configuration in the Flume Agent configuration file. The following is sample configuration for enabling fail over functionality:
client.type = default_failover
hosts = h1 h2 h3
hosts.h1 = host_ip1:host_port1
hosts.h2 = host_ip2:host_port2
hosts.h3 = host_ip3:host_port3
max-attempts = 3
batch-size = 100
connect-timeout = 20000
request-timeout = 20000
You can configure the Flume Handler so that produced Flume events are load balanced across multiple Flume sources. It is currently only supported in Flume 1.6.0 and later using the Avro Flume source. This feature is enabled with Flume source configuration in the Flume Agent configuration file. The following is sample configuration for enabling load balancing functionality:
client.type = default_loadbalance
hosts = h1 h2 h3
hosts.h1 = host_ip1:host_port1
hosts.h2 = host_ip2:host_port2
hosts.h3 = host_ip3:host_port3
backoff = false
maxBackoff = 0
host-selector = round_robin
batch-size = 100
connect-timeout = 20000
request-timeout = 20000
This section contains information to help you troubleshoot various issues and includes the following topics:
Issues with the Java classpath are one of the most common problems. The indication of a classpath problem is a ClassNotFoundException in the Oracle GoldenGate Java log4j log file. The Java log4j log file can be used to troubleshoot this issue. Setting the log level to DEBUG causes each of the JARs referenced in the gg.classpath variable to be logged to the log file. This way, you can make sure that all of the required dependency JARs are resolved. For more information, see Classpath Configuration.
The Flume Handler may write to the Flume source faster than the Flume sink can dispatch messages in some situations. When this happens, the Flume Handler works for a while, but once Flume can no longer accept messages, the handler abends. The cause logged in the Oracle GoldenGate Java log file is likely an EventDeliveryException indicating that the Flume Handler was unable to send an event. Check the Flume log for the exact cause of the problem. You may be able to reconfigure the Flume channel to increase capacity, or increase the Java heap configuration if the Flume Agent is experiencing an OutOfMemoryException. This may not entirely solve the problem: if the Flume Handler can push data to the Flume source faster than messages are dispatched by the Flume sink, then any change may simply extend the period the Flume Handler can run before failing.
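As an example of the channel reconfiguration mentioned above, a Flume memory channel's capacity can be increased in the Flume Agent configuration file. The agent and channel names (a1, c1) and the values below are illustrative assumptions; tune them for your deployment.

```
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 1000
```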
The Flume Handler abends at startup if the Flume Agent configuration file is not in the classpath. The result is generally a ConfigException listing the issue as an error loading the Flume producer properties. Check the gg.handler.name.RpcClientPropertiesFile configuration property to ensure that the naming of the Flume Agent properties file is correct. Check the gg.classpath property to ensure that the classpath contains the directory containing the Flume Agent properties file. Also, check the classpath to ensure that the path to the Flume Agent properties file does not end with a wildcard * character.
The Flume Handler abends at startup if it is unable to make a connection to the Flume source. The root cause of this problem is likely reported as an IOException in the Oracle GoldenGate Java log4j file, indicating a problem connecting to Flume at a given host and port. Check the following:
That the Flume Agent process is running.
That the Flume Agent configuration file that the Flume Handler is accessing contains the correct host and port.