Table of Contents
- Title and Copyright Information
- 1 Overview
- 2 Install
- Planning Your Installation
- Installing GoldenGate Stream Analytics
- Configuring the Metadata Store
- Initializing Metadata Store
- Jetty Properties File
- Adjusting Jetty Threadpool
- Integrating Stream Analytics with Oracle GoldenGate
- Maven Setting for GoldenGate Big Data Handlers
- GoldenGate Stream Analytics Hardware Requirements for Enterprise Deployment
- Retaining https and Disabling http
- Setting up Runtime for GoldenGate Stream Analytics Server
- Validating Data Flow to GoldenGate Stream Analytics
- Terminating GoldenGate Stream Analytics
- Upgrading GoldenGate Stream Analytics
- 3 Configure
- 4 Manage
- Connections
- Create Connections
- Creating a Connection to ADW or ATP
- Creating a Connection to AWS S3
- Creating a Connection to Coherence
- Creating a Connection to Druid
- Creating a Connection to Elasticsearch
- Creating a Connection to GoldenGate
- Creating a Connection to HBase
- Creating a Connection to HDFS
- Creating a Connection to Hive
- Creating a Connection to Ignite Cache
- Creating a Connection to JMS
- Creating a Connection to Kafka
- Creating a Connection to Microsoft Azure Data Lake-Gen2
- Creating a Connection to MongoDB
- Creating a Connection to MySQL Database
- Creating a Connection to OCI Object Store
- Creating a Connection to ONS
- Creating a Connection to Oracle AQ
- Creating a Connection to Oracle Database
- Creating a Connection to OSS
- Manage Connections
- Streams
- References
- Create References
- Manage References
- Coherence Reference
- Configuring Extend Proxy on the Coherence Server
- Limitations of Coherence as Reference
- Loading Number Type Data on Coherence Cache
- Data Mapping in Coherence Reference Map Type
- Data Mapping in Coherence Reference POJO Type
- Datatypes Supported in Correlation Conditions
- Sample POJO Cache Loading in Coherence
- Sample POJO Class
- Targets
- Create Targets
- Creating an AWS S3 Target
- Creating an Azure DataLake Gen-2 Target
- Creating a Coherence Target
- Creating a Database Target
- Creating an Elasticsearch Target
- Creating an HBase Target
- Creating HDFS Target
- Creating a Hive Target
- Creating an Ignite Cache Target
- Creating a JMS Target
- Creating a Kafka Target
- Creating a MongoDB Target
- Creating a Network File System (NFS) Target
- Creating a Notification Target
- Creating an OCI Object Store Target
- Creating an OSS Target
- Creating a REST Target
- Manage Targets
- Pipelines
- GoldenGate Change Stream
- Embedded Ignite Cache
- 5 Transform
- Adding Stages to a Pipeline
- Correlating Streams and References
- Applying Window Functions to a Stream
- Applying a Time Window with Slide
- Applying a Time Window without Slide
- Applying a Row Window with Slide
- Applying a Row Window without Slide
- Applying a window with current year, month, day, or hour
- Applying your own Window using Field from Payload
- Applying a Row window with Partition without Range
- Applying a Row Window with Partition with Range without Slide
- Applying a Row Window with Partition with Slide and Range
- Applying Functions to Create a New Column
- Using Bessel Functions
- Using Conversion Functions
- Using Date Functions
- Using Geometry Functions
- Using Interval Functions
- Using Math Functions
- IEEEremainder(value1, value2)
- abs(value1)
- acos(value1)
- asin(value1)
- atan(value1)
- atan2
- binomial(base, power)
- bitMaskWithBitsSetFromTo(value1, value2)
- cbrt()
- ceil()
- copySign()
- cos(value1)
- cosh(value1)
- exp(value1, value2)
- expm1(value1)
- factorial(value1)
- floor(value1)
- getExponent(value1)
- getSeedAtRowColumn(value1, value2)
- hash(value1)
- hypot(value1, value2)
- leastSignificantBit(value1)
- log(value1, value2)
- log1(value1)
- log10(value1)
- log2(value1)
- logFactorial(value1)
- long()
- longFactorial(value1)
- minimum(value1, value2)
- mod(value1, value2)
- mostSignificantBit(value1)
- nextAfter(value1, value2)
- nextDown(value1, value2)
- nextUp(value1)
- pow(value1, value2)
- rint(value1)
- round(value1)
- scalb(value1, value2)
- signum(value1)
- sin(value1)
- sinh(value1)
- sqrt(value1)
- stirlingCorrection(value1)
- tan(value1)
- tanh(value1)
- toDegrees(value1)
- toRadians(value1)
- ulp(value1)
- Using Null-related Functions
- Using Statistical Functions
- beta1(value1, value2, value3)
- betacomplemented(value1, value2, value3)
- binomial2(value1, value2, value3)
- binomialcomplemented(value1, value2, value3)
- chiSquare(value1, value2)
- chiSquareComplemented(value1, value2)
- errorFunction(value1)
- errorFunctionComplemented(value1)
- gamma(value1, value2, value3)
- gammacomplemented(value1, value2, value3)
- incompleteBeta(value1, value2, value3)
- incompleteGamma(value1, value2)
- incompleteGammaComplement(value1, value2)
- logGamma(value1)
- negativeBinomial(value1, value2, value3)
- negativeBinomialComplemented(value1, value2, value3)
- normal(value1, value2, value3)
- normalInverse(value1)
- poisson(value1, value2)
- poissonComplemented(value1, value2)
- studentT(value1, value2)
- studentTInverse(value1, value2)
- Using String Functions
- coalesce(value1, ...)
- Concat(value1,...)
- indexof(value1, value2)
- initcap(value1)
- length(value1)
- like(string, pattern)
- lower(value1)
- lpad(value1, value2, value3)
- ltrim(value1, value2)
- replace(string, match, replacement)
- rpad(value1, value2, value3)
- rtrim(value1, value2)
- substr()
- substring(string, from, to)
- translate(expression, from_string, to_string)
- upper(value1)
- Adding Custom Functions and Custom Stages
- Writing CQL Queries
- 6 Analyze
- Using Geofences for Location-based Analytics
- Selecting a Tile Layer
- Managing Geofences using the Map Editor
- Importing a Geofence from a Database
- Using Spatial Patterns in Pipeline Stages
- Clearing Objects Outside a Geo Fence
- Tracking Objects using a Geo Fence
- Getting Direction of a Moving Object
- Obtaining Geographic Coordinates
- Calculating Distance between Objects in a Stream
- Calculating Distance between Objects in Two Streams
- Creating Geo Fence
- Monitoring Proximity between Objects in a Stream
- Monitoring Proximity between Objects in Two Streams
- Obtaining the Proximity of an Object from a Geo Fence
- Finding Nearest Place using the Geographical Coordinates
- Finding Nearest Place Details using the Geographical Coordinates
- Determining Average Speed
- Transforming and Analyzing Data using Patterns
- Adding a Pattern Stage
- Detecting Missing Events
- Calculating Quantile Value
- Identifying Correlation between Two Numeric Patterns
- Detecting Duplicate Events
- Eliminating Duplicate Events
- Detecting Event Value Changes
- Detecting Data Field Value Changes
- Monitoring Sequence of Events
- Outputting Highest Value Events
- Outputting Lowest Value Events
- Monitoring Invariably Increasing Numeric Values
- Monitoring Invariably Decreasing Numeric Values
- Identifying the Missing First Event in a Sequence
- Identifying the Second Missing Event in a Sequence
- Analyzing Data using Double Bottom Charts
- Analyzing Data using Double Top Charts
- Correlating Current and Previous Events
- Delaying Delivery of Events to Downstream Node
- Outputting Contents to Downstream Node
- Outputting Unexpired Contents to Downstream Node
- Merging Two Streams having Identical Shapes
- Joining Flows with Streams and References
- Transforming Events into JSON
- Transforming a Single Event from a Stage into Multiple Events
- Merging Two Continuous Events into a Single Event
- Applying OML Models to get the Scoring of Events (Preview Feature)
- Detecting Contiguous Events
- Creating Pivot Columns
- Using Machine Learning Models for Scoring and Prediction
- Integrating with Druid Timeseries Database for Realtime Interactive Analytics
- 7 Visualize
- 8 Monitor
- 9 Reference
- 10 Troubleshoot
- Pipeline Debug and Monitoring Metrics
- Common Issues and Remedies
- Pipeline
- Pipelines are not running as expected
- GGSA Pipeline getting Terminated
- Live Table Shows Listening Events with No Events in the Table
- Live Table Still Shows Starting Pipeline
- Time-out Exception in the Spark Logs when you Unpublish a Pipeline
- Piling up of Queued Batches in HA mode
- Null Record from Summary in Query Stage
- Stream
- Connection
- Target
- Geofence
- Cube
- Dashboard
- Live Output
- Pipeline Deployment Failure