1.1.1.9 Release 19.1.0.0.2 - February 2020

This section lists the new features added in the OSA 19.1.0.0.2 bundle patch release.

1.1.1.9.1 Integration with Oracle GoldenGate

The integration with Oracle GoldenGate enables you to connect to a GoldenGate Microservices instance and generate a change data stream (GG Change Data) from an Extract process. Configuring GG Change Data creates a new Kafka topic, which you can then use to create a Stream.

1.1.1.9.2 Coherence Target

OSA now supports a Coherence target, which writes output data to a Coherence server. The output OSA events (tuples) are pushed to an external Coherence cluster using the cache name and Coherence server details that you provide.

1.1.1.9.3 Wallet Support for DB Target/Reference & Metadata

The Oracle Stream Analytics user interface allows you to create and test an ATP/ADW database connection.

You can then use this database connection to create a database reference or a database target.

1.1.1.9.4 Support MySQL DB for Reference

This feature allows you to use MySQL database tables as references in OSA. It supports a JDBC-based connection to a MySQL database.

1.1.1.9.5 Integration with OCI Streaming Service & Secure Kafka

Oracle Stream Analytics now supports ingesting from the OCI Streaming Service (OSS) using its Kafka compatibility feature.

1.1.1.9.6 Oracle Advanced Queue Source

OSA can read messages from Oracle Advanced Queue. This option is available as a general JMS connector: you can create an Advanced Queue connection and use it to create a JMS-type stream.

1.1.1.9.7 Support for secure REST Target

OSA now supports a REST target type that is SSL-enabled and requires Basic authentication.

If, in addition to being SSL-enabled, the endpoint requires authentication, you can pass the credentials as a custom header field. Currently, OSA supports only Basic authentication. Example custom header: Authorization Basic admin:admin.

1.1.1.9.8 Kerberos Authentication

OSA now supports Kerberos authentication for Hadoop when running on a YARN-based Spark cluster. With Kerberos authentication, the user is authenticated by obtaining a Kerberos ticket from the Kerberos server.

1.1.1.9.9 SSL And Authentication Enabled REST Endpoint

When you create a target of type "REST" that is SSL-enabled and requires Basic authentication: if the REST endpoint is only SSL-enabled, you can connect in one of the following ways:

  • Upload the truststore file and enter the truststore password. The truststore password is optional.
  • If you do not have the truststore file and password, you can connect to the REST endpoint by clicking Trust password.

    Note:

    The Trust password option connects using untrusted certificates, and is therefore an insecure connection.

If, in addition to being SSL-enabled, the endpoint requires authentication, you can pass the credentials as a custom header field. Currently, OSA supports only Basic authentication. The custom header would be, for example, "Authorization Basic admin:admin".
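The examples above show the credentials in clear text; note that standard HTTP Basic authentication (RFC 7617) transmits them as a Base64-encoded `user:password` token. A minimal, illustrative Python sketch of building such a header (the admin:admin credentials are just the example from the text, and this is not OSA code):

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    """Build an HTTP Basic authentication header (RFC 7617).

    The credentials are joined with ':' and Base64-encoded. Base64 is
    an encoding, not encryption, which is why Basic authentication is
    only safe over an SSL/TLS connection such as the secure REST
    target described above.
    """
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

# Example credentials from the text (illustrative only):
print(basic_auth_header("admin", "admin"))
# {'Authorization': 'Basic YWRtaW46YWRtaW4='}
```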

1.1.1.9.10 New Data Patterns

OSA supports four new data patterns:

  • Current And Previous Pattern - automatically correlates the current and previous events.
  • Delay Pattern - delays delivering each event to the downstream node in the pipeline for a specified number of seconds.
  • Row Window Snapshot - dumps the entire window contents to the downstream node on the arrival of a new event. Window capacity is limited by specifying the maximum number of events.
  • Time Window Snapshot - dumps the entire window contents to the downstream node on the arrival of a new event. Events in the window expire after the specified time range.
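The Current And Previous pattern is simple to describe in code. A minimal, illustrative Python sketch (not OSA's implementation) that pairs each event with its predecessor:

```python
from typing import Iterable, Iterator, Optional, Tuple

def current_and_previous(events: Iterable[dict]) -> Iterator[Tuple[Optional[dict], dict]]:
    """Emit (previous, current) pairs for a stream of events.

    The first event has no predecessor, so its 'previous' slot is
    None; each later event is correlated with the one before it.
    """
    previous = None
    for current in events:
        yield previous, current
        previous = current

# Example: compute the change between consecutive readings.
readings = [{"temp": 20}, {"temp": 23}, {"temp": 21}]
deltas = [cur["temp"] - prev["temp"]
          for prev, cur in current_and_previous(readings) if prev is not None]
print(deltas)  # [3, -2]
```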

1.1.1.9.11 Backpressure in OSA

Backpressure is Spark's way of ensuring stability in a streaming application: the application receives data only as fast as it can process it.

When the backpressure feature is enabled, a signal is passed from the downstream components to the upstream components within Spark, based on the current batch processing and scheduling delay statistics.

Backpressure is currently enabled by default for all published applications, regardless of whether HA is enabled. This setting applies to both the YARN and Standalone cluster managers.
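In Spark itself, this behavior corresponds to the standard Spark Streaming backpressure properties. As a sketch only, assuming these are what OSA sets on the published application, a manual Spark configuration would look like:

```
# Enable Spark Streaming backpressure (rate feedback from downstream
# processing statistics to the upstream receivers).
spark.streaming.backpressure.enabled=true

# Optional: cap the first batch's rate before any processing
# statistics exist for the backpressure estimator to use.
spark.streaming.backpressure.initialRate=1000
```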

1.1.1.9.12 IN Filter Supports DB Column Lookup

You can use the IN filter to refer to a column in a database table. When you change the database column values at runtime, the pipeline picks up the latest values from the column without being republished.
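Conceptually, the filter re-evaluates the database lookup at runtime rather than caching the column values when the pipeline is published. A minimal Python sketch of that behavior, using an in-memory SQLite table as a stand-in for the reference database (illustrative only, not OSA's implementation):

```python
import sqlite3

# In-memory reference table standing in for the database column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE allowed_codes (code TEXT)")
conn.executemany("INSERT INTO allowed_codes VALUES (?)", [("A",), ("B",)])

def in_filter(event: dict) -> bool:
    """IN filter that queries the column on every evaluation,
    so runtime changes to the table take effect immediately."""
    codes = {row[0] for row in conn.execute("SELECT code FROM allowed_codes")}
    return event["code"] in codes

print(in_filter({"code": "C"}))  # False: 'C' is not yet in the table

# Change the column values at runtime -- no "republish" needed.
conn.execute("INSERT INTO allowed_codes VALUES ('C')")
print(in_filter({"code": "C"}))  # True: the filter picks up the new value
```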