Create Streams

A Stream is a source of continuous and dynamic data. The data can come from a wide variety of sources, such as IoT sensors, transaction or activity log files, point-of-sale devices, ATMs, transactional databases, geospatial services, or social networks.

Creating a File Stream

To create a File stream:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Stream and select File from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the stream. This is a mandatory field.
    • Display Name: Enter a display name for the stream. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Stream Type: The selected stream type is displayed.
  4. Click Next.
  5. On the Source Details screen, enter the following details:
    • File: Upload the CSV or JSON sample file to be used. (A minimal sample payload is sketched after these steps.)

      Note:

      Use a File stream only for POCs and quick prototyping.
    • Read whole content: Select this option to read all the records in the file at once. If you uncheck this option, the engine reads one record at a time.

    • Number of events per batch: Enter the number of records to process per batch. The default value is one, but you can specify the number of records to process in each read. This option is available only when Read whole content is unchecked.

    • Loop: Select this option to process the file in a loop.

    • Data Format: Select CSV or JSON as the data format.

  6. Click Next.
  7. On the Data Format screen, set the attributes for the selected data format.
    • For JSON data format:
      • Allow Missing Column Names: Select this option to allow input records with columns that are not defined in the shape.
      • Array in Multi-lines: Select this option if the JSON array in the input spans multiple lines.
    • For CSV data format:
      • CSV Predefined Format: Select one of the predefined data formats from the drop-down list. For more information, see Predefined CSV Data Formats.
      • First record as header: Select this option to use the first record as the header row.
  8. Click Next.
  9. On the Shape screen, select one of the methods to define the shape:
    • Infer Shape: Select this option to detect the shape automatically from the input data stream.

    • Select Existing Shape: Select one of the existing shapes from the drop-down list.

    • Manual Shape: Select this option to manually infer the fields from a stream or file. You can also update the datatype of the fields.

      Note:

      • To retrieve the entire JSON payload, add a new field with path $.
      • To retrieve the content of the array, add a new field with path $[arrayField].

      In both cases, the value returned is of type Text. See the sample payload after these steps.

    • From File: Select this option to infer the shape from a JSON schema file, or a JSON or CSV data file. You can also save the auto-detected shape and use it later.
  10. Click Save.
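
As a concrete illustration of the sample file and of the shape paths in the Manual Shape note above, consider this hypothetical one-record JSON file (the field names sensorId and readings are invented for this sketch):

  {"sensorId": "s-01", "readings": [21.5, 22.1]}

A field with path $ returns the entire payload as Text. A field with path $[readings], that is, the $[arrayField] pattern with arrayField replaced by the actual array field name, returns [21.5, 22.1] as Text.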

Creating a GoldenGate Stream

To create a GoldenGate stream:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Stream and select GoldenGate from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the stream. This is a mandatory field.
    • Display Name: Enter a display name for the stream. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Stream Type: The selected stream type is displayed.
  4. Click Next.
  5. On the Source Type page, enter the following details:
    • Connection: Select a GG Change Data connection.

    • Table name: Enter a valid table name that includes the period (.) delimiter between the catalog, schema, and table names. For example, test.dbo.table1

    • Generate Full Records: Select this option to stream the full data record (the values of all fields), regardless of whether the database transaction changed a single column, a subset of the columns, or all the columns of a row. A simplified example follows these steps.
      • Database Connection: Select a GoldenGate sourced database connection.
      • Enable Cache: Select this option to enable caching for GoldenGate Full Records, to enhance performance.
  6. Click Next.
  7. On the Shape screen, select one of the methods to define the shape:
    • Infer Shape: Select this option to detect the shape automatically from the input data stream.

    • Select Existing Shape: Select one of the existing shapes from the drop-down list.

    • Manual Shape: Select this option to manually infer the fields from a stream or file. You can also update the datatype of the fields.

      Note:

      • To retrieve the entire JSON payload, add a new field with path $.
      • To retrieve the content of the array, add a new field with path $[arrayField].

      In both cases, the value returned is of type Text.

    • From Stream: Select this option to detect the shape based on the table shape selected in the previous screen.
    • From File: Select this option to infer the shape from a JSON file. You can also save the auto-detected shape and use it later.
  8. Click Save.
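
To see what the Generate Full Records option does, suppose an update to test.dbo.table1 changes only the STATUS column of a row. The records below are a simplified sketch; the exact structure and field names depend on your GoldenGate configuration:

  Change record (only the key and the modified column):
  {"table":"TEST.DBO.TABLE1","op_type":"U","after":{"ID":101,"STATUS":"SHIPPED"}}

  Full record (values of all fields):
  {"table":"TEST.DBO.TABLE1","op_type":"U","after":{"ID":101,"NAME":"Ann","STATUS":"SHIPPED"}}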

Note:

The difference between a Kafka stream and a GoldenGate stream is that pipeline constructs, such as the Query Group Table, understand the GoldenGate syntax and associate it with the relevant GoldenGate fields.

Creating a JMS Stream

Prerequisite: A JMS connection.

To create a JMS stream:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Stream and select JMS from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the stream. This is a mandatory field.
    • Display Name: Enter a display name for the stream. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Stream Type: The selected stream type is displayed.
  4. Click Next.
  5. On the Source Type page, enter the following details:
    • Connection: Select an existing JNDI connection for the stream.

    • Connection Factory: Enter a value for the connection factory. A ConnectionFactory encapsulates connection configuration information, and enables JMS applications to create a Connection. The default value is weblogic.jms.ConnectionFactory.

      Note:

      GGSA can read messages from Oracle Advanced Queuing. This option is available through the general JMS connector, oracle.jms.AQjmsInitialContextFactory.
    • Jndi name: Enter the JNDI name of the destination (topic, distributed topic, queue, or distributed queue) from which messages are read.

    • Client ID: Enter the unique client ID to use for a durable subscriber. If you do not provide this value, the subscription ID is used as the client ID to create a durable subscriber.

    • Message Selector: Set the message selector to filter messages. Message selectors assign the work of filtering messages to the JMS provider rather than to the application.

      If your messaging application needs to filter the messages it receives, you can use a JMS API message selector. A message selector is a String that contains an expression. The syntax of the expression is based on a subset of the SQL92 conditional expression syntax. The message selector in the following example selects any message that has a NewsType property that is set to the value Sports or Opinion:

      NewsType = 'Sports' OR NewsType = 'Opinion'

      The createConsumer and createDurableSubscriber methods allow you to specify a message selector as an argument when you create a message consumer, as shown in the Java sketch after these steps.

    • Subscription ID: Enter the unique subscription ID for a durable subscriber. This value is required for durable subscriptions.

      Note:

      When you specify both a client ID and a subscription ID, only one running pipeline can consume that stream. If you need multiple subscribers/pipelines, remove the client ID or subscription ID from the stream, or create separate streams (with different client IDs and subscription IDs) for the different pipelines.
    • Data Format: Select the data format from the drop-down list. The supported formats are: CSV, JSON, AVRO, MapMessage.

      A MapMessage object is used to send a set of name-value pairs. The names are String objects, and the values are primitive data types in the Java programming language. Each name must be non-null and not an empty string. The entries can be accessed sequentially or randomly by name. The order of the entries is undefined. (A short example of reading a MapMessage appears after these steps.)

  6. Click Next.
  7. On the Data Format screen, enter the shape details for the stream, based on the data format you have selected.
    • For JSON:
      • Allow Missing Column Names: Select this option to allow input records with columns that are not defined in the shape.
    • For CSV:
      • CSV Predefined Format: Select one of the predefined data formats from the drop-down list. For more information, see Predefined CSV Data Formats.
      • First record as header: Select this option to use the first record as the header row.
    • For AVRO:
      • Schema Namespace: Enter the schema name combined with the namespace to uniquely identify the schema within the store. For example, a schema named Customer in the namespace com.example has the full name com.example.Customer.
      • Schema (optional): Upload a schema file to infer shape from.
    • If you selected MapMessage as the data format, there are no specific attributes to be set on this screen. The Data Format screen is skipped, and you are redirected to the Shape screen.
  8. Click Next.
  9. On the Shape screen, select one of the methods to define the shape:
    • Infer Shape: Select this option to detect the shape automatically from the input data stream.

    • Select Existing Shape: Select one of the existing shapes from the drop-down list.

    • Manual Shape: Select this option to manually infer the fields from a stream or file. You can also update the datatype of the fields.

      Note:

      • To retrieve the entire JSON payload, add a new field with path $.
      • To retrieve the content of the array, add a new field with path $[arrayField].

      In both cases, the value returned is of type Text.

    • From File: Select this option to infer the shape from a JSON schema file, or a JSON or CSV data file. You can also save the auto-detected shape and use it later.
  10. Click Save.
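
The message selector and MapMessage concepts above can be illustrated with plain JMS API code. The following is a minimal sketch, not GGSA internals; the destination JNDI name jms/newsTopic, the subscription ID, and the map entry name headline are assumptions for illustration:

  import javax.jms.*;
  import javax.naming.InitialContext;

  public class SelectorExample {
      public static void main(String[] args) throws Exception {
          // Assumes JNDI is configured (for example, through a jndi.properties file).
          InitialContext ctx = new InitialContext();
          ConnectionFactory cf = (ConnectionFactory) ctx.lookup("weblogic.jms.ConnectionFactory");
          Topic topic = (Topic) ctx.lookup("jms/newsTopic");

          Connection conn = cf.createConnection();
          Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

          // The selector makes the JMS provider, not the application, filter messages.
          String selector = "NewsType = 'Sports' OR NewsType = 'Opinion'";
          MessageConsumer consumer = session.createConsumer(topic, selector);
          // For a durable subscriber, you would instead call:
          // session.createDurableSubscriber(topic, "mySubscriptionId", selector, false);

          conn.start();
          Message msg = consumer.receive(5000); // wait up to five seconds
          if (msg instanceof MapMessage) {
              // A MapMessage holds String names mapped to primitive values;
              // entries are read by name, and their order is undefined.
              MapMessage map = (MapMessage) msg;
              System.out.println(map.getString("headline"));
          }
          conn.close();
      }
  }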

JMS Server Clean-Up

GGSA creates a durable subscription with the JMS provider when you create a JMS stream and select the durable subscription option. When you unpublish or kill a pipeline that is using this stream, the durable subscription remains on the JMS server. If you do not intend to publish the pipeline again, delete the durable subscription from the JMS server and clean up its resources, as sketched below.
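
The following is a minimal sketch of removing a durable subscription with the standard JMS unsubscribe call; the client ID and subscription ID are placeholders that must match the values used by the stream, and the subscription must not have an active consumer when unsubscribe is called. Alternatively, delete the subscription from your JMS provider's administration console.

  import javax.jms.*;
  import javax.naming.InitialContext;

  public class RemoveDurableSubscription {
      public static void main(String[] args) throws Exception {
          InitialContext ctx = new InitialContext();
          ConnectionFactory cf = (ConnectionFactory) ctx.lookup("weblogic.jms.ConnectionFactory");

          Connection conn = cf.createConnection();
          // Must match the client ID the durable subscription was created with.
          conn.setClientID("myClientId");
          Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
          // Deletes the durable subscription and releases its server-side resources.
          session.unsubscribe("mySubscriptionId");
          conn.close();
      }
  }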

Creating a Kafka Stream

Prerequisite: A Kafka connection.

To create a Kafka stream:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Stream and select Kafka from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the stream. This is a mandatory field.
    • Display Name: Enter a display name for the stream. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Stream Type: The selected stream type is displayed.
  4. Click Next.
  5. On the Source Details screen, enter the following details:
    • Connection: Select a Kafka connection for the stream.

    • Topic name: Enter the name of the Kafka topic that holds the stream data.

    • Data Format: Select CSV, JSON, or AVRO as the data format for the stream. You set the format-specific attributes on the next screen.
  6. Click Next.
  7. On the Data Format screen, enter the shape details for the stream, based on the data format you have selected.
    • For JSON:
      • Allow Missing Column Names: Select this option to allow input records with columns that are not defined in the shape.
    • For CSV:
      • CSV Predefined Format: Select one of the predefined data formats from the drop-down list. For more information, see Predefined CSV Data Formats.
      • First record as header: Select this option to use the first record as the header row.
    • For AVRO:
      • Schema Namespace: Enter the schema name combined with the namespace to uniquely identify the schema within the store.
      • Schema (optional): Upload a schema file to infer shape from.
  8. Click Next.
  9. On the Shape screen, select one of the methods to define the shape:
    • Infer Shape: Select this option to detect the shape automatically from the input data stream.

    • Select Existing Shape: Select one of the existing shapes from the drop-down list.

    • Manual Shape: Select this option to manually infer the fields from a stream or file. You can also update the datatype of the fields.

      Note:

      • To retrieve the entire JSON payload, add a new field with path $.
      • To retrieve the content of the array, add a new field with path $[arrayField].

      In both cases, the value returned is of type Text.

    • From Stream: Select this option to detect the shape based on the earliest or the latest offset of the Kafka topic. The default option is earliest. Use latest to infer the shape from the latest records in the Kafka topic. (To publish test records to the topic, see the producer sketch after these steps.)

      This option is currently available only for the JSON data format.

    • From File: Select this option to infer the shape from a JSON schema file, or a JSON or CSV data file. You can also save the auto-detected shape and use it later.

      This option is enabled if you have selected CSV as the data format.

    • From Schema: Select this option to infer the shape based on the schema you uploaded on the Data Format screen. This option is enabled if you have selected AVRO as the data format.
  10. Click Save.
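
If you need test records in the topic, for example so that Infer Shape or From Stream has data to work with, the following minimal Kafka producer is one way to publish them. This is a sketch; the broker address localhost:9092, the topic name myTopic, and the JSON fields are assumptions for illustration:

  import java.util.Properties;
  import org.apache.kafka.clients.producer.KafkaProducer;
  import org.apache.kafka.clients.producer.ProducerRecord;

  public class SampleProducer {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
          props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
          props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

          // Send one JSON record; the fields are illustrative only.
          try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
              String json = "{\"sensorId\":\"s-01\",\"temperature\":21.5}";
              producer.send(new ProducerRecord<>("myTopic", json));
          }
      }
  }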