Transforming and Analyzing Data using Patterns

The visual representation of the event stream varies from one pattern type to another based on the key fields you choose. A pattern provides you with a simple way to explore event streams, based on common business scenarios.

To access the available patterns:

  • On the Home page, click Patterns.

To view all the available patterns:

  • Click View All under the Show Me panel.

Adding a Pattern Stage

A pattern is a template of an Oracle GoldenGate Stream Analytics application, with business logic built into it. You can create pattern stages within a pipeline. Patterns are not stand-alone artifacts; they must be embedded within a pipeline.

For detailed information about the various types of patterns, see Transforming and Analyzing Data using Patterns.

To add a pattern stage:
  1. Open a pipeline in the Pipeline Editor.
  2. Right-click the stage after which you want to add a pattern stage, click Add a Stage, and then select Pattern.
  3. Choose the required pattern from the list of available patterns.
  4. Enter a Name and Description for the pattern stage.
    The selected pattern stage is added to the pipeline.
  5. Click Parameters and provide the required values for the parameters.
  6. Click Visualizations and add the required visualizations to the pattern stage.

Detecting Missing Events

Use the Detect Missing Event pattern to detect missing events. For example, if a feed has multiple sensors sending readings every 5 seconds, this pattern detects sensors that have stopped sending readings, which may indicate that they are broken or disconnected.

To use this pattern, provide suitable values for the following parameters:

  • Partition Criteria: Select a field as a partition criterion.

    For example, your stream contains events issued by a number of sensors. All sensors send the same kind of data, but each sensor's readings are individual. You would want to compare a sensor's readings to previous readings of the same sensor, and not just to the previous event in your stream, which is very likely to be from a different sensor. Select a field that uniquely identifies your sensors, such as sensor id. This field is optional: if your stream contains readings from just one sensor, you do not need to partition your data.

  • Window: Enter a time period, within which missing events are detected. If there is no event from a sensor within this specified time interval after the last event, an alert is triggered.

Outgoing Shape

The outgoing shape is the same as the incoming shape. If there are no missing heartbeats, no events are output. If there is a missing heartbeat, the previous event, which was used to calculate the heartbeat interval, is output.
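The semantics can be sketched in Python. This is a minimal illustration of the detection logic, not the product implementation; the field names and the batch-style function are assumptions for the example.

```python
def detect_missing(events, window, now):
    """events: dicts with 'sensor_id' and 'ts' (seconds).
    Returns the last event of each sensor whose latest reading is
    older than `window` seconds at time `now` (illustrative only)."""
    last = {}
    for e in events:
        cur = last.get(e["sensor_id"])
        if cur is None or e["ts"] > cur["ts"]:
            last[e["sensor_id"]] = e
    # A sensor with no event inside the window triggers an alert
    # carrying its previous (last seen) event.
    return [e for e in last.values() if now - e["ts"] > window]

alerts = detect_missing(
    [{"sensor_id": "s1", "ts": 0}, {"sensor_id": "s2", "ts": 9},
     {"sensor_id": "s1", "ts": 5}],
    window=5, now=12)
```

Here sensor s1 last reported at t=5, more than 5 seconds before t=12, so its last event is output; s2 reported recently and produces nothing.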

Calculating Quantile Value

Use the Quantile pattern to calculate the value of the quantile function. It returns the percentile value of all data in the specified window range. For example, the 25th percentile of a dataset is a value such that 25% of the data points are less than the value returned.

To use this pattern, provide suitable values for the following parameters:

  • Partition Criteria: Select a field as the partition criterion.

  • Observable Parameter: Select a field as the parameter on which to calculate the quantile.

  • Phi-quantile: Select the percentile value to calculate the quantile of the selected event stream. Values can only be from 1 to 99.

  • Window: Select the range that determines the amount of data to consider.

  • Slide: Select the frequency for newly updated output to be pushed downstream and into the browser.

The outgoing shape is the same as the incoming shape.
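A minimal sketch of the phi-quantile over a window of values, using the nearest-rank method (the engine's exact interpolation may differ; this is illustrative only):

```python
import math

def quantile(values, phi):
    """Nearest-rank phi-th percentile, phi in 1..99."""
    ordered = sorted(values)
    # 1-based nearest rank: ceil(phi/100 * n)
    rank = max(1, math.ceil(phi / 100 * len(ordered)))
    return ordered[rank - 1]
```

For example, `quantile([1, 2, 3, 4], 25)` returns 1, since 25% of the data points lie at or below it under nearest-rank semantics.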

Identifying Correlation between Two Numeric Patterns

Use the Correlation pattern to identify the correlation between two numeric parameters. The output indicates whether the two parameters are positively correlated (value of 1), negatively correlated (value of -1), or not correlated (value of 0).

To use this pattern, provide suitable values for the following parameters:

  • Partition Criteria: Select a field that uniquely identifies the object, for example, Sensor ID.

  • Observable Parameter 1: Select first field to correlate.

  • Observable Parameter 2: Select second field to correlate.

  • Window: Select a time range within which to retain the data while identifying the correlation between parameter 1 and parameter 2. The default slide value is used when no slide value is specified. A slide value equal to the window range outputs the correlation at the end of the time window; a slide value less than the window range outputs more frequently.

  • Slide: Set the frequency at which you want to refresh the data.

The outgoing shape is the same as the incoming shape.
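The correlation computed per window can be sketched as Pearson's r between the two observable parameters. This is an illustration of the statistic, not the product code:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Two parameters that rise together give a value near 1; one rising while the other falls gives a value near -1.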

Detecting Duplicate Events

The Detect Duplicates pattern detects duplicate events in your stream according to the criteria you specify and within a specified time window. Events may be partially or fully equivalent to be considered duplicates.

For example, when you suspect that your aggregates are offset, you can check your stream for duplicate events.

To use this pattern, provide suitable values for the following parameters:

  • Duplicate Criteria: Select the fields to be compared. If all the configured fields have identical values, the incoming event will be considered a duplicate and an outgoing event will be fired.

  • Window: Select the time period within which to search for duplicates.

    For example, if you set the window to 10 seconds, a duplicate event that arrives 9 seconds after the first one will trigger an outgoing event, while a duplicate event that arrives 11 seconds after the first one will not do so.

Outgoing Shape

The outgoing shape is the same as the incoming shape with one extra field: Number_of_Duplicates. This extra field will carry the number of duplicate events that have been discovered. All the other fields will have values of the last duplicate event.
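The firing behavior can be sketched in Python. This is an illustrative simplification (it compares each event to the first occurrence only, and counts the first event in Number_of_Duplicates), not the product logic:

```python
def detect_duplicates(events, keys, window):
    """events: dicts with a 'ts' field (seconds). Fires an output event
    for each duplicate arriving within `window` of the first occurrence,
    carrying the running count of identical events (illustrative)."""
    seen = {}
    out = []
    for e in events:
        k = tuple(e[f] for f in keys)
        first = seen.get(k)
        if first is not None and e["ts"] - first["ts"] <= window:
            first["count"] += 1
            fired = dict(e)  # other fields take the last duplicate's values
            fired["Number_of_Duplicates"] = first["count"]
            out.append(fired)
        else:
            seen[k] = {"ts": e["ts"], "count": 1}
    return out

fired = detect_duplicates(
    [{"id": "a", "ts": 0}, {"id": "a", "ts": 9}, {"id": "a", "ts": 11}],
    keys=["id"], window=10)
```

With a 10-second window, the duplicate at t=9 fires an outgoing event, while the one at t=11 does not.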

Eliminating Duplicate Events

Use the Eliminate Duplicates pattern to look for duplicate events in your stream within a specified time window, and remove all but the first occurrence. A duplicate event is an event that has one or more field values identical to values of the same field(s) in another event. You can specify what fields are analyzed for duplicate values. You can configure the pattern to compare just one field or the whole event.

For example, use it when you know that your stream contains duplicates that might offset your aggregates, such as counts.

To use this pattern, provide suitable values for the following parameters:

  • Duplicate Criteria: Select the fields to be compared. If all the configured fields have identical values, the second, third, and subsequent events will be dropped.

  • Window: Select a time period, within which the duplicates should be discarded.

    For example, if you set the window to 10 seconds, a duplicate event that arrives 9 seconds after the first one will be discarded, while a duplicate event that arrives 11 seconds after the first one will be accepted and let through.

The outgoing shape is the same as the incoming shape.
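The elimination logic can be sketched as follows. This is an illustrative simplification (duplicates are measured against the first kept occurrence), not the product implementation:

```python
def eliminate_duplicates(events, keys, window):
    """Keep the first occurrence; drop duplicates arriving within
    `window` seconds of it (illustrative only)."""
    first_seen = {}
    out = []
    for e in events:
        k = tuple(e[f] for f in keys)
        t0 = first_seen.get(k)
        if t0 is None or e["ts"] - t0 > window:
            first_seen[k] = e["ts"]  # outside the window: accept and reset
            out.append(e)
    return out

kept = eliminate_duplicates(
    [{"id": "a", "ts": 0}, {"id": "a", "ts": 9}, {"id": "a", "ts": 11}],
    keys=["id"], window=10)
```

This reproduces the example above: the duplicate at 9 seconds is discarded, while the one at 11 seconds is let through.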

Detecting Event Value Changes

Use the Change Detector pattern to look for changes in the values of your event fields and report the changes once they occur within a specified range window. For example, if an event arrives with value value1 for field field1, and any of the following incoming events, within a specified range window, contains a value different from value1, an alert is triggered. You can designate more than one field to look for changes.

For example, a sensor reading is supposed to stay the same for certain periods of time, and changes in readings may indicate issues.

The default configuration of this pattern stage is to alert on change of any selected fields.

To use this pattern, provide suitable values for the following parameters:

  • Partition Criteria: Select the partition criteria.

    For example, your stream contains events issued by a number of sensors. All sensors send the same kind of data, but each sensor's readings are individual. You would want to compare a sensor's readings to previous readings of the same sensor, and not just to the previous event in your stream, which is very likely to be from a different sensor. Select a field that uniquely identifies your sensors, such as sensor Id. This field is optional: if your stream contains readings from just one sensor, you do not need to partition your data.

  • Window range: Select a time period within which the values of designated fields are compared for changes.

    For example, if you set the window range to 10 seconds, an event with changes in observed fields will trigger an alert if it arrives within 10 seconds after the initial event. The clock starts at the initial event.

  • Change Criteria: Select a list of fields to be compared. If the fields contain no changes, no alerts will be generated.

  • Alert on group changes: Select this option to control group-change alerting. If it is OFF, an alert is triggered when at least one of the selected fields changes. If it is ON, an alert is sent on every field change.

Outgoing Shape

The outgoing shape is based on the incoming shape, the difference being that all the fields except the one in the partition criteria parameter will be duplicated to carry both the initial event values and the change event values.

Example:

Your incoming event contains the following fields:

  • sensor_id

  • temperature

  • pressure

  • location

Normally, you would use sensor_id to partition your data, to look for changes in temperature. So, select sensor_id in the partition criteria parameter and temperature in the change criteria parameter. Use a range window that fits your use case. In this scenario, you will have the following outgoing shape:

  • sensor_id

  • temperature

  • orig_temperature

  • pressure

  • orig_pressure

  • location

  • orig_location

The orig_ fields carry values from the initial event. In this scenario, temperature and orig_temperature values are different, while pressure and orig_pressure, location, and orig_location may have identical values.
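The way the outgoing shape pairs each field with its orig_ counterpart can be sketched in Python. The function name and batch-style call are illustrative, not the product API:

```python
def change_event(initial, current, partition_field):
    """Build the Change Detector output: every field except the
    partition field is duplicated as orig_<field>, carrying the
    initial event's value (illustrative only)."""
    out = {}
    for field, value in current.items():
        out[field] = value
        if field != partition_field:
            out["orig_" + field] = initial[field]
    return out

evt = change_event(
    {"sensor_id": "s1", "temperature": 20, "pressure": 7, "location": "A"},
    {"sensor_id": "s1", "temperature": 25, "pressure": 7, "location": "A"},
    "sensor_id")
```

Here temperature and orig_temperature differ (25 vs 20), while pressure/orig_pressure and location/orig_location carry identical values.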

Detecting Data Field Value Changes

Use the Fluctuation pattern to detect when an event data field value changes in a specific upward or downward fashion within a specific time window. For example, use this pattern to identify whether variations in an Oil Pressure value stay within acceptable ranges.

To use this pattern, provide suitable values for the following parameters:

  • Partition Criteria: Select the fields to be used as partition criteria.

  • Tracking Value: Select a field value to track the event data and create a pattern in the live output stream.

  • Window: Select a rolling time period, the frequency at which you want to refresh the data.

  • Deviation Threshold %: Select the percentage of deviation you want to be included in the pattern. This is the interval in which the pipeline looks for a matching pattern.

The outgoing shape is the same as the incoming shape.

Monitoring Sequence of Events

Use the 'A' Followed by 'B' pattern to look for particular events following one another and to output an event when the specified sequence of events occurs.

Use it when you need to be aware of a certain succession of events happening in your flow. For example, if an order status BOOKED is followed by an order status SHIPPED (skipping status PAID), you need to raise an alert.

To use this pattern, provide suitable values for the following parameters:

  • Partition Criteria: Select the fields to be used as partition criteria. In the order example above, it may be order_id.

  • State A: field: Select an initial state field, whose value will be used in the comparison of two events. In our example, it will be order_status.

  • State A: value: Select the initial field state value. In our example, BOOKED.

  • State B: field: Select a consecutive state field, whose value will be used in the comparison of two events. In our example, it will be order_status again.

  • State B: value: Select the consecutive field state value. In our example, SHIPPED.

  • Duration: Select the time period within which to look for state changes.

Outgoing Shape

The outgoing shape is based on the incoming shape. A new abInterval field is added to carry the value of the time interval between the states in nanoseconds. Also, all but the partition criteria fields are duplicated to carry values from both A and B states. For example, if you have the following incoming shape:

  • order_id

  • order_status

  • order_revenue

You will get the following outgoing shape:

  • order_id

  • abInterval

  • order_status (this is the value of order_status in state B, in our example 'SHIPPED')

  • aState_order_status (this is the value of order_status in state A, in our example 'BOOKED')

  • order_revenue (this is the value of order_revenue in state B)

  • aState_order_revenue (this is the value of order_revenue in state A)
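The sequence matching and the merged outgoing shape can be sketched in Python. This is an illustrative simplification (one pending state-A event per partition, timestamps in seconds rather than the engine's nanoseconds), not the product code:

```python
def a_followed_by_b(events, key, field, a_val, b_val, duration):
    """Emit a merged event when a state-B event follows a state-A
    event in the same partition within `duration` (illustrative)."""
    pending = {}   # partition value -> pending state-A event
    out = []
    for e in events:
        p = e[key]
        if e[field] == a_val:
            pending[p] = e
        elif e[field] == b_val and p in pending:
            a = pending.pop(p)
            if e["ts"] - a["ts"] <= duration:
                merged = {key: p, "abInterval": e["ts"] - a["ts"]}
                for f, v in e.items():
                    if f not in (key, "ts"):
                        merged[f] = v                 # state-B value
                        merged["aState_" + f] = a[f]  # state-A value
                out.append(merged)
    return out

matches = a_followed_by_b(
    [{"order_id": "O1", "order_status": "BOOKED", "order_revenue": 50, "ts": 0},
     {"order_id": "O1", "order_status": "SHIPPED", "order_revenue": 60, "ts": 5}],
    "order_id", "order_status", "BOOKED", "SHIPPED", duration=10)
```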

Outputting Highest Value Events

Use the Top N pattern to output N events with highest values from a collection of events, arriving within a specified time window. The events here are sorted the way you specify, and not in the default order of arrival.

For example, use it to get N highest values of pressure sensor readings.

To use this pattern, provide suitable values for the following parameters:

  • Window Range: Select a rolling time period within which the events will be collected and ordered per your ordering criteria.

  • Window Slide: Select the frequency for the newly updated output to be pushed downstream and into the browser.

  • Order by Criteria: Select a list of fields to use to order the collection of events.

  • Number of Events: Select the number of top value events to output.

The outgoing shape is the same as the incoming shape.
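For a single numeric ordering field, the selection can be sketched with the standard library. This illustrates the semantics over one window's worth of events, not the streaming implementation:

```python
import heapq

def top_n(events, field, n):
    """Return the n events with the highest values of `field`,
    sorted by that field rather than by arrival order."""
    return heapq.nlargest(n, events, key=lambda e: e[field])

readings = [{"pressure": 3}, {"pressure": 9}, {"pressure": 5}]
top = top_n(readings, "pressure", 2)
```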

Outputting Lowest Value Events

Use the Bottom N pattern to output N events with lowest values from a collection of events, arriving within a specified time window. The events here are sorted the way you specify and not in the default order of arrival.

For example, use it to get N lowest values of pressure sensor readings.

To use this pattern, provide suitable values for the following parameters:

  • Window Range: Select a rolling time period within which the events will be collected and ordered per your ordering criteria.

  • Window Slide: Select the frequency for newly updated output to be pushed downstream and into the browser.

  • Order by Criteria: Select a list of fields to use to order the collection of events.

  • Number of Events: Select the number of bottom value events to output.

The outgoing shape is the same as the incoming shape.

Monitoring Invariably Increasing Numeric Values

Use the Up Trend pattern to detect an invariably increasing numeric value, over a period of time.

Use the pattern if you need to detect situations of a constant increase in one of your numeric values. For example, detect a constant increase in pressure from one of your sensors.

To use this pattern, provide suitable values for the following parameters:

  • Partition Criteria: Select fields to be used as partition criteria.

    For example, your stream contains events issued by a number of sensors. All sensors send the same kind of data, but each sensor's readings are individual. You would want to compare a sensor's readings to previous readings of the same sensor, and not just to the previous event in your stream, which is very likely to be from a different sensor. Select a field that uniquely identifies your sensors, such as sensor id. This field is optional: if your stream contains readings from just one sensor, you do not need to partition your data.

  • Duration: Select a time period within which the values of the designated field are analyzed for the upward trend.

  • Tracking value: Select a field to be analyzed for upward trend.

Outgoing Shape

The outgoing shape is based on the incoming shape with an addition of two new fields. For example, if your incoming event contains the following fields:

  • sensor_id

  • temperature

  • pressure

  • location

Normally, you would use sensor_id to partition your data and say you want to look for the upward trend in temperature. So, select sensor_id in the partition criteria parameter and temperature in the tracking value parameter. Use a duration that fits your use case. In this scenario, you will have the following outgoing shape:

  • sensor_id

  • startValue (this is the value of temperature that starts the trend)

  • endValue (this is the value of temperature that ends the trend)

  • temperature (the value of the last event)

  • pressure (the value of the last event)

  • location (the value of the last event)
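The detection and the added startValue/endValue fields can be sketched in Python. This is an illustrative check over one partition's window of events (the engine's exact trend criteria may differ):

```python
def up_trend(events, field):
    """If `field` strictly increases across the events, return the
    last event extended with startValue/endValue; else None."""
    values = [e[field] for e in events]
    if len(values) >= 2 and all(a < b for a, b in zip(values, values[1:])):
        result = dict(events[-1])          # last event's field values
        result["startValue"] = values[0]   # value that starts the trend
        result["endValue"] = values[-1]    # value that ends the trend
        return result
    return None

evt = up_trend([{"sensor_id": "s1", "temperature": 20},
                {"sensor_id": "s1", "temperature": 22},
                {"sensor_id": "s1", "temperature": 25}], "temperature")
```

The Down Trend pattern described below is symmetric, with `a > b` as the comparison.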

Monitoring Invariably Decreasing Numeric Values

Use the Down Trend pattern to detect an invariably decreasing numeric value over a period of time.

For example, detect a constant drop in pressure from one of your sensors.

To use this pattern, provide suitable values for the following parameters:

  • Partition Criteria: Select a field to be used as a partition criterion.

    For example, your stream contains events issued by a number of sensors. All sensors send the same kind of data, but each sensor's readings are individual. You would want to compare a sensor's readings to previous readings of the same sensor, and not just to the previous event in your stream, which is very likely to be from a different sensor. Select a field that uniquely identifies your sensors, such as sensor id. This field is optional: if your stream contains readings from just one sensor, you do not need to partition your data.

  • Duration: Select a time period within which the values of the designated field are analyzed for the downward trend.

  • Tracking value: Select a field to be analyzed for downward trend.

Outgoing Shape

The outgoing shape is based on the incoming shape with an addition of two new fields. Let's look at an example. Your incoming event contains the following fields:

  • sensor_id

  • temperature

  • pressure

  • location

Normally, you would use sensor_id to partition your data and say you want to look for the downward trend in temperature. So, select sensor_id in the partition criteria parameter and temperature in the tracking value parameter. Use a duration that fits your use case. In this scenario, you will have the following outgoing shape:

  • sensor_id

  • startValue (this is the value of temperature that starts the trend)

  • endValue (this is the value of temperature that ends the trend)

  • temperature (the value of the last event)

  • pressure (the value of the last event)

  • location (the value of the last event)

The pattern is visually represented based on the data you have entered/selected.

Identifying the Missing First Event in a Sequence

The 'B' Not Preceded by 'A' pattern looks for a missing event in a particular combination of events, and outputs the state B event when it is not preceded by a state A event within the specified time period.

Use it when you need to be aware of a specific event not preceded by another event in your flow. For example, if an order status BOOKED is not preceded by an order status PAID within a certain time period, you may need to raise an alert.

To use this pattern, provide suitable values for the following parameters:

  • Partition Criteria: (Optional) a field to partition your stream by. In the order example above, it may be order_id.

  • State A: Field: an initial state field, whose value will be used in the comparison of two events. In our example, it will be order_status.

  • State A: Value: the initial field state value. In our example, BOOKED.

  • State B: Field: a consecutive state field, whose value will be used in the comparison of two events. In our example, it will be order_status again.

  • State B: Value: the consecutive field state value. In our example, PAID.

  • Duration: the time period, within which to look for state changes.

Outgoing Shape

The outgoing shape is the same as the incoming shape. If a state B event arrives without a preceding state A event within the specified time window, the state B event is pushed to the output.

Identifying the Second Missing Event in a Sequence

The 'A' Not Followed by 'B' pattern will look for a missing second event in a particular combination of events and will output the first event when the expected second event does not arrive within the specified time period.

Use it when you need to be aware of a specific event not following its predecessor in your flow. For example, if an order status BOOKED is not followed by an order status PAID within a certain time period, you may need to raise an alert.

To use this pattern, provide suitable values for the following parameters:

  • Partition Criteria: (Optional) a field to partition your stream by. In the order example above, it may be order_id.

  • State A: field: an initial state field, whose value will be used in the comparison of two events. In our example, it will be order_status.

  • State A: value: the initial field state value. In our example, BOOKED.

  • State B: field: a consecutive state field, whose value will be used in the comparison of two events. In our example, it will be order_status again.

  • State B: value: the consecutive field state value. In our example, SHIPPED.

  • Duration: the time period, within which to look for state changes.

Outgoing Shape

The outgoing shape is the same as the incoming shape. If the second (state B) event does not arrive within the specified time window, the first (state A) event is pushed to the output.

Analyzing Data using Double Bottom Charts

Use the W pattern for technical analysis of financial trading markets. This pattern is also known as a double bottom chart pattern.

Use this pattern to detect when an event data field value rises and falls in “W” fashion over a specified time window. For example, use this pattern when monitoring a market data feed stock price movement to determine a buy/sell/hold evaluation.

To use this pattern, provide suitable values for the following parameters:

  • Partition Criteria: Select a field to be used as a partition criterion. For example, a ticker symbol.

  • Window: Select a time period within which the values of the designated field are analyzed for the W shape.

  • Tracking value: Select a field to be analyzed for the W shape.

Outgoing Shape

The outgoing shape is based on the incoming shape with an addition of five new fields. The new fields are:

  • firstW

  • firstValleyW

  • headW

  • secondValleyW

  • lastW

The new fields correspond to the tracking value terminal points of the W shape discovered in the feed. The original fields correspond to the last event in the W pattern.

Analyzing Data using Double Top Charts

Use the Inverse W pattern for the technical analysis of financial trading markets, and to see the financial data in a graphical form. This pattern is also known as a double top chart pattern.

To use this pattern, provide suitable values for the following parameters:

  • Partition Criteria: Select a field to be used as a partition criterion. For example, a ticker symbol.

  • Window: Select a time period within which the values of the designated field are analyzed for the inverse W shape.

  • Tracking value: Select a field to be analyzed for the inverse W shape.

Outgoing Shape

The outgoing shape is based on the incoming shape with an addition of five new fields. The new fields are:

  • firstW

  • firstPeakW

  • headInverseW

  • secondpeakW

  • lastW

The new fields correspond to the tracking value terminal points of the inverse W shape discovered in the feed. The original fields correspond to the last event in the inverse W pattern.

Correlating Current and Previous Events

Use the Current and Previous Events pattern to automatically correlate the current and previous events.

To use this pattern, provide suitable values for the following parameters:

  • Partition Criteria: Select the fields to be used as the partition criteria.

Input Schema or Payload Shape [DiskID, Usage]

Output schema/payload from this pattern for non-partitioned input is [DiskID, Usage, PREV_DiskId, PREV_Usage].

For input partitioned by DiskID, the output schema from the pattern is [DiskID, Usage, PREV_Usage].

Below is an example with values:

For non-partitioned input:

Input - [Disk1, 40gb], [Disk2, 60gb], [Disk1, 45gb]

Output - [Disk2, 60gb, Disk1, 40gb], [Disk1, 45gb, Disk2, 60gb]

For input partitioned by Disk ID:

Input - [Disk1, 40gb], [Disk2, 60gb], [Disk1, 45gb]

Output - [Disk1, 45gb, 40gb]. There is no output for Disk2 until another event for Disk2 arrives.
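The partitioned case can be sketched in Python. The function name and batch-style input are illustrative, not the product API:

```python
def with_previous(events, key, value):
    """Pair each event with the previous event of the same partition,
    emitting [key, value, PREV_<value>] (illustrative only)."""
    prev = {}
    out = []
    for e in events:
        if e[key] in prev:
            out.append({key: e[key], value: e[value],
                        "PREV_" + value: prev[e[key]]})
        prev[e[key]] = e[value]
    return out

pairs = with_previous(
    [{"DiskID": "Disk1", "Usage": "40gb"},
     {"DiskID": "Disk2", "Usage": "60gb"},
     {"DiskID": "Disk1", "Usage": "45gb"}],
    "DiskID", "Usage")
```

This reproduces the partitioned example above: one output for Disk1 pairing 45gb with the previous 40gb, and nothing yet for Disk2.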

Delaying Delivery of Events to Downstream Node

Use the Delay Event pattern to delay delivering an event to the downstream node in the pipeline, for a specified number of seconds. A practical use case is winding up a campaign event or promotion.

To use this pattern, provide suitable values for the following parameters:

  • Delay in Seconds: Select a time period for which you want to delay the processing of an event.

Outputting Contents to Downstream Node

Use the Row Window Snapshot pattern to output the entire window contents to a downstream node, on the arrival of a new event, based on the specified maximum number of events a window can hold.

For example:
  • To rebuild an ML model in real-time
  • To continually use the last X values in a time-series forecasting algorithm, to predict future values

To use this pattern, provide suitable values for the following parameters:

  • Maximum number of rows the window will hold
  • Key fields for partitioning the window
  • Time in seconds before event in the window expires

[PARTITION BY StockSymbol, ROWS 500, RANGE 1 MINUTE] will dump the entire window contents on the arrival of a new event. The window holds a maximum of 500 events. An event expires after a minute, making room for newer events.

The partitioning key creates a separate window for each value of the key: for example, a separate window for 500 Oracle quote events, 500 Microsoft quote events, and so on.

Outputting Unexpired Contents to Downstream Node

Use the Time Window Snapshot pattern to output the entire window contents to a downstream node, on the arrival of a new event, based on the time window specified for each event.

To use this pattern, provide suitable values for the following parameters:

  • Window range in seconds or duration in seconds before event expires
  • Frequency in seconds for window snapshot output

[RANGE 10 MINUTES] will dump the entire window contents to the downstream node on the arrival of a new event. Events will expire from the window after 10 minutes.

Note:

The window automatically dumps its contents either when a new event arrives or when an event expires.

Merging Two Streams having Identical Shapes

Use the Union pattern to merge two streams having identical shapes.

For example, you have two similar sensors sending data into two different streams, and you want to process the streams simultaneously, in one pipeline.

To use this pattern, provide suitable values for the following parameters:

  • Second event stream: Select the stream you want to merge with your primary stream. Make sure you select a stream with an identical shape.

The outgoing shape is the same as the incoming shape.

Joining Flows with Streams and References

Use the Left Outer join pattern to join a stream or a reference, using the left outer join semantics.

The result of this pattern always contains the data of the left table even if the join-condition does not find any matching data in the right table.

To use this pattern, provide suitable values for the following parameters:

  • Enriching Reference/Stream: Select the stream or reference you want to join to your flow.

  • Correlation Criteria: Select the fields based on which the stream/reference will be joined.

  • Window Range of the Primary Stream: Select a rolling time window to make a collection of events in your primary flow, to be joined with the enriching stream/reference.

  • Window Slide of the Primary Stream: Select the frequency for data to be pushed downstream and to the UI.

  • Window Range of the Enriching Stream: Select a rolling time window to make a collection of events in your enriching stream, to be joined with the primary flow. Disabled if a reference is used.

  • Window Slide for the Enriching Stream: Select the frequency for the data to be pushed downstream and to the UI. Disabled if a reference is used.

The outgoing shape is a sum of two incoming shapes.

Transforming Events into JSON

Use the ToJson pattern to transform events coming from a stage in the pipeline into JSON text.

Use this pattern to transform multiple events into a single JSON document, and send it to a downstream system through OSA pipeline targets. For example, you can configure a Database target after the ToJson pattern stage, to write the JSON payload (of a single event or multiple events) into a database table.

To use this pattern, provide suitable values for the following parameters:

  • Enable batching: Select this option to transform multiple events into a single JSON document containing a JSON array, and output it as JSON text. By default, all the events of a partition within the batch duration are transformed into the JSON array.
  • Batch Size: Select the number of events to be included in a batch.

    If the size is set to a value greater than 0 (say n), a maximum of n events are transformed into a single JSON document containing a JSON array, and output as JSON text.

    If the size is set to the default 0, all the events of a partition within the batch duration are transformed into a single JSON document containing a JSON array, and output as JSON text.

  • Upload Json File: You can upload a sample JSON file to be used to infer JSON path for field mapping.
  • Field Mapping:
    • Json Path: Lists all the paths in the uploaded JSON file.
    • Fields: Lists all the fields from the previous stage. You can map the JSON path with one of the fields from the drop-down list.

Transforming a Single Event from a Stage into Multiple Events

Use the Split pattern to transform a single event from a stage into multiple events, by splitting the value field. For example, you can flatten an array of JSON elements or JSON objects from the source, to process each element individually and to push it to targets.

The output of this pattern is one or more events for each single event of the previous stage. The output events are clones of the source event. The value of the selected field is split into an array, based on the selected type. Each value in the array corresponds to an output event carrying that value in the selected attribute.

To use this pattern, provide suitable values for the following parameters:

  • Split: Select one of the fields from the previous event, to split.
  • Type: Select the value type of the split field, from the drop-down list.
  • Separator: Set the text separator for the delimited text type. The default separator is a comma.
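For the delimited-text case, the split semantics can be sketched in Python. This is an illustrative simplification, not the product code:

```python
def split_event(event, field, separator=","):
    """Clone the source event once per value in the split field,
    replacing the field with each value (illustrative only)."""
    out = []
    for part in event[field].split(separator):
        clone = dict(event)
        clone[field] = part
        out.append(clone)
    return out

rows = split_event({"order_id": 1, "tags": "red,blue"}, "tags")
```

A single event with `tags = "red,blue"` becomes two output events, one with `tags = "red"` and one with `tags = "blue"`, all other fields copied unchanged.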

Merging Two Continuous Events into a Single Event

Use the Continuous Merge pattern to merge two or more continuous events into a single event, based on the key attributes.

The output of this pattern is a single event corresponding to two or more merged events of the previous stage. The values of the selected field are merged into an array, based on the selected type, and carried in the merged output field.

To use this pattern, provide suitable values for the following parameters:

  • Key Fields: Select one or more attributes from the previous stage.
  • Merged Field Name: Enter a name for the merged, output field.

Applying OML Models to get the Scoring of Events (Preview Feature)

Use the Oracle Machine Learning Service pattern to apply scoring to the ingested events using OML models.

To use this pattern, provide suitable values for the following parameters:

  • OML server url: Enter the OML service endpoint for the region hosting the Autonomous Data Warehouse with the Machine Learning model.
  • Tenant: Enter the tenant ID hosting the OML model.
  • OML Service Name: Enter the OML service name.
  • Username: Username for the OML service or the ADW database, for the OML user.
  • Password: Password for the OML service or the ADW database, for the OML user.
  • OML Model: Select an OML model that you want to apply to the current stage.
  • Input Fields: Choose the input parameter fields to map with the OML model.

Detecting Contiguous Events

Use the Segment Detector pattern to detect contiguous events (range/segment) having unchanged values for the selected attributes, over a specified period of time.

For example, to detect the range of stability over a period of time in:
  • The range of constant speed of a vehicle
  • The range of constant/stable temperature of an electric appliance
To use this pattern, provide suitable values for the following parameters:
  • Event Stream: Event stream is the stream (previous stage) over which the pattern will be applied.
  • Time Window: Time window specifies the period of time during which this pattern detects the segment.
  • Equality Criteria: Enter the attributes (criteria) to detect the segment.
  • Partition Criteria: Enter the attributes (criteria) to partition the output.
  • Alert on group changes: Select this option to be notified of changes in all observable parameters.

The Segment Detector pattern outputs the attribute values of the first and last event in the detected segment (range). Each attribute of the first event is prefixed with orig_ in the output shape, whereas the attributes of the last event keep the same names as in the previous stage.

Creating Pivot Columns

Use the Pivot pattern to pivot all or selected attributes of an incoming event, into new columns, based on the selected pivot value and key provided.

To use this pattern, provide suitable values for the following parameters:
  • Event Stream: Event stream is the stream (previous stage) over which the pattern will be applied.
  • Time Window: Time window specifies the period of time during which the pivot value for the pivot key is retained.
  • Pivot Value: Select the attributes to pivot.
  • Pivot Key: Select the values of the attributes selected in Pivot Value that will be displayed as new attributes in the output.
  • Partition Criteria: Enter the attributes (criteria) to partition the output.
  • Retain pivoted columns: When selected, it includes the pivoted column (pivot value) in the output.
  • Keep all events: When selected, it outputs all the events from the previous stage.
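The pivot semantics can be sketched in Python for a simple case: per partition, each pivot-key value becomes a new column carrying the pivoted value. Field names here are hypothetical and the logic is a simplification, not the product implementation:

```python
def pivot(events, partition, key_field, value_field, keys):
    """For each partition, spread value_field into one new column
    per pivot-key value found in key_field (illustrative only)."""
    out = {}
    for e in events:
        row = out.setdefault(e[partition], {partition: e[partition]})
        if e[key_field] in keys:
            row[e[key_field]] = e[value_field]
    return list(out.values())

rows = pivot(
    [{"device": "d1", "metric": "temp", "reading": 20},
     {"device": "d1", "metric": "pressure", "reading": 7}],
    "device", "metric", "reading", keys=["temp", "pressure"])
```

Two incoming events for device d1 collapse into one output row with `temp` and `pressure` as new columns.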