Manage Pipelines

Using the Pipeline Editor

The canvas on which you edit a pipeline and add stages to it is called the Pipeline Editor.

In the Pipeline Editor, you can:
  • Adjust the pipeline pane, editor pane, and the live output table pane using the resizing arrows.
  • See the relationships and dependencies between the stages of the pipeline.
  • Add any type of stage after any existing stage in the pipeline.
    To add a stage:
    1. Right-click the stage after which you want to add the new stage.
    2. Click Add a Stage, and select the stage type to add.
    3. Provide the details for the new stage.
    4. Click Save.
  • Expand or collapse a pipeline.
  • Switch the layout of the pipeline to vertical or horizontal.
  • Zoom to fit a pipeline.

Publishing a Pipeline

You must publish a pipeline to make it available to all users of Oracle Stream Analytics and to send data to targets.

A published pipeline continues to run on your Spark cluster after you exit the Pipeline Editor, unlike draft pipelines, which are undeployed to release resources.

To publish a pipeline:

  1. Open a draft pipeline in the Pipeline Editor.
  2. Click Publish.
    The Pipeline Settings dialog box opens.
  3. Update any required settings.

    Note:

Allot more memory to executors in scenarios where you have large windows.
  4. Click Publish to publish the pipeline.
    A confirmation message appears when the pipeline is published.
You can also publish a pipeline from the Catalog using the Publish option in the Actions menu.
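Because a published pipeline runs as a Spark application, the executor memory you allot in the Pipeline Settings dialog corresponds to Spark's standard executor-memory control. The fragment below is an illustration only: the property name is standard Spark, and the value 4g is an arbitrary example, not a recommendation.

```shell
# Standard Spark setting for executor memory (illustrative value only):
spark.executor.memory=4g
# Equivalent spark-submit flag:
#   spark-submit --executor-memory 4g ...
```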

Unpublishing a Pipeline

Unpublishing a pipeline from the Catalog page

  1. Go to the Catalog page and hover over the pipeline that you want to unpublish.
  2. Click the Unpublish icon that appears on the right.
  3. On the Warning screen, click OK.

Unpublishing a pipeline from the Pipeline Editor

  1. Click the Unpublish button at the top-right corner of the Pipeline Editor.
  2. On the Warning screen, click OK.

Exporting and Importing a Pipeline and Its Dependent Artifacts

The export and import features let you migrate a pipeline and its contents between Oracle Stream Analytics systems (for example, from development to production). You can also choose to migrate only selected artifacts. You can import a pipeline developed with the latest version of Oracle Stream Analytics. On re-import, if the pipeline is not published, the existing metadata is overwritten with the newly imported metadata. You can delete imported artifacts by right-clicking them and selecting Delete.

To export a pipeline:

  1. On the Catalog page, hover over or select the pipeline that you want to export to another GGSA instance.
  2. Click the Export option that appears on the right.
    The selected pipeline and its dependent artifacts are exported as a zip file of JSON definitions to your computer's default Downloads folder.

To import a pipeline:

  1. Go to the GGSA instance to which you want to import the exported metadata.
  2. On the Catalog page, click Import.
  3. In the Import dialog box, click Select to locate and select the exported zip file on your computer.
  4. Click Import.

The imported pipeline and its dependent artifacts are available on the Catalog page.

Note:

  • Each pipeline should have a unique name. If you are importing an updated version of a pipeline, you can retain the same name. If you are importing a new pipeline and if a pipeline with the same name already exists in the catalog, change the name of the pipeline that you are importing.
  • If you have already exported a pipeline with the same name, update the pipeline name as follows:
    1. Create a directory, for example exportUpdate.
    2. Copy the exported zip, say exportNameUpdateExample.zip, to the exportUpdate directory.
    3. Unzip exportNameUpdateExample.zip.
    4. Open the JSON file in edit mode.
    5. Search for the pipeline/artifact name in the JSON file. For example, if Nano pipeline was the name given to the pipeline, update it to Nano pipeline updated.
    6. Update the JSON file in exportNameUpdateExample.zip.
    7. Import this zip.
    The pipeline is automatically assigned a name, using the display name. The draft and published pipeline topics are created as follows:
      • sx_Nanopipelineupdated_Nano_Stream_draft
      • sx_Nanopipelineupdated_Nano_Stream_public

Working with Live Output Table

The streaming data in the pipeline appears in a live output table. Select any stage in the pipeline to see its output.

Hide/Unhide Columns

In the live output table, right-click a column and click Hide to hide that column from the output. This option only hides the columns from the UI and does not remove them from the output. To unhide the hidden columns, click Columns and then click the eye icon to make the columns visible in the output.

Select/Unselect the Columns

Click the Columns link at the top of the output table to view all the columns available. Use the arrow icons to either select or unselect individual columns or all columns. Only the columns that you select appear in the output table and in the actual output when the pipeline is published.

Pause/Restart the Table

Click Pause/Resume to pause or resume the streaming data in the output table.

Perform Operations on Column Headers

Right-click on any column header to perform the following operations:

  • Hide: Hides the column from the output table. To unhide the hidden columns, click the Columns link.

  • Remove from output: Removes the column from the output table. To include it again, click the Columns link and select the columns to be included in the output table.

  • Rename: Renames the column to the specified name.

  • Function: Opens the column in the Expression Builder, where you can apply the built-in functions to perform various operations.

Add a Timestamp

Include a timestamp in the live output table by clicking the clock icon in the output table.

Reorder the Columns

Click and drag a column header to the right or left in the output table to reorder the columns.

Using the Topology Viewer

A topology is a graphical representation of the connected entities and the dependencies between artifacts.

The topology viewer helps you identify the dependencies that a selected entity has on other entities. Understanding these dependencies helps you exercise caution when deleting or undeploying an entity. Oracle Stream Analytics supports two contexts for the topology: Immediate Family and Extended Family.

To launch the Topology Viewer, click the Show Topology icon at the top-right corner of the editor. By default, the topology of the entity from which you launch the Topology Viewer is displayed. The context of this topology is Immediate Family, which shows only the immediate dependencies and connections between the entity and other entities. You can switch the context to Extended Family to display the full topology of the entity, with all dependencies and connections shown in a hierarchical manner.

Note:

The entity for which the topology is shown has a grey box surrounding it in the Topology Viewer.

Immediate Family

Immediate Family context displays the dependencies between the selected entity and its child or parent.

The following figure illustrates how a topology looks in the Immediate Family.

(See figure: topology_viewer_immediate.png)

Extended Family

Extended Family context displays the dependencies between the entities in full context: if an entity has a child entity and a parent entity, and the parent entity has other dependencies, all of those dependencies are shown.

The following figure illustrates how a topology looks in the Extended Family.

(See figure: topology_viewer_full.png)