3 Work with Stream Analytics Artifacts

Stream Analytics has various artifacts like connections, references, streams, targets, and more. Artifacts are important resources that you can use to create pipelines.

About the Catalog

The Catalog page is the location where resources including pipelines, streams, references, maps, connections, and targets are listed. This is the go-to place for you to perform any tasks in Stream Analytics.

You can mark a resource as a favorite in the Catalog by clicking on the Star icon. Click the icon again to remove it from your favorites. You can also delete a resource or view its topology using the menu icon to the right of the favorite icon.

The tags applied to items in the Catalog are also listed on the screen below the left navigation pane. You can click any of these tags to display only the items with that tag in the Catalog. The tag appears at the top of the screen. Click Clear All at the top of the screen to clear the tag filter and display all the items in the Catalog.

You can include or exclude pipelines, streams, references, maps, connections, and targets using the View All link in the left panel under Show Me. When you click View All, a check mark appears beside it and all the components are displayed in the Catalog.

To display only selected items in the Catalog, deselect View All and select the individual components. Only the selected components appear in the Catalog.

Create a Connection

To create a connection:
  1. Click Catalog in the left pane.
  2. From the Create New Item menu, select Connection.
  3. Provide details for the following fields on the Type Properties page and click Next:
    • Name — name of the connection

    • Description — description of the connection

    • Tags — tags you want to use for the connection

    • Connection Type — type of connection: Database or Kafka

    Figure: create_connection_type.png

  4. Enter Connection Details on the next screen and click Save.

    When the connection type is Kafka, provide Zookeeper URLs.

    When the connection type is Database:

    • Connect using — select how you want to identify the database: SID or Service name (an example of both forms follows this procedure)

    • Service name/SID — the service name or SID of the database

    • Host name — the host name of the machine on which the database is running

    • Port — the port on which the database listens; typically 1521

    • Username — the user name with which you connect to the database

    • Password — the password you use to log in to the database

A connection with the specified details is created.
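
For a Kafka connection, the Zookeeper URLs are typically of the form host:port, where 2181 is the default Zookeeper client port. For a Database connection, the Connect using choice corresponds to the two standard Oracle JDBC thin connect-string forms. The following is a minimal sketch, assuming the connection details are used to build such a connect string; the host, port, service name, and SID values are illustrative only:

    # A minimal sketch, assuming the Database connection details map onto a
    # standard Oracle JDBC thin connect string (all values are illustrative).
    host, port = "dbhost.example.com", 1521     # Host name and Port fields
    service_name = "orcl.example.com"           # Service name/SID field (Service name form)
    sid = "orcl"                                # Service name/SID field (SID form)

    # Connect using = Service name
    jdbc_service_url = f"jdbc:oracle:thin:@//{host}:{port}/{service_name}"
    # Connect using = SID
    jdbc_sid_url = f"jdbc:oracle:thin:@{host}:{port}:{sid}"

    print(jdbc_service_url)
    print(jdbc_sid_url)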

Create a Stream

A stream is a source of events with a given content (shape).

To create a stream:

  1. Navigate to Catalog.

  2. Select Stream in the Create New Item menu.

  3. Provide details for the following fields on the Type Properties page and click Next:

    • Name — name of the stream

    • Description — description of the stream

    • Tags — tags you want to use for the stream

    • Stream Type — select a suitable stream type. The supported types are Kafka and GoldenGate

    Figure: create_stream_type.png

  4. Provide details for the following fields on the Source Details page and click Next:

    • Connection — the connection for the stream

    • Topic Name — the topic name that receives events you want to analyze

    Figure: create_stream_source.png

  5. Select one of the mechanisms to define the shape on the Shape page:

    • Infer Shape

    • Select Existing Shape

    • Manual Shape

    Infer Shape detects the shape automatically from the input data stream. You can infer the shape from Kafka or from a JSON schema or message in a file. You can also save the auto-detected shape and use it later. An example of shape inference from a JSON message follows this procedure.

    Select Existing Shape lets you choose one of the existing shapes from the drop-down list.

    Manual Shape populates the existing fields and also allows you to add or remove columns from the shape. You can also update the datatype of the fields.

    Figure: create_stream_shape.png

A stream is created with the specified details.
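
For example, when you infer the shape from a JSON message, each field in the message maps to a field in the shape with a corresponding datatype. The following is a minimal sketch of a hypothetical Kafka message; the field names and values are assumptions, not defaults:

    # A hypothetical JSON event on the Kafka topic and the shape fields
    # that Infer Shape might derive from it (field names are assumptions).
    sample_event = {
        "orderId": 1001,                          # inferred as a numeric field
        "customerName": "Alice",                  # inferred as a text field
        "orderTime": "2017-06-01T10:15:30.000Z",  # inferred as a timestamp field
        "amount": 250.75,                         # inferred as a numeric field
    }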

Create a Reference

A reference defines a read-only source of reference data to enrich a stream. A stream containing a customer name could use a reference containing customer data to add the customer’s address to the stream by doing a lookup using the customer name. A reference currently can only refer to database tables. A reference requires a database connection.

To create a reference:

  1. Navigate to Catalog.

  2. Select Reference in the Create New Item menu.

  3. Provide details for the following fields on the Type Properties page and click Next:

  4. Provide details for the following fields on the Source Details page and click Next:

    • Connection — the database connection for the reference

      Figure: create_reference_source.png

    • Enable Caching — select this option to enable caching for better performance, at the cost of higher memory usage by the Spark applications. Caching is supported only for a single equality join condition. When caching is enabled, updates to the reference table do not take effect, because the data is fetched from the cache.

  5. Provide details for the following fields on the Shape page and click Save:

If a table column has a datatype that is not supported, no datatype is auto-generated for that column. Only the following datatypes are supported:

  • numeric

  • interval day to second

  • text

  • timestamp (without timezone)

  • date time (without timezone)

A reference is created with the specified details.
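
Conceptually, a reference behaves like a keyed lookup that enriches each stream event. The following sketch illustrates the customer-address example described above; it is not the product's implementation, and the field names are assumptions:

    # Reference data (a read-only database table, represented here as a dict).
    customer_addresses = {"Alice": "12 Main St", "Bob": "34 Oak Ave"}

    # An incoming stream event that contains only the customer name.
    event = {"customerName": "Alice", "amount": 250.75}

    # Enrichment: look up the address by customer name and add it to the event.
    event["address"] = customer_addresses.get(event["customerName"])
    print(event)   # {'customerName': 'Alice', 'amount': 250.75, 'address': '12 Main St'}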

Create a Dashboard

A dashboard is a visualization tool that helps you view and analyze data related to a pipeline, using visualizations (slices) based on various metrics.

A dashboard is an analytics feature. You can create dashboards in Stream Analytics to get a quick view of the metrics.

To create a dashboard:
  1. Go to the Catalog.
  2. Select Dashboard in the Create New Item menu.

    The Create Dashboard screen appears.

    Figure: create_dashboard.png

  3. Provide suitable details for the following fields:
    • Name — enter a name for the dashboard. This is a mandatory field.
    • Description — enter a suitable description for the dashboard. This is an optional field.
    • Tags — enter or select logical tags to easily identify the dashboard in the catalog. This is an optional field.
  4. Click Next.
  5. Enter a custom stylesheet for the dashboard. This is an optional step.
  6. Click Save.
    You can see the dashboard in the Catalog.

A newly created dashboard is empty. You must edit the dashboard to add slices and other details to it.

Editing a Dashboard

To edit a dashboard:

  1. Click the required dashboard in the catalog.

    The dashboard opens in the dashboard editor.

    Figure: edit_dashboard.png

  2. Click the Add a new slice to the dashboard icon to see a list of existing slices. Go through the list, select one or more slices and add them to the dashboard.

  3. Click the Specify refresh interval icon to select the refresh frequency for the dashboard.

    This is just a client-side setting and is not persisted with Superset Version 0.17.0.

  4. Click the Apply CSS to the dashboard icon to select a CSS. You can also edit the CSS in the live editor.

  5. Click the Save icon to save the changes you have made to the dashboard.

  6. Within the added slice, click the Explore chart icon to open the chart editor of the slice.

    Figure: explore_chart.png

    You can see the metadata of the slice.

  7. Click Save as to make the following changes to the dashboard:

    1. Overwrite the current slice with a different name

    2. Add the slice to an existing dashboard

    3. Add the slice to a new dashboard

Create a Cube

A cube is a data structure that helps in quickly analyzing the data related to a business problem on multiple dimensions.

The cube feature works only when you have enabled Analytics. Verify this in System Settings.

To create a cube:

  1. Go to the Catalog.
  2. From the Create New Item menu, select Cube.
  3. On the Create Cube — Type Properties screen, provide suitable details for the following fields:
    • Name — enter a name for the cube. This is a mandatory field.
    • Description — enter a suitable description for the cube. This is an optional field.
    • Tags — enter or select logical tags for the cube. This is an optional field.
    • Source Type — select the source type from the drop-down list. Currently, Published Pipeline is the only supported type. This is a mandatory field.
  4. Click Next and provide suitable details for the following fields on the Ingestion Details screen:
    • Pipelines — select a pipeline to be used as the base for the cube. This is a mandatory field.
    • Timestamp — select a column from the pipeline to be used as the timestamp. This is a mandatory field.
    • Timestamp format — select or set a suitable format for the timestamp using the Joda time format, for example, yyyy-MM-dd'T'HH:mm:ss.SSS'Z'. This is a mandatory field. auto is the default value.
    • Metrics — select metrics for creating measures
    • Dimensions — select dimensions for group by
    • High Cardinality Dimensions — select high cardinality dimensions such as unique IDs. HyperLogLog approximation is used for these dimensions.
  5. Click Next and select the required values for the Metric on the Metric Capabilities screen.
  6. Click Next and make any changes, if required, on the Advanced Settings screen. A worked example of two of these settings follows this procedure.
    • Segment granularity — select the granularity with which you want to create segments
    • Query granularity — select the minimum granularity to be able to query results and the granularity of the data inside the segment
    • Task count — select the maximum number of reading tasks in a replica set. This means that the maximum number of reading tasks is taskCount*replicas and the total number of tasks (reading + publishing) is higher than this. The number of reading tasks is less than taskCount if taskCount > {numKafkaPartitions}.
    • Task duration — select the length of time before tasks stop reading and begin publishing their segment. The segments are only pushed to deep storage and loadable by historical nodes when the indexing task completes.
    • Maximum rows in memory — enter a number greater than or equal to 0. This number indicates the number of rows to aggregate before persisting. This number is the post-aggregation rows, so it is not equivalent to the number of input events, but the number of aggregated rows that those events result in. This is used to manage the required JVM heap size. Maximum heap memory usage for indexing scales with maxRowsInMemory*(2 + maxPendingPersists).
    • Maximum rows per segment — enter a number greater than or equal to 0. This is the number of rows to aggregate into a segment; this number is post-aggregation rows.
    • Immediate Persist Period — select the period that determines the rate at which intermediate persists occur. This allows the data cube to be ready for query earlier, before the indexing task finishes.
    • Report Parse Exception — select this option to throw exceptions encountered during parsing and halt ingestion.
    • Advanced IO Config — specify name-value pairs in CSV format. Available configurations are replicas, startDelay, period, useEarliestOffset, completionTimeout, and lateMessageRejectionPeriod.
    • Advanced Tuning Config — specify name-value pairs in CSV format. Available configurations are maxPendingPersists, handoffConditionTimeout, resetOffsetAutomatically, workerThreads, chatThreads, httpTimeout, and shutdownTimeout.
  7. Click Save to save the changes you have made.
You can see the cube you have created in the catalog.
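
The following worked example illustrates two of the Advanced Settings described above; the values are assumptions chosen only to show the arithmetic:

    # Assumed values, for illustration only.
    task_count, replicas = 2, 3
    max_reading_tasks = task_count * replicas      # up to 2 * 3 = 6 reading tasks

    max_rows_in_memory, max_pending_persists = 75000, 0
    # Maximum heap memory usage for indexing scales with:
    heap_scaling_rows = max_rows_in_memory * (2 + max_pending_persists)   # 150,000 rows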

Create a Target

A target defines a destination for output data coming from a pipeline.

To create a target:

  1. Navigate to Catalog.

  2. Select Target in the Create New Item menu.

  3. Provide details for the following fields on the Type Properties page and click Save and Next:

  4. Provide details for the following fields on the Target Details page and click Next:

    When the target type is Kafka:

    When the target type is REST:

    • URL — enter the REST service URL. This is a mandatory field.

    • Custom HTTP headers — set the custom headers for HTTP. This is an optional field.

    • Batch processing — select this option to send events in batches and not one by one. Enable this option for high throughput pipelines. This is an optional field.

    Click Test connection to check if the connection has been established successfully.

    Testing REST targets is a heuristic process that uses proxy settings. The test sends a GET request to the given URL and reports success if the server returns OK (status code 200). The returned content is expected to be of type application/json. A sketch of this check follows this procedure.

  5. Select one of the mechanisms to define the shape on the Shape page and click Save:

    • Select Existing Shape lets you choose one of the existing shapes from the drop-down list.

    • Manual Shape populates the existing fields and also allows you to add or remove columns from the shape. You can also update the datatype of the fields.

      Figure: create_target_shape.png

A target is created with the specified details.
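
The following sketch approximates what the REST target connection test does: it sends a GET request to the configured URL and treats an OK (200) response as success. It is illustrative only; the URL is an assumption, and the product's internal implementation may differ:

    # A minimal sketch of the REST target "Test connection" check (illustrative only).
    import requests

    url = "https://api.example.com/events"   # hypothetical REST service URL
    response = requests.get(url, headers={"Accept": "application/json"}, timeout=10)

    # Success corresponds to an OK (200) status returned by the server.
    print("connection OK" if response.status_code == 200
          else f"connection failed: {response.status_code}")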

Creating Target from Pipeline Editor

Alternatively, you can create a target from the pipeline editor. When you click Create in the target stage, the Create Target dialog box opens. Provide all the required details and complete the target creation process. When you create a target from the pipeline editor, the shape is pre-populated with the shape from the last stage.

Create a Geo Fence

Geo fences are classified into two categories: manual geo fences and database-based geo fences.

Create a Manual Geo Fence

To create a manual geo fence:

  1. Navigate to the Catalog page.

  2. Click Create New Item and select Geo Fence from the drop-down list.

    The Create Geo Fence dialog opens.

  3. Enter a suitable name for the Geo Fence.

  4. Select Manually Created Geo Fence as the Type.

  5. Click Save.

    The Geo Fence Editor opens. In this editor you can create the geo fence according to your requirement.

  6. Within the Geo Fence Editor, Zoom In or Zoom Out to navigate to the required area using the zoom icons in the toolbar located on the top-left side of the screen.

    You can also use the Marquee Zoom tool to move across locations on the map.

  7. Click the Polygon Tool and mark the area around a region to create a geo fence.

    Figure: create_geo_fence.png

  8. Enter a name and description, and click Save to save your changes.

Update a Manual Geo Fence

To update a manual geo fence:

  1. Navigate to the Catalog page.

  2. Click the name of the geo fence you want to update.

    The Geo Fence Editor opens. You can edit/update the geo fence here.

Search Within a Manual Geo Fence

You can search the geo fence based on the country and a region or address. The search field allows you to search within the available list of countries. When you click the search results tile in the left center of the geo fence and select any result, you are automatically zoomed in to that specific area.

Delete a Manual Geo Fence

To delete a manual geo fence:

  1. Navigate to the Catalog page.

  2. Click Actions, then select Delete Item to delete the selected geo fence.

Create a Database-based Geo Fence

To create a database-based geo fence:

  1. Navigate to the Catalog page.

  2. Click Create New Item and then select Geo Fence from the drop-down list.

    The Create Geo Fence dialog opens.

  3. Enter a suitable name for the geo fence.

  4. Select Geo Fence from Database as the Type.

  5. Click Next and select Connection.

  6. Click Next.

    All tables that have a field of type SDO_GEOMETRY appear in the drop-down list.

  7. Select the required table to define the shape.

  8. Click Save.

Note:

You cannot edit/update database-based geo fences.

Delete a Database-based Geo Fence

To delete a database-based geo fence:

  1. Navigate to the Catalog page.

  2. Click Actions and then select Delete Item to delete the selected geo fence.

Create a Pipeline

A pipeline is a Spark application where you implement your business logic. It can have multiple stages such as a query, a pattern stage, a business rule, or a query group.

To create a pipeline:

  1. Navigate to Catalog.

  2. Select Pipeline in the Create New Item menu.

  3. Provide details for the following fields and click Save:

A pipeline is created with the specified details.

Configure a Pipeline

You can configure a pipeline to use various stages such as query, pattern, rule, and query group.

Add a Query Stage

You can include simple or complex queries on the data stream without any coding to obtain refined results in the output.

  1. Open a pipeline in the Pipeline Editor.
  2. Click the Add a Stage button and select Query.
  3. Enter a Name and Description for the Query Stage.
  4. Click Save.
Adding and Correlating Sources and References

You can correlate sources and references in a pipeline.

To add a correlating source or reference:
  1. Open a pipeline in the Pipeline Editor.
  2. Select the required query stage.
  3. Click the Sources tab.
  4. Click Add a Source.
  5. Select a source (stream or reference) from the available list.
  6. Click the Window Area in the source next to the clock icon and select appropriate values for Range and Evaluation Frequency.
  7. Under Correlation Conditions, select Match All or Match Any as per your requirement. Then click Add a Condition.
  8. Select the fields from the sources and the appropriate operator to correlate.
    Ensure that the fields you use on one correlation line are of compatible types. The fields that appear in the right drop-down list depend on the field you select in the left drop-down list.
  9. Repeat these steps for as many sources or references as you want to correlate.
Adding Filters

You can add filters in a pipeline to obtain more accurate streaming data.

To add a filter:
  1. Open a pipeline in the Pipeline Editor.
  2. Select the required query stage.
  3. Navigate to the Filters tab.
  4. Click Add a Filter.
  5. Select the required column and a suitable operator and value.

    You can also use calculated fields within filters.

  6. Click Add a Condition to add and apply a condition to the filter.
  7. Click Add a Group to add a group to the filter.
  8. Repeat these steps for as many filters, conditions, or groups as you want to add.
Adding Summaries

To add a summary:
  1. Open a pipeline in the Pipeline Editor.
  2. Select the required query stage and click the Summaries tab.
  3. Click Add a Summary.
  4. Select the suitable function and the required column.
  5. Repeat the above steps to add as many summaries as you want.
Adding Group Bys

To add a group by:
  1. Open a pipeline in the Pipeline Editor.
  2. Select the required query stage and click the Summaries tab.
  3. Click Add a Group By.
  4. Click Add a Field and select the column on which you want to group by.
    A group by is created on the selected column.

When you create a group by, the live output table shows the group by column alone by default. Turn ON Retain All Columns to display all columns in the output table.

You can add multiple group bys as well.

Adding Visualizations

Visualizations are graphical representations of the streaming data in a pipeline. You can add visualizations to all stages in the pipeline.

To add a visualization:
  1. Open a pipeline in the Pipeline Editor.
  2. Select the required stage and click the Visualizations tab.
  3. Click Add a Visualization.
  4. Select a suitable visualization type from the available list.
    • Bar Chart

    • Line Chart

    • Geo Spatial

    • Area Chart

    • Pie Chart

    • Scatter Chart

    • Bubble Chart

    • Stacked Bar Chart

  5. Provide all the required details to populate data in the visualization.
  6. Select Horizontal if you want the visualization to appear with a horizontal orientation in the Pipeline Editor. This is optional; change the orientation based on your use case or requirement.
  7. Select the Save as Slice check box if you want the visualization to be available as a slice for future reference. Slices can be used in dashboards.
  8. Repeat these steps to add as many visualizations as you want.
Updating Visualizations

You can perform update operations like edit and delete on the visualizations after you add them.

You can open the visualization in a new window/tab using the Maximize Visualizations icon in the visualization canvas.

Edit Visualization

To edit a visualization:

  1. On the stage that has visualizations, click the Visualizations tab.

  2. Identify the visualization that you want to edit and click the pencil icon next to the visualization name.

  3. In the Edit Visualization dialog box that appears, make the changes you want. You can also change the Y Axis and X Axis selections; when you do, the visualization changes because the basis on which the graph is plotted has changed.

Change Orientation

Based on the data that you have in the visualization or your requirement, you can change the orientation of the visualization. You can toggle between horizontal and vertical orientations by clicking the Flip Chart Layout icon in the visualization canvas.

Delete Visualization

You can delete a visualization if you no longer need it in the pipeline. In the visualization canvas, click the Delete icon to delete the visualization from the pipeline. Be careful when deleting a visualization: the deletion takes effect immediately and cannot be undone.

Working with a Live Output Table

The streaming data in the pipeline appears in a live output table.

Hide/Unhide Columns

In the live output table, right-click columns and click Hide to hide that column from the output. To unhide the hidden columns, click Columns and then click the eye icon to make the columns visible in the output.

Select/Unselect the Columns

Click the Columns link at the top of the output table to view all the columns available. Use the arrow icons to either select or unselect individual columns or all columns. Only columns you select appear in the output table.

Pause/Restart the Table

Click Pause/Resume to pause or resume the streaming data in the output table.

Perform Operations on Column Headers

Right-click on any column header to perform the following operations:

  • Hide — hides the column from the output table. Click the Columns link and unhide the hidden columns.

  • Remove from output — removes the column from the output table. Click the Columns link and select the columns to be included in the output table.

  • Rename — renames the column to the specified name.

  • Function — opens the column in the Expression Builder, where you can apply built-in functions to perform various operations on it.

Add a Timestamp

Include a timestamp in the live output table by clicking the clock icon in the output table.

Reorder the Columns

Click and drag the column headers to right or left in the output table to reorder the columns.

Using the Expression Builder

You can perform calculations on the data streaming in the pipeline using the built-in functions of the Expression Builder.

Stream Analytics supports various functions. For a list of supported functions, see Expression Builder Functions.

Adding a Constant Value Column

A constant value is a simple string or number. No calculation is performed on a constant value. Enter a constant value directly in the expression builder to add it to the live output table.

Figure: expr_constant_value.png
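
Conceptually, a constant value column adds the same literal to every event in the output, as in this sketch (the field names are assumptions):

    # Each event gets the same literal value in the new column.
    events = [{"amount": 10}, {"amount": 25}]
    for event in events:
        event["tier"] = "GOLD"    # hypothetical constant value column
    print(events)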

Using Functions

You can select a CQL Function from the list of available functions and select the input parameters. Make sure to begin the expression with =. Click Apply to apply the function to the streaming data.

Figure: list_of_functions.png

Add a Pattern Stage

Patterns are templatized stages. You supply a few parameters for the template and a stage is generated based on the template.

To add a pattern stage:
  1. Open a pipeline in the Pipeline Editor.
  2. Click Add a Stage.
  3. Select Pattern.
  4. Choose the required pattern from the list of available patterns.
  5. Enter a Name and Description for the pattern stage.
    The selected pattern stage is added to the pipeline.
  6. Click Parameters and provide the required values for the parameters.
  7. Click Visualizations and add the required visualizations to the pattern stage.

Add a Rule Stage

Using a rule stage, you can add IF-THEN logic to your pipeline. A rule is a set of conditions and actions applied to a stream.

To add a rule stage:
  1. Open a pipeline in the Pipeline Editor.
  2. Click Add a Stage.
  3. Select Rules.
  4. Enter a Name and Description for the rule stage.
  5. Click Add a Rule.
  6. Enter Rule Name and Description for the rule and click Done to save the rule.
  7. Specify a suitable condition in the IF statement and the corresponding THEN statement, and click Add Action to add actions within the business rule.
The rules are applied to the incoming events one by one and actions are triggered if the conditions are met.
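
Conceptually, a rule expresses IF-THEN logic of the following kind. This sketch is not the product's rule syntax; the field names and threshold are assumptions:

    # Rule: IF temperature > 100 THEN set alert to "HIGH".
    def apply_rule(event):
        if event["temperature"] > 100:    # condition (IF)
            event["alert"] = "HIGH"       # action (THEN)
        return event

    print(apply_rule({"temperature": 120}))   # {'temperature': 120, 'alert': 'HIGH'}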

Add a Query Group Stage

A query group stage allows you to use more than one query group to process your data - a stream or a table in memory. A query group is a combination of summaries (aggregation functions), GROUP BYs, filters and a range window. Different query groups process your input in parallel and the results are combined in the query group stage output. You can also define input filters that process the incoming stream before the query group logic is applied, and result filters that are applied on the combined output of all query groups together.

A query group stage of the stream type applies processing logic to a stream. It is in essence similar to several parallel query stages grouped together for the sake of simplicity.

A query group stage of the table type can be added to a stream that contains transactional semantics, such as a change data capture stream produced, to give just one example, by the Oracle GoldenGate Big Data plugin. A stage of this type recreates the original database table in memory using the transactional semantics contained in the stream. You can then apply query groups to this in-memory table to run real-time analytics on your transactional data without affecting the performance of your database.

Add a Query Group: Stream

You can apply aggregate functions with different GROUP BY and window ranges to your streaming data.

To add a query group stage of type stream:
  1. Open a pipeline in the Pipeline Editor.
  2. Click the Add Stage button, select Query Group and then Stream.

    You can add a query group stage only at the end of the pipeline.

  3. Enter a name and a description for the query group stage of the type stream and click Save.

    The query group stage of the type stream appears in the pipeline.

  4. On the Input Filters tab, click Add a Filter. See Adding Filters to understand the steps for creating filters.

    These filters process data before it enters the query group stage. Hence, you can only see fields of the original incoming shape.

  5. On the Groups tab, click Add a Group. A group can consist of one or more summaries, filters, and GROUP BYs.
  6. Repeat the previous step to add as many groups as you want.
  7. On the Result Filters tab, click Add a Filter to filter the results.

    These filters process data before it exits the query group stage. Hence, you can see the combined set of fields that is produced in the outgoing shape.

  8. On the Visualizations tab, click Add a Visualization and add the required type of visualization. See Adding Visualizations for the procedure.
Add a Query Group: Table

You can apply aggregate functions with different GROUP BYs and window ranges to database table data recreated in memory.

To add a query group stage of the type table:
  1. Open a pipeline in the Pipeline Editor.
  2. Click the Add Stage button, select Query Group and then Table.
  3. Enter a name and a description for the Query Group Table and click Next.
  4. On the Transactions Settings screen, select a column in the Transaction Field drop-down list.

    The transaction column is a column from the output of the previous stage that carries the transaction semantics (insert/update/delete). Make sure that you use the values that correspond to your change data capture dataset. The default values work for an Oracle GoldenGate change data capture dataset. A sketch of such a dataset follows this procedure.

  5. On the Field Mappings screen, select the columns that carry the before and after transaction values from the original database table. For example, in case of Oracle GoldenGate, the before and after values have before_ and after_ as prefixes, respectively. Specify a column as primary key in the table.
  6. Click Save to create a query group stage of the type table.
    You can see the table configuration that you have specified while creating the table stage in the Table Configuration tab.
  7. On the Input Filters tab, click Add a Filter. See Adding Filters to understand the procedure.
  8. On the Groups tab, click Add a Group. A group can consist of one or more summaries, filters, and GROUP BYs.
  9. Repeat the previous step to add as many groups as you want.
  10. On the Result Filters tab, click Add a Filter to filter the results.
  11. On the Visualizations tab, click Add a Visualization and add the required type of visualization. See Adding Visualizations for the procedure.
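
As an illustration of the transaction and field mappings described above, the following sketch shows a hypothetical GoldenGate-style change record flattened into stream columns. The column names are assumptions that follow the before_/after_ prefix convention; they are not defaults of the product:

    # One hypothetical change data capture event as it might arrive on the stream.
    change_event = {
        "op_type": "U",               # transaction field (insert/update/delete semantics)
        "after_ORDER_ID": 1001,       # primary key column (after value)
        "before_STATUS": "PENDING",   # column value before the update
        "after_STATUS": "SHIPPED",    # column value after the update
    }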

Configure a Target

A target defines a destination for output data coming from a pipeline.

To configure a target:
  1. Open a pipeline in the Pipeline Editor.
  2. Click Target in the left tree.
  3. Select a target for the pipeline from the drop-down list.
  4. Map each Target Property to the corresponding Output Stream Property.

You can also directly create the target from within the pipeline editor. See Create a Target for the procedure. You can also edit an existing target.

Figure: create_edit_target.png

The pipeline is configured with the specified target.

Publish a Pipeline

You must publish a pipeline to make the pipeline available for all users of Stream Analytics and send data to targets.

A published pipeline will continue to run on your Spark cluster after you exit the Pipeline Editor, unlike the draft pipelines, which are undeployed to release resources.

To publish a pipeline:

  1. Open a draft pipeline in the Pipeline Editor.
  2. Click Publish.
    The Pipeline Settings dialog box opens.
  3. Update any required settings.
  4. Click Publish to publish the pipeline.
    A confirmation message appears when the pipeline is published.

Use the Topology Viewer

A topology is a graphical representation of the connected entities and the dependencies between the artifacts.

The topology viewer helps you in identifying the dependencies that a selected entity has on other entities. Understanding the dependencies helps you in being cautious while deleting or undeploying an entity. Stream Analytics supports two contexts for the topology — Immediate Family and Extended Family.

You can launch the Topology viewer in any of the following ways:

Click the Show Topology icon at the top-right corner of the editor to open the topology viewer. By default, the topology of the entity from which you launch the Topology Viewer is displayed. The context of this topology is Immediate Family, which indicates that only the immediate dependencies and connections between the entity and other entities are shown. You can switch the context of the topology to display the full topology of the entity from which you have launched the Topology Viewer. The topology in an Extended Family context displays all the dependencies and connections in the topology in a hierarchical manner.

Note:

The entity for which the topology is shown has a grey box surrounding it in the Topology Viewer.

Immediate Family

Immediate Family context displays the dependencies between the selected entity and its child or parent.

The following figure illustrates how a topology looks in the Immediate Family.

Figure: topology_viewer_immediate.png

Extended Family

Extended Family context displays the dependencies between the entities in a full context; that is, if an entity has a child entity and a parent entity, and the parent entity has other dependencies, all of those dependencies are shown.

The following figure illustrates how a topology looks in the Extended Family.

Figure: topology_viewer_full.png