8 Managing Targets

A target is an external system to which the stream processing results are output. It serves as the interface to a downstream system.

Creating an AWS S3 Target

To create an AWS S3 target:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Target and select AWS S3 from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the target. This is a mandatory field.
    • Display Name: Enter a display name for the target. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Target Type: The selected target is displayed.
  4. Click Next.
  5. On the Target Details screen, enter the following details:
    • Connection: Select an AWS connection from the drop-down list.
    • File Name: Enter the name of the file used to save data to the AWS S3 bucket.
    • AWS S3 Path: Enter a name for the folder to be created in the AWS S3 bucket. A new folder is created if there is no existing folder.
    • Bucket: Enter the name of the bucket to be created. A new bucket is created if there is no existing bucket in the region.
    • Roll Interval: Enter the roll-over interval to write a new file. The interval can be in 1000ms, 10s, 1m, or 1.5h format.

    • File Roll Max Size: Enter the roll-over file size to create a new file. The size can be in 1000, 10k, 1m, or 1g format.

    • NFS Path: Enter the local file or NFS path where the files are written first and then uploaded to the AWS S3 bucket.

    • Storage Format: Select a storage format from the drop-down list.

  6. Click Next.
  7. On the Data Format screen, enter the shape details, based on the storage format you have selected.
    • For FILE:
      • File Format: Select a file format from the drop-down list.

        • JSON Delimiter: Enter the JSON delimiter if you have selected the JSON file format. This is an optional field.
        • Avro Codec: Select a compression codec from the drop-down list. This option is enabled if you have selected the file format as AVRO or AVRO Object Container Format.
    • For PARQUET:
      • PARQUET Compression: Select a compression codec from the drop-down list.
    • For ORC:
      • ORC Compression: Select a compression codec from the drop-down list.
  8. Click Next.
  9. On the Shape screen, select one of the methods to define the shape:
    • Select Existing Shape: Select one of the existing shapes from the drop-down list.

    • Manual Shape: Select this option to manually infer the fields from a stream or file. You can also add to, or remove fields from, an existing shape. Enter the following details:
      • Shape Name: Enter a name for the shape.
      • Clear Fields: Click to delete all the fields in the shape.
      • Field Name: Add the necessary fields.
      • Field Type: Select the field data type from the drop-down list.
  10. Click Save.
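The roll interval and file size formats listed in step 5 can be interpreted with a small parser. The sketch below is illustrative only: the unit semantics (ms/s/m/h for intervals; bare bytes and k/m/g for sizes) are assumptions based on the example formats shown, not the product's exact parsing rules, and RollConfig is a hypothetical class name.

```java
// Hedged sketch: interprets roll-interval strings such as "1000ms", "10s",
// "1m", "1.5h" and size strings such as "1000", "10k", "1m", "1g".
// Note that "m" means minutes for intervals but megabytes for sizes, which is
// why the two formats are parsed by separate methods.
public class RollConfig {
    // Convert an interval string to milliseconds (assumed unit semantics).
    public static long intervalToMillis(String s) {
        s = s.trim().toLowerCase();
        if (s.endsWith("ms")) return (long) Double.parseDouble(s.substring(0, s.length() - 2));
        char unit = s.charAt(s.length() - 1);
        double value = Character.isDigit(unit) ? 0 : Double.parseDouble(s.substring(0, s.length() - 1));
        switch (unit) {
            case 's': return (long) (value * 1000);
            case 'm': return (long) (value * 60_000);
            case 'h': return (long) (value * 3_600_000);
            default:  return (long) Double.parseDouble(s); // bare number: milliseconds
        }
    }

    // Convert a size string to bytes (assumed unit semantics).
    public static long sizeToBytes(String s) {
        s = s.trim().toLowerCase();
        char unit = s.charAt(s.length() - 1);
        if (Character.isDigit(unit)) return Long.parseLong(s); // bare number: bytes
        double value = Double.parseDouble(s.substring(0, s.length() - 1));
        switch (unit) {
            case 'k': return (long) (value * 1024);
            case 'm': return (long) (value * 1024 * 1024);
            case 'g': return (long) (value * 1024L * 1024 * 1024);
            default:  throw new IllegalArgumentException("unknown unit: " + unit);
        }
    }

    public static void main(String[] args) {
        System.out.println("1.5h = " + intervalToMillis("1.5h") + " ms");
        System.out.println("10k  = " + sizeToBytes("10k") + " bytes");
    }
}
```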

Creating an Azure DataLake Gen-2 Target

To create an Azure DataLake Gen-2 target:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Target and select Azure DataLake Gen-2 from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the target. This is a mandatory field.
    • Display Name: Enter a display name for the target. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Target Type: The selected target is displayed.
  4. Click Next.
  5. On the Target Details screen, enter the following details:
    • Connection: Select an HDFS connection from the drop-down list.
    • HDFS File: Enter a file name. The file name is appended with current timestamp and the extension, based on the type of storage format.
    • HDFS Path: Enter the HDFS location. Provide full access to this location to enable users other than the folder owner, to write to this path.
    • File Roll Interval: Enter the roll-over interval to write a new file. The interval can be in 1000ms, 10s, 1m, or 1.5h format.

    • File Roll Max Size: Enter the roll-over file size to create a new file. The size can be in 1000, 10k, 1m, or 1g format.

    • NFS Path: Enter the local file or NFS path where the files are written first and then uploaded to HDFS.

    • Storage Format: Select a storage format from the drop-down list.

  6. Click Next.
  7. On the Data Format screen, enter the shape details, based on the storage format you have selected.
    • For FILE:
      • File Format: Select a file format from the drop-down list.

        • JSON Delimiter: Enter the JSON delimiter if you have selected the JSON file format.
        • Avro Codec: Select a compression codec from the drop-down list. This option is enabled if you have selected the file format as AVRO or AVRO Object Container Format.
    • For PARQUET:
      • PARQUET Compression: Select a compression codec from the drop-down list.
    • For ORC:
      • ORC Compression: Select a compression codec from the drop-down list.
  8. Click Next.
  9. On the Shape screen, select one of the methods to define the shape:
    • Select Existing Shape: Select one of the existing shapes from the drop-down list.

    • Manual Shape: Select this option to manually infer the fields from a stream or file. You can also add to, or remove fields from, an existing shape. Enter the following details:
      • Shape Name: Enter a name for the shape.
      • Clear Fields: Click to delete all the fields in the shape.
      • Field Name: Add the necessary fields.
      • Field Type: Select the field data type from the drop-down list.
  10. Click Save.

Creating a Coherence Target

To create a Coherence target:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Target and select Coherence from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the target. This is a mandatory field.
    • Display Name: Enter a display name for the target. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Target Type: The selected target is displayed.
  4. Click Next.
  5. On the Target Details screen, enter the following details:
    • Connection: Select a Coherence connection.

    • Cache Name: Enter a name for the Coherence cache.

    • Data Format: Select a data format from the drop-down list.

  6. Click Next.
  7. On the Data Format screen, enter the shape details for the stream, based on the data format you have selected.
    • For JSON:
      • Create nested json object: Select this option to create a nested JSON object for the target.

  8. Click Next.
  9. On the Shape screen, enter the following details:
    • For JSON:
      • Shape Name: Enter a name for the shape.
      • Clear Fields: Click to clear all the fields from the existing shape.
      • Key: Select key fields, based on which the data is stored. Records containing the same values for the selected key fields are stored under the same cache key.

        You can select multiple fields as key. Key selection is not mandatory.

      • Field Name: Add the necessary fields.
      • Field Path: Enter the field path.

        Note:

        • To retrieve the entire JSON payload, add a new field with path $.
        • To retrieve the content of the array, add a new field with path $[arrayField].

        In both the cases, the value returned is of type Text.

      • Field Type: Select the field data type from the drop-down list.
    • For POJO:
      • Jar Name: Select a jar name from the custom POJO jars, from the drop-down list.
      • Class Name: Select the class name from the POJO classes in the chosen jars, from the drop-down list.
  10. Click Save.
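The two special field paths described in the note above ($ for the entire payload, $[arrayField] for the content of an array, both returned as Text) can be illustrated with a minimal sketch. The naive string scan below is purely illustrative; the product's actual path evaluation is not specified here, and FieldPath and extract are hypothetical names.

```java
// Hedged sketch of the two special field paths: "$" returns the whole JSON
// payload as text, and "$[arrayField]" returns the bracketed content of a
// top-level array field as text. This naive scan assumes no nested brackets
// before the target array; it is not a general JSON parser.
public class FieldPath {
    public static String extract(String json, String path) {
        if (path.equals("$")) return json; // entire payload, returned as Text
        if (path.startsWith("$[") && path.endsWith("]")) {
            String field = path.substring(2, path.length() - 1);
            int keyAt = json.indexOf("\"" + field + "\"");
            int open = json.indexOf('[', keyAt);
            int close = json.indexOf(']', open);
            return json.substring(open, close + 1); // array content, returned as Text
        }
        throw new IllegalArgumentException("unsupported path: " + path);
    }

    public static void main(String[] args) {
        String json = "{\"orderId\":\"7\",\"items\":[\"pen\",\"ink\"]}";
        System.out.println(extract(json, "$"));
        System.out.println(extract(json, "$[items]"));
    }
}
```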

Datatypes supported in the POJO class

The following data types are supported in the POJO class:
  • java.lang.String
  • java.lang.Integer
  • java.lang.Long
  • java.lang.Float
  • java.lang.Double
  • java.lang.Boolean
  • java.math.BigDecimal
  • java.math.BigInteger

Sample POJO Class

import java.io.Serializable;
import java.util.Objects;

public class OrderPOJO implements Serializable {
    private String orderId;
    private String orderDesc;

    public void setOrderId(String orderId) {
        this.orderId = orderId;
    }
    public void setOrderDesc(String orderDesc) {
        this.orderDesc = orderDesc;
    }
    public String getOrderId() {
        return orderId;
    }
    public String getOrderDesc() {
        return orderDesc;
    }
    @Override
    public boolean equals(Object object) {
        if (this == object) return true;
        if (object == null || getClass() != object.getClass()) return false;
        OrderPOJO that = (OrderPOJO) object;
        return Objects.equals(orderId, that.orderId) &&
               Objects.equals(orderDesc, that.orderDesc);
    }
    @Override
    public int hashCode() {
        return Objects.hash(orderId, orderDesc);
    }
}

Note:

Ensure that the POJO class has a default (no-argument) constructor. The GGSA Coherence target instantiates the POJO class using the default constructor, and then accesses the setXXX, getXXX, and isXXX methods.
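Instantiation through the default constructor can be illustrated with core Java reflection: the runtime creates the POJO with its no-argument constructor and then drives the accessor methods by name. MiniOrder and roundTrip below are hypothetical names used only for this sketch.

```java
import java.lang.reflect.Method;

// Illustrative sketch of why a default (no-arg) constructor matters: the POJO
// is created reflectively, then its setXXX/getXXX methods are invoked by name.
public class ReflectionDemo {
    public static class MiniOrder {
        private String orderId;
        public MiniOrder() { }                       // default constructor: required
        public void setOrderId(String v) { this.orderId = v; }
        public String getOrderId() { return orderId; }
    }

    public static String roundTrip(String value) throws Exception {
        Class<?> cls = MiniOrder.class;
        Object pojo = cls.getDeclaredConstructor().newInstance(); // no-arg construction
        Method setter = cls.getMethod("setOrderId", String.class);
        Method getter = cls.getMethod("getOrderId");
        setter.invoke(pojo, value);                  // drive the setter reflectively
        return (String) getter.invoke(pojo);         // read it back via the getter
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("ORD-1"));
    }
}
```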

Sample Code Snippet to Declare a Method that Returns a Boolean

If a method in the POJO class returns a Boolean value, prefix the method name with is instead of get when defining the POJO class.

public class Abc {

    private boolean var1;

    public void setVar1(boolean aa) {
        this.var1 = aa;
    }
    public boolean isVar1() {
        return var1;
    }
}

Creating a Database Target

Prerequisite: A Database connection.

To create a Database target:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Target and select Database from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the target. This is a mandatory field.
    • Display Name: Enter a display name for the target. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Target Type: The selected target is displayed.
  4. Click Next.
  5. On the Target Details screen, enter the following details:
    • Connection: Select a database connection from the drop-down list.

  6. Click Next.
  7. On the Shape screen, enter the following details:
    • Table Name: Select a database table from the drop-down list.
  8. Click Save.

Creating an Elasticsearch Target

To create an Elasticsearch target:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Target and select Elastic Search from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the target. This is a mandatory field.
    • Display Name: Enter a display name for the target. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Target Type: The selected target is displayed.
  4. Click Next.
  5. On the Target Details screen, enter the following details:
    • Connection: Select an Elastic Search connection from the drop-down list.
    • Index Name: An Elasticsearch index is a collection of documents with similar characteristics. Index names must be lowercase.

      For example, if you enter index.name, the index created in Elasticsearch will be index_name.

  6. Click Next.
  7. On the Shape screen, select one of the methods to define the shape:
    • Select Existing Shape: Select one of the existing shapes from the drop-down list.

    • Manual Shape: Select this option to manually define a shape. You can also add to, or remove fields from, an existing shape. Enter the following details:
      • Shape Name: Enter a name for the shape.
      • Clear Fields: Click to delete all the fields in the shape.
      • Key: Select one or more fields as key, which will be used as the ID field.

        Example: {"_index":"json_data","_type":"_doc","_id":"2","_score":1.0,"_source":{"address":"Mumbai","serial":"2","clientName":"Joe"}}]}}

        Note:

        • Any update to the key value results in a new entry, rather than an update to the previous value.
        • If a key has a null value, Elasticsearch autogenerates the ID. In the example above, the ID is 2, because serial is selected as the key field. If a record has null in the serial field, for example {"address":"Mumbai","serial":null,"clientName":"Joe"}, the ID is autogenerated by Elasticsearch.
        • The index is json_data, as provided in the previous step; the ID is the value of the key field for each record.

      • Field Name: Add the necessary fields.
      • Field Type: Select the field data type from the drop-down list.
  8. Click Save.
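The index-name behavior above (lowercase only; index.name becomes index_name) can be sketched as a tiny normalization step. The dot-to-underscore substitution is an assumption drawn from the example in step 5, not a documented rule for every character, and IndexName is a hypothetical class name.

```java
// Hedged sketch: lowercases the entered index name and replaces dots with
// underscores, matching the documented example (index.name -> index_name).
public class IndexName {
    public static String normalize(String name) {
        return name.toLowerCase().replace('.', '_');
    }

    public static void main(String[] args) {
        System.out.println(normalize("index.name"));
    }
}
```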

Creating an HBase Target

To create an HBase target:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Target and select HBase from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name
    • Display Name: Enter a display name for the target. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Target Type: The selected target is displayed.
  4. Click Next.
  5. On the Target Details screen, enter the following details:
    • Connection: Select an HBase connection from the drop-down list.
    • Column Family: Provide a column family to group the columns in the HBase table. GGSA and HBase Handler support only a single column family.
    • Table Name: Select an already created table in HBase, or provide a table name for GGSA to create a table, with default HBase table properties.
  6. Click Next.
  7. On the Shape screen, select one of the methods to define the shape:
    • Select Existing Shape: Select one of the existing shapes from the drop-down list.

    • Manual Shape: Select this option to manually define a shape. You can also add to, or remove fields from, an existing shape. Enter the following details:
      • Shape Name: Enter a name for the shape.
      • Clear Fields: Click to delete all the fields in the shape.
      • Key: Select at least one field in the HBase table as a primary key. A primary key is mandatory.
      • Field Name: Add the necessary fields.
      • Field Type: Select the field data type from the drop-down list.
  8. Click Save.

Creating an HDFS Target

To create an HDFS target:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Target and select HDFS from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the target. This is a mandatory field.
    • Display Name: Enter a display name for the target. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Target Type: The selected target is displayed.
  4. Click Next.
  5. On the Target Details screen, enter the following details:
    • Connection: Select an HDFS connection from the drop-down list.
    • HDFS File: Enter a file name. The file name is appended with current timestamp and the extension, based on the type of storage format.
    • HDFS Path: Enter the HDFS location. Provide full access to this location to enable users other than the folder owner, to write to this path.
    • File Roll Interval: Enter the roll-over interval to write a new file. The interval can be in 1000ms, 10s, 1m, or 1.5h format.

    • File Roll Max Size: Enter the roll-over file size to create a new file. The size can be in 1000, 10k, 1m, or 1g format.

    • NFS Path: Enter the local file or NFS path where the files are written first and then uploaded to HDFS.

    • Storage Format: Select a storage format from the drop-down list.

  6. Click Next.
  7. On the Data Format screen, enter the shape details, based on the storage format you have selected.
    • For FILE:
      • File Format: Select a file format from the drop-down list.

        • JSON Delimiter: Enter the JSON delimiter if you have selected the JSON file format.
        • Avro Codec: Select a compression codec from the drop-down list. This option is enabled if you have selected the file format as AVRO or AVRO Object Container Format.
    • For PARQUET:
      • PARQUET Compression: Select a compression codec from the drop-down list.
    • For ORC:
      • ORC Compression: Select a compression codec from the drop-down list.
  8. Click Next.
  9. On the Shape screen, select one of the methods to define the shape:
    • Select Existing Shape: Select one of the existing shapes from the drop-down list.

    • Manual Shape: Select this option to manually infer the fields from a stream or file. You can also add to, or remove fields from, an existing shape. Enter the following details:
      • Shape Name: Enter a name for the shape.
      • Clear Fields: Click to delete all the fields in the shape.
      • Field Name: Add the necessary fields.
      • Field Type: Select the field data type from the drop-down list.
  10. Click Save.

Creating a Hive Target

To create a Hive target:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Target and select Hive from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the target. This is a mandatory field.
    • Display Name: Enter a display name for the target. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Target Type: The selected target is displayed.
  4. Click Next.
  5. On the Target Details screen, enter the following details:
    • Connection: Select a Hive connection from the drop-down list.
    • Table Name: Enter a table name for the external Hive table to be created. The table is created in the database specified in the JDBC URL.
    • HDFS Path: Enter the file path to write the Avro_OCF files. Data from these files are loaded into the external tables.
    • Schema File Path: Enter the HDFS path to write the Avro_OCF schema file. This schema file is used to derive the schema of the external Hive table. Ensure that the Avro_OCF data file path and the schema file path are different.
    • File Roll Interval: Enter the roll-over interval to write a new file. The interval can be in 10ms, 10s, 10m, or 1hr format.

    • File Roll Max Size: Enter the roll-over file size to create a new file. The size can be in 1000, 10k, 10m, or 1g format.

  6. Click Next.
  7. On the Shape screen, select one of the methods to define the shape:
    • Select Existing Shape: Select one of the existing shapes from the drop-down list.

    • Manual Shape: Select this option to manually infer the fields from a stream or file. You can also add to, or remove fields from, an existing shape. Enter the following details:
      • Shape Name: Enter a name for the shape.
      • Key: Select key fields, based on which the data is partitioned.
      • Clear Fields: Click to delete all the fields in the shape.
      • Field Name: Add the necessary fields.
      • Field Type: Select the field data type from the drop-down list.
  8. Click Save.

Creating an Ignite Cache Target

To create an Ignite Cache target:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Target and select Ignite Cache from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the target. This is a mandatory field.
    • Display Name: Enter a display name for the target. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Target Type: The selected target is displayed.
  4. Click Next.
  5. On the Target Details screen, enter the following details:
    • Connection: Select an Ignite connection. Select an embedded connection if the internal cache cluster is started.

    • Cache Name: Enter a unique cache name for the Ignite cluster. Uniqueness is verified when the cache name is validated.

    • Expiry Delay: Select the cache expiry period from the drop-down list.
    • Caching Scheme: Select a scheme from the drop-down list. This field is not applicable for an embedded cluster.
      • Replicated: If you select this scheme, all the data is replicated to every node in the cluster.
      • Partitioned: If you select this scheme, the entire data set is divided equally into partitions.
    • Backup: Enter the number of backup nodes. This field is not applicable for an embedded cluster.
    • Update Cache Entry: Select this option to update a particular key value with new data. This option is enabled by default.
    • Data Format: Select a data format from the drop-down list.

  6. Click Next.
  7. On the Data Format screen, enter the shape details for the stream, based on the data format you have selected.
    • For JSON:
      • Create nested json object: Select this option to create a nested JSON object for the target.

  8. Click Next.
  9. On the Shape screen, enter the following details:
    • For JSON:
      • Shape Name: Enter a name for the shape.
      • Clear Fields: Click to clear all the fields from the existing shape.
      • Key: Select a key from the input data to store the record.

        You can select multiple fields as key. Key selection is mandatory.

      • Field Name: Add the necessary fields.
      • Field Path: Enter the field path.
      • Field Type: Select the field data type from the drop-down list.
  10. Click Save.

Note:

You cannot edit an Ignite target after it is created. This restriction prevents cache data corruption, because only one GGSA target is allowed to write to a given cache on the Ignite server.

Creating a JMS Target

Prerequisite: A JMS connection.

To create a JMS target:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Target and select JMS from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the target. This is a mandatory field.
    • Display Name: Enter a display name for the target. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Target Type: The selected target is displayed.
  4. Click Next.
  5. On the Target Details screen, enter the following details:
    • Connection: Select a JNDI connection.

    • JNDI name: Enter a name for the JNDI topic.

    • Data Format: Select a data format from the drop-down list.

  6. Click Next.
  7. On the Data Format screen, enter the shape details, based on the data format you have selected.
    • For JSON:
      • Create nested json object: Select this option to create a nested JSON object for the target.

    • For CSV:
      • CSV Predefined Format: Select one of the predefined data formats from the drop-down list. For more information, see Predefined CSV Data Formats.
      • First record as header: Select this option to use the first record as the header row.
  8. Click Next.
  9. On the Shape screen, select one of the methods to define the shape:
    • Infer Shape: Select this option to detect the shape automatically from the input data stream.

    • Select Existing Shape: Select one of the existing shapes from the drop-down list.

    • Manual Shape: Select this option to manually infer the fields from a stream or file. You can also add to, or remove fields from, an existing shape. Enter the following details:
      • For JSON:
        • Shape Name: Enter a name for the shape.
        • Clear Fields: Click to remove all the fields from the shape.
        • Key: Select key fields, based on which data is partitioned. For example, records containing the same values for the selected key fields will all be stored in the same Kafka partition.

          You can select multiple fields as key. Key selection is not mandatory.

        • Field Name: Add the necessary fields.
        • Field Path: Enter the field path.

          Note:

          • To retrieve the entire JSON payload, add a new field with path $.
          • To retrieve the content of the array, add a new field with path $[arrayField].

          In both the cases, the value returned is of type Text.

        • Field Type: Select the field data type from the drop-down list.
      • For CSV, AVRO, and MapMessage:
        • Shape Name: Enter a name for the shape.
        • Clear Fields: Click to delete all the fields from the shape.
        • Field Name: Add the necessary fields.
        • Field Type: Select the field data type from the drop-down list.
  10. Click Save.

Creating a Kafka Target

Prerequisite: A Kafka connection.

To create a Kafka target:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Target and select Kafka from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the target. This is a mandatory field.
    • Display Name: Enter a display name for the target. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Target Type: The selected target is displayed.
  4. Click Next.
  5. On the Target Details screen, enter the following details:
    • Connection: Select a Kafka connection.

    • Topic name: Enter a name for the Kafka topic.

    • Data Format: Select a data format from the drop-down list.

  6. Click Next.
  7. On the Data Format screen, enter the shape details, based on the data format you have selected.
    • For JSON:
      • Create nested json object: Select this option to create a nested JSON object for the target.

    • For CSV:
      • CSV Predefined Format: Select one of the predefined data formats from the drop-down list. For more information, see Predefined CSV Data Formats.
      • First record as header: Select this option to use the first record as the header row.
  8. Click Next.
  9. On the Shape screen, select one of the methods to define the shape:
    • Infer Shape: Select this option to detect the shape automatically from the input data stream.

    • Select Existing Shape: Select one of the existing shapes from the drop-down list.

    • Manual Shape: Select this option to manually infer the fields from a stream or file. You can also add to, or remove fields from, an existing shape. Enter the following details:
      • For JSON:
        • Shape Name: Enter a name for the shape.
        • Clear Fields: Click to remove all the fields from the shape.
        • Key: Select key fields, based on which data is partitioned. For example, records containing the same values for the selected key fields will all be stored in the same Kafka partition.

          You can select multiple fields as key. Key selection is not mandatory.

        • Field Name: Add the necessary fields.
        • Field Path: Enter the field path.

          Note:

          • To retrieve the entire JSON payload, add a new field with path $.
          • To retrieve the content of the array, add a new field with path $[arrayField].

          In both the cases, the value returned is of type Text.

        • Field Type: Select the field data type from the drop-down list.
      • For CSV and AVRO:
        • Shape Name: Enter a name for the shape.
        • Clear Fields: Click to delete all the fields in the shape.
        • Field Name: Add the necessary fields.
        • Field Type: Select the field data type from the drop-down list.
  10. Click Save.
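The key-based partitioning described in step 9 (records with the same key values land in the same Kafka partition) can be sketched as follows. Kafka's default partitioner actually applies a murmur2 hash to the serialized key bytes; the hashCode-based version below is a simplified illustration of the same idea, and KeyPartitioner is a hypothetical class name.

```java
// Simplified illustration of key-based partitioning: the partition is a pure
// function of the key, so equal keys always map to the same partition.
// (Kafka's real default partitioner uses murmur2 on the serialized key.)
public class KeyPartitioner {
    public static int partitionFor(String key, int numPartitions) {
        // modulo first keeps the result in range, abs makes it non-negative
        return Math.abs(key.hashCode() % numPartitions);
    }

    public static void main(String[] args) {
        int p1 = partitionFor("customer-42", 8);
        int p2 = partitionFor("customer-42", 8);
        System.out.println("same key, same partition: " + (p1 == p2));
    }
}
```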

Creating a MongoDB Target

To create a MongoDB target:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Target and select MongoDB from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the target. This is a mandatory field.
    • Display Name: Enter a display name for the target. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Target Type: The selected target is displayed.
  4. Click Next.
  5. On the Target Details screen, enter the following details:
    • Connection: Select a MongoDB connection from the drop-down list.
    • Database: Enter the name of the database to be used for the target.
    • Collection: Enter the name of the collection to insert documents.
  6. Click Next.
  7. On the Shape screen, select one of the methods to define the shape:
    • Select Existing Shape: Select one of the existing shapes from the drop-down list.

    • Manual Shape: Select this option to manually define a shape. You can also add to, or remove fields from, an existing shape. Enter the following details:
      • Shape Name: Enter a name for the shape.
      • Clear Fields: Click to delete all the fields in the shape.
      • Key: Select none, one, or more fields as the key. The key is used as the ID field.
      • Field Name: Add the necessary fields.
      • Field Type: Select the field data type from the drop-down list.
  8. Click Save.
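The key-to-ID mapping above can be sketched as follows, assuming the selected key field values are combined into the document's _id; the field names and the "-" separator are illustrative assumptions, not the product's exact scheme.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hedged sketch: builds a MongoDB-style document map where the values of the
// selected key fields are joined into the _id value. With no key fields
// selected, MongoDB itself would generate an ObjectId on insert.
public class MongoKeySketch {
    public static Map<String, Object> toDocument(Map<String, Object> record,
                                                 List<String> keyFields) {
        Map<String, Object> doc = new LinkedHashMap<>(record);
        if (!keyFields.isEmpty()) {
            StringBuilder id = new StringBuilder();
            for (String k : keyFields) {
                if (id.length() > 0) id.append('-');   // illustrative separator
                id.append(record.get(k));
            }
            doc.put("_id", id.toString());
        }
        return doc;
    }

    public static void main(String[] args) {
        Map<String, Object> rec = new LinkedHashMap<>();
        rec.put("region", "east");
        rec.put("orderId", 42);
        System.out.println(toDocument(rec, List.of("region", "orderId")));
    }
}
```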

Creating a Network File System (NFS) Target

To create an NFS target:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Target and select NFS from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the target. This is a mandatory field.
    • Display Name: Enter a display name for the target. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Target Type: The selected target is displayed.
  4. Click Next.
  5. On the Target Details screen, enter the following details:
    • File Name: Enter the name of the file to be stored on the local file system or NFS. The file name is appended with a timestamp and an extension when finally stored.

    • NFS Path: Enter the shared file path which is accessible from the Spark cluster nodes.

    • File Roll Interval: Enter the roll-over interval to write a new file. The interval can be in 1000ms, 10s, 1m, or 1.5h format.

    • File Roll Max Size: Enter the roll-over file size to create a new file. The size can be in 1000, 10k, 1m, or 1g format.

    • Storage Format: Select a storage format from the drop-down list.

  6. Click Next.
  7. On the Storage Format screen, enter the shape details, based on the storage format you have selected.
    • For FILE:
      • File Format: Select a file format from the drop-down list.

      • Avro Codec: Select a compression codec from the drop-down list. This option is enabled if you have selected the file format as AVRO or AVRO Object Container Format.
    • For PARQUET:
      • PARQUET Compression: Select a compression codec from the drop-down list.
    • For ORC:
      • ORC Compression: Select a compression codec from the drop-down list.
  8. Click Next.
  9. On the Shape screen, select one of the methods to define the shape:
    • Select Existing Shape: Select one of the existing shapes from the drop-down list.

    • Manual Shape: Select this option to manually infer the fields from a stream or file. You can also add to, or remove fields from, an existing shape. Enter the following details:
      • Shape Name: Enter a name for the shape.
      • Clear Fields: Click to delete all the fields in the shape.
      • Field Name: Add the necessary fields.
      • Field Type: Select the field data type from the drop-down list.
  10. Click Save.

Note:

For the JSON, AVRO, and AVRO OCF storage formats, schema files are written to the NFS Path/SCHEMA folder.

Creating a Notification Target

Prerequisite: An OCI connection.

To create a Notification target:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Target and select Notification from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the target. This is a mandatory field.
    • Display Name: Enter a display name for the target. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Target Type: The selected target is displayed.
  4. Click Next.
  5. On the Target Details screen, enter the following details:
    • Connection: Select an OCI connection from the drop-down list.

    • Topic: Enter the OCID of the topic.

    • Data Format: Select a data format. Currently, OSA supports only the JSON data format.

  6. Click Next.
  7. On the Shape screen, you do not have the option to define a shape. An already populated shape is displayed. Enter the following details:
    • Header: Enter the message header.
    • Body: Enter the message body.
  8. Click Save.

Note:

The ONS connection type is no longer supported in GGSA. Recreate older Notification type targets in the pipeline, using an OCI connection.
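
Since the Notification target supports only the JSON data format, the Header and Body values entered on the Shape screen end up serialized as JSON. A minimal sketch of what such a message payload might look like (the field names `title` and `body` are assumptions for illustration; the actual wire format is not specified here):

```python
import json

# Hypothetical sketch of a notification payload. The real message
# structure produced by the target may differ.
def build_notification(header: str, body: str) -> str:
    """Serialize a header/body pair as a JSON message string."""
    return json.dumps({"title": header, "body": body})

message = build_notification("Pipeline alert", "Threshold exceeded for sensor 42")
```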

Creating an OCI Object Store Target

To create an OCI Object Store target:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Target and select Object Storage from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the target. This is a mandatory field.
    • Display Name: Enter a display name for the target. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Target Type: The selected target is displayed.
  4. Click Next.
  5. On the Target Details screen, enter the following details:
    • Connection: Select an OCI connection from the drop-down list.
    • Object Storage File Name: Enter the name of the file written to the Object Storage bucket.
    • Object Storage File Path: Enter the name of the folder to be created in the Object Storage bucket. A new folder is created if there is no existing folder.
    • Object Storage Bucket: Enter the OCI Object Storage bucket.
    • File Roll Interval: Enter the roll-over interval to write a new file. The interval can be in 1000ms, 10s, 1m, or 1.5h format.

    • File Roll Max Size: Enter the roll-over file size to create a new file. The size can be in 1000, 10k, 1m, or 1g format.

    • NFS Path: Enter the local file or NFS path where the files are written first and then uploaded to the Object Storage.

    • Storage Format: Select a storage format from the drop-down list.

  6. Click Next.
  7. On the Data Format screen, enter the shape details, based on the storage format you have selected.
    • For FILE:
      • File Format: Select a file format from the drop-down list.

        • JSON Delimiter: Enter the JSON delimiter if you have selected the JSON file format.
        • Avro Codec: Select a compression codec from the drop-down list. This option is enabled if you have selected the file format as AVRO or AVRO Object Container Format.
    • For PARQUET:
      • PARQUET Compression: Select a compression codec from the drop-down list.
    • For ORC:
      • ORC Compression: Select a compression codec from the drop-down list.
  8. Click Next.
  9. On the Shape screen, select one of the methods to define the shape:
    • Select Existing Shape: Select one of the existing shapes from the drop-down list.

    • Manual Shape: Select this option to manually infer the fields from a stream or file. You can also add to, or remove fields from, an existing shape. Enter the following details:
      • Shape Name: Enter a name for the shape.
      • Clear Fields: Click to delete all the fields in the shape.
      • Field Name: Add the necessary fields.
      • Field Type: Select the field data type from the drop-down list.
  10. Click Save.
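
The File Roll Interval and File Roll Max Size settings above describe a roll-over rule: a new file is started when either the elapsed time or the accumulated file size crosses its limit. A minimal sketch of that rule (a hypothetical helper for illustration, not the product's implementation):

```python
# Hypothetical sketch of the roll-over decision: roll when either the
# time interval or the size cap is reached, whichever comes first.
def should_roll(elapsed_ms: int, size_bytes: int,
                roll_interval_ms: int, max_size_bytes: int) -> bool:
    return elapsed_ms >= roll_interval_ms or size_bytes >= max_size_bytes

# With a 10 s interval and a 10 KiB cap:
assert should_roll(12_000, 500, 10_000, 10_240)     # interval elapsed
assert should_roll(3_000, 20_480, 10_000, 10_240)   # size exceeded
assert not should_roll(3_000, 500, 10_000, 10_240)  # neither limit hit
```

Files are first written to the configured NFS Path and then uploaded to Object Storage, so the roll-over decision applies to the local file before upload.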

Creating an OSS Target

Prerequisite: A Kafka connection.

To create an OSS (Kafka) target:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Target and select Kafka from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the target. This is a mandatory field.
    • Display Name: Enter a display name for the target. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Target Type: The selected target is displayed.
  4. Click Next.
  5. On the Target Details screen, enter the following details:
    • Connection: Select a Kafka connection.

    • Topic name: Enter a name for the Kafka topic.

    • Data Format: Select a data format from the drop-down list.

  6. Click Next.
  7. On the Data Format screen, enter the shape details, based on the data format you have selected.
    • For JSON:
      • Create nested json object: Select this option to create a nested JSON object for the target.

    • For CSV:
      • CSV Predefined Format: Select one of the predefined data formats from the drop-down list. For more information, see Predefined CSV Data Formats.
      • First record as header: Select this option to use the first record as the header row.
  8. Click Next.
  9. On the Shape screen, select one of the methods to define the shape:
    • Infer Shape: Select this option to detect the shape automatically from the input data stream.

    • Select Existing Shape: Select one of the existing shapes from the drop-down list.

    • Manual Shape: Select this option to manually infer the fields from a stream or file. You can also add to, or remove fields from, an existing shape. Enter the following details:
      • For JSON:
        • Shape Name: Enter a name for the shape.
        • Clear Fields: Click to clear all the fields from the existing shape.
        • Key: Select key fields, based on which data is partitioned. For example, records containing the same values for the selected key fields will all be stored in the same Kafka partition.

          You can select multiple fields as key. Key selection is not mandatory.

        • Field Name: Add the necessary fields.
        • Field Path: Enter the field path.

          Note:

          • To retrieve the entire JSON payload, add a new field with path $.
          • To retrieve the content of the array, add a new field with path $[arrayField].

          In both cases, the value returned is of type Text.

        • Field Type: Select the field data type from the drop-down list.
      • For CSV and AVRO:
        • Shape Name: Enter a name for the shape.
        • Clear Fields: Click to clear all the fields from the existing shape.
        • Field Name: Add the necessary fields.
        • Field Type: Select the field data type from the drop-down list.
  10. Click Save.
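
The Field Path note above says that path $ returns the entire JSON payload and $[arrayField] returns the content of an array field, in both cases as Text. A small sketch illustrating that behavior (an illustrative re-implementation, not the product's code):

```python
import json

# Illustrative sketch of the two special field paths described above.
# Both return the extracted value as text, not as a structured value.
def extract(payload: dict, path: str) -> str:
    if path == "$":                      # entire payload, as text
        return json.dumps(payload)
    if path.startswith("$[") and path.endswith("]"):
        field = path[2:-1]               # e.g. "$[readings]" -> "readings"
        return json.dumps(payload[field])
    raise ValueError(f"unsupported path: {path!r}")

payload = {"sensor": "s42", "readings": [1, 2, 3]}
whole = extract(payload, "$")             # whole payload as a JSON string
arr = extract(payload, "$[readings]")     # array content as a JSON string
```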

Creating a REST Target

To create a REST target:

  1. On the Catalog page, click Create New Item.
  2. Hover the mouse over Target and select REST from the submenu.
  3. On the Type Properties screen, enter the following details:
    • Name: Enter a unique name for the target. This is a mandatory field.
    • Display Name: Enter a display name for the target. If left blank, the Name field value is copied.
    • Description
    • Tags
    • Target Type: The selected target is displayed.
  4. Click Next.
  5. On the Target Details screen, enter the following details:
    • URL: Enter the REST service URL.

    • Use SSL: Select this option to enable SSL and basic authentication.

    • Trust Store File > Upload File: Click to upload the Truststore file.

    • Trust Store Password: Enter the truststore password.

      If you do not have the truststore file and password, select Trust Anyway to connect to the REST endpoint.

      Note:

      The Trust Store File and Trust Store Password options allow the use of untrusted certificates for REST connections, resulting in an insecure connection.

    • Trust Anyway: Select this option to supersede the Trust Store File selection.

    • Custom HTTP headers: Set custom HTTP headers in the format key=value[,key2=value2, ....], without quotes. If the endpoint requires authentication, you can pass the credentials as a custom header field.

      An example custom header would be Authorization=Basic XXXXXXXX, where XXXXXXXX is a base64-encoded string of username:password.

    • Batch processing: Select this option to process a batch of events as a single request. Enable this option for high-throughput pipelines. For example:
      [{"address":
               { "street" : "xxxxxxx" }
      },{"address":
               { "street" : "xxxxxxa" }
      }]
    • HTTP Method: Select the HTTP method that the REST target uses to send requests to the REST endpoint: POST or PUT. The default is POST.
    • Data Format: Select a data format from the drop-down list.

  6. Click Next.
  7. On the Data Format screen, enter the shape details, based on the data format you have selected.
    • For JSON:
      • Create nested json object: Select this option to create a nested JSON object for the target. For example, if the target shape is defined as

        field: attribute_street, field_path: address/street

        then the output JSON is
        {"address":
                 { "street" : "xxxxxxx" }
        }
    • For CSV:
      • CSV Predefined Format: Select one of the predefined data formats from the drop-down list. For more information, see Predefined CSV Data Formats.
      • First record as header: Select this option to use the first record as the header row.
  8. Click Next.
  9. On the Shape screen, select one of the methods to define the shape:
    • Infer Shape: Select this option to detect the shape automatically from the input data stream.

    • Select Existing Shape: Select one of the existing shapes from the drop-down list.

    • Manual Shape: Select this option to manually infer the fields from a stream or file. You can also add to, or remove fields from, an existing shape. Enter the following details:
      • For JSON:
        • Shape Name: Enter a name for the shape.
        • Clear Fields: Click to remove all the fields from the shape.
        • Key: Select key fields, based on which data is partitioned. For example, records containing the same values for the selected key fields will all be stored in the same Kafka partition.

          You can select multiple fields as key. Key selection is not mandatory.

        • Field Name: Add the necessary fields.
        • Field Path: Enter the field path.

          Note:

          • To retrieve the entire JSON payload, add a new field with path $.
          • To retrieve the content of the array, add a new field with path $[arrayField].

          In both cases, the value returned is of type Text.

        • Field Type: Select the field data type from the drop-down list.
      • For CSV:
        • Shape Name: Enter a name for the shape.
        • Clear Fields: Click to delete all the fields in the shape.
        • Field Name: Add the necessary fields.
        • Field Type: Select the field data type from the drop-down list.
  10. Click Save.
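
The custom-header example above, Authorization=Basic XXXXXXXX, uses a base64-encoded username:password string. A short sketch of how that header value can be produced (the credentials here are dummies for illustration):

```python
import base64

# Build a Basic-auth custom header in the key=value form the REST
# target expects. The username and password below are placeholders.
def basic_auth_header(username: str, password: str) -> str:
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Authorization=Basic {token}"

header = basic_auth_header("alice", "s3cret")
```

The resulting string can be pasted directly into the Custom HTTP headers field.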