Perform Post-Provisioning Tasks for Integration Analytics Instances

After creating an Integration Analytics instance of Oracle Integration Classic, you may need to complete the following post-provisioning tasks. The features you want to use determine the tasks you need to complete.

You do not have to perform these tasks if you’ve only created instances with the Integrations and Process feature set.

Register an Integration Analytics Instance to Connect Integration Insight to Integrations

Use Registration to link more Oracle Integration features to your current instance. For example, you might receive another URL for an Integration Analytics instance with the Insight and Streams features. You can register it with your current server containing Visual Builder, Processes, and Integrations to see all of their content together.

Integration Analytics instances include Integration Insight and Streams. If you want to use Integration Insight to collect and monitor business-level metrics from an Integrations instance, you must register the Integration Analytics instance to the Integrations instance and vice versa. Registration is not required to use Streams.

Only users assigned the ServiceAdministrator role can access registration.

  1. On the Home page, click Registration in the navigation pane.

    The Registration page opens. The Current Instance section lists the features available in your instance.

  2. Click Register.
  3. In the Register Integration Instance dialog, enter the URL and administrator credentials of the instance you want to register with your current instance, and click Discover.

    The dialog displays information about features discovered.


  4. In the Name field, enter a name for the discovered instance, and click Register.
    • The URLs are linked and the newly registered features display on the Registration page.

    • The navigation pane reflects the newly registered features.

    • The Home page now displays tiles for the newly registered features.

  5. Repeat steps 1–4 on the other server, so that features are registered on each server.

    You must perform registration on both servers to see all content on both servers.

Post-Provisioning Tasks for Stream Analytics

You must complete the following tasks, in sequence, to use the Stream Analytics feature after creating an Integration Analytics instance of Oracle Integration Classic.

You don’t have to complete these tasks if you have not created any Integration Analytics instances or if you are not using Stream Analytics.

Enable Direct Access to Stream Analytics

Stream Analytics is included with Integration Analytics. You can bypass the load balancer when accessing Stream Analytics by disabling redirection to the load balancer URL and providing access rules that open the listen port of the managed server on which Stream Analytics is deployed.

Configure the Frontend Host and HTTPS Port

  1. Log in to the Oracle WebLogic Server Administration Console. You can access this console from the Oracle Cloud Infrastructure Console. See Explore the Oracle Cloud Infrastructure Console.

  2. In the navigation pane, select Environment > Clusters > cluster_name > HTTP.

  3. Click Lock and Edit.

  4. Leave the Frontend Host field blank.

  5. Enter 0 in the Frontend HTTPS Port field.

  6. Click View changes and restarts.

  7. Click Restart Checklist.

  8. Select Server Restart Checklist.

  9. Click Restart.

  10. Log out of Oracle WebLogic Server Administration Console.
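
If you prefer to script this change instead of clicking through the console, the following WLST (Jython) sketch shows the same configuration. It is a minimal sketch, not part of the documented procedure: the administrator credentials, admin server URL, and cluster name are placeholders you must replace with your own values.

    # WLST (Jython) sketch: clear the frontend host and set the frontend HTTPS port to 0.
    # The credentials, admin server URL, and cluster name below are placeholders.
    connect('weblogic_admin', 'admin_password', 't3://admin_host:7001')

    edit()
    startEdit()

    # Navigate to the cluster's configuration MBean (replace cluster_name with your cluster).
    cd('/Clusters/cluster_name')
    cmo.setFrontendHost('')        # leave the Frontend Host blank
    cmo.setFrontendHTTPSPort(0)    # set the Frontend HTTPS Port to 0

    save()
    activate(block='true')
    disconnect()

As with the console steps above, restart the affected servers when prompted so that the change takes effect.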

Access Stream Analytics Directly with the Oracle WebLogic Server Public IP Address

  1. Access Stream Analytics with the direct public IP and port: WLS_Public_IP:8002/ic/streams.

  2. Log in using the Oracle Identity Cloud Service (IDCS) user name.

Add Access Rules

Access rules control network access to service components. You must add access rules that allow you to reach Stream Analytics.

To add access rules:
  1. Log in to the PaaS Service Manager Console.
  2. Click the Manage this service icon and select Access Rules.
  3. Launch the Oracle Big Data Cloud Service application.
  4. Select Access Rules under Stream BDCSE within Platform Services.
  5. Click Create Rule and fill in all the required details.
    You can see the rule you have added in the list of access rules.

Configure Yarn Resource Manager

Update the configuration variables in the BDCSCE cluster (Spark + Hadoop) to complete the Yarn Resource Manager configuration.

To configure Yarn Resource Manager:
  1. Launch the Ambari application by entering its URL in the browser in the form http://active-bdcsce-MASTER-1-node-ipaddress:8080, where active-bdcsce-MASTER-1-node-ipaddress is the IP address of the Spark master node.
  2. Click YARN in the left pane.
  3. Go to the Configs tab.
  4. Go to the Advanced tab.
  5. Choose the relevant category.
  6. Add, update, or delete the required properties.
  7. Click Save.
    While saving, restart all the affected components when prompted.

Identify Kafka and Yarn Resource URLs

You need to identify and obtain the Kafka and Yarn resourcemanager URLs.

One of the BDCS instances runs the Yarn resourcemanager user interface on port 8088. There is no definite way to find which instance is running this user interface, so you must use trial and error by trying each BDCS instance at IP:8088. Click the configuration to open the XML page that contains the address of the Yarn resourcemanager.

Look for the yarn.resourcemanager.address property to get the YARN resourcemanager URL. This property contains the instance and port details.

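Because the resourcemanager instance must be found by trial and error, a small script can do the probing for you. The following Python sketch is illustrative only: it assumes the requests library is available, that the candidate IP addresses are replaced with your BDCS instance public IPs, and that the resourcemanager web UI exposes its configuration XML at /conf (the standard Hadoop behavior).

    # Python sketch: probe BDCS instance IPs for the Yarn resourcemanager UI (port 8088)
    # and read yarn.resourcemanager.address from the configuration XML it serves at /conf.
    import xml.etree.ElementTree as ET
    import requests

    candidate_ips = ['192.0.2.10', '192.0.2.11', '192.0.2.12']  # placeholder BDCS instance IPs

    for ip in candidate_ips:
        try:
            resp = requests.get('http://{}:8088/conf'.format(ip), timeout=5)
            resp.raise_for_status()
        except requests.RequestException:
            continue  # this instance is not running the resourcemanager UI
        root = ET.fromstring(resp.content)
        for prop in root.findall('property'):
            if prop.findtext('name') == 'yarn.resourcemanager.address':
                print('Resourcemanager found on {}: {}'.format(ip, prop.findtext('value')))
                break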

Configure System Settings in Stream Analytics

After you provision Stream Analytics, you must configure system settings before you can start using it.

To configure system settings:
  1. Launch Stream Analytics. See Enable Direct Access to Stream Analytics.
  2. Log in to the application.
  3. Click the User Name in the top-right corner and select System Settings.

    The System Settings are not auto-configured. You can get these details from the BDCS and OEHPCS instances.


  4. Enter Kafka Zookeeper Connection details.
    The IP address is the public IP address used for OEHCS. Obtain the port from the Access Rules page by looking for the zookeeper port entry in the list. A connectivity check is sketched after these steps.
    1. Configure Kafka Zookeeper on the Access Rules page.
    2. Configure Kafka Broker on the Access Rules page.
  5. Select the Runtime Server. Select either Yarn or Spark Standalone based on your requirement.

    If you select Yarn as the server, specify the following parameters:

    1. Enter YARN Resource Manager URL.

      One of the BDCS instances runs the Yarn resourcemanager user interface on port 8088. As described in Identify Kafka and Yarn Resource URLs, try each BDCS instance at IP:8088, open the configuration XML page, and look for the yarn.resourcemanager.address property to get the YARN resourcemanager URL.

    2. Enter the Storage details.

      Try the BDCS instance public IP with port 50070 to reach the WebHDFS explorer. The NameNode marked as active can be configured for storage. A WebHDFS check is sketched after these steps.

    3. Enter the Path.
      • WebHDFS: IP_Address:50070/directory_name

      • HDFS: IP_Address/directory_name

    4. Select the Hadoop Authentication mode and enter the required credentials.
    5. Set the HA Namenodes.

    If you select Spark Standalone as the server, you need to specify the following parameters:

    1. Enter the Spark REST URL.
    2. Select the applicable Storage. NFS is the default option for Spark Standalone.
    3. Specify the Path for the storage.
  6. Click Save.
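
The Kafka Zookeeper Connection value in step 4 is a host:port pair, so a quick TCP check can confirm that the OEHCS public IP and the ZooKeeper or broker port you plan to enter are reachable. This is a minimal Python sketch; the IP address and ports below are placeholders for the values from your Access Rules page (2181 and 9092 are only the usual defaults).

    # Python sketch: confirm that the ZooKeeper and Kafka broker endpoints entered in
    # System Settings are reachable. The host and ports below are placeholders.
    import socket

    def is_reachable(host, port, timeout=5):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Replace with the OEHCS public IP and the ports listed on the Access Rules page.
    print(is_reachable('192.0.2.20', 2181))  # ZooKeeper port
    print(is_reachable('192.0.2.20', 9092))  # Kafka broker port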
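
For the Yarn storage and path settings in step 5, the standard WebHDFS REST API can confirm that the NameNode address and directory you plan to enter are valid. The following Python sketch assumes the requests library and uses placeholder values for the NameNode IP and directory name; it lists the directory through the WebHDFS LISTSTATUS operation on port 50070.

    # Python sketch: verify the WebHDFS storage path by listing the target directory.
    # The NameNode IP and directory name below are placeholders.
    import requests

    namenode_ip = '192.0.2.30'   # placeholder: public IP of the active NameNode
    directory = 'osa_storage'    # placeholder: directory name used in the Path field

    # Standard WebHDFS REST call: list the directory's contents.
    url = 'http://{}:50070/webhdfs/v1/{}?op=LISTSTATUS'.format(namenode_ip, directory)
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()

    for entry in resp.json()['FileStatuses']['FileStatus']:
        print(entry['pathSuffix'], entry['type'])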