2 Running Applications and Utilities Outside Pods

Learn how to run applications, utilities, and scripts on demand in Oracle Communications Billing and Revenue Management (BRM) cloud native without entering a pod by running configurator and brm-apps jobs.

Topics in this document:

Running Load Utilities through Configurator Jobs

Running Load Utilities on Multischema Systems

Running Applications and Utilities through brm-apps Jobs

Configuring MTA Performance Parameters

Running Custom Applications and Utilities through brm-apps

Running Business Operations through pin_job_executor Service

You can run BRM load utilities on demand without entering into a pod by running a configurator job. For a list of utilities supported by the configurator job, see "Supported Load Utilities for Configurator Jobs".

To run BRM load utilities through configurator jobs:

  1. Update the oc-cn-helm-chart/config_scripts/loadme.sh script with the list of load utilities that you want to run. The input follows this general syntax:

    #!/bin/sh 
    
    cd runDirectory; utilityCommand configFile 
    exit 0;

    where:

    • runDirectory is the directory from which to run the utility.

    • utilityCommand is the utility command to run at the command line.

    • configFile is the file name and path to any input files the utility requires.

  2. Move any required input files to the oc-cn-helm-chart/config_scripts directory.

    If the input file is an XML file that references an XSD file by path, modify the XML file to refer to the path inside the container. If the XML file references just an XSD file name, move the XSD file along with the XML file.

  3. Enable the configurator job. In your override-values.yaml file for oc-cn-helm-chart, set ocbrm.config_jobs.run_apps to true:

    ocbrm:
       config_jobs:
          run_apps: true
  4. Run the helm upgrade command to update the release:

    helm upgrade BrmReleaseName oc-cn-helm-chart --values OverrideValuesFile -n BrmNameSpace

    where:

    • BrmReleaseName is the release name for oc-cn-helm-chart and is used to track this installation instance.

    • OverrideValuesFile is the file name and path to your override-values.yaml file.

    • BrmNameSpace is the namespace in which to create BRM Kubernetes objects for the BRM Helm chart.

    The utilities specified in the loadme.sh script are run. To verify that the job completed, see the sketch after this procedure.

  5. If the utility requires the CM to be restarted, do this:

    1. Update these keys in the override-values.yaml file for oc-cn-helm-chart:

      • ocbrm.config_jobs.restart_count: Increment the existing value by 1
      • ocbrm.config_jobs.run_apps: Set this to false
    2. Update the Helm release again:

      helm upgrade BrmReleaseName oc-cn-helm-chart --values OverrideValuesFile -n BrmNameSpace
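
To confirm that the configurator job completed and to review the utility output, you can inspect the job and its pod logs. This is a minimal sketch: the kubectl commands are standard, but the job's exact name depends on your chart version, so list the jobs first.

# List the jobs in the BRM namespace to find the configurator job's name
kubectl get jobs -n BrmNameSpace

# View the job's pod logs (substitute the job name found above)
kubectl logs job/ConfiguratorJobName -n BrmNameSpace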

Running pin_bus_params and load_pin_device_state

This example shows how to set up the configurator job to run the pin_bus_params and load_pin_device_state utilities.

To run pin_bus_params and then run load_pin_device_state:

  1. Add the following lines to the oc-cn-helm-chart/config_scripts/loadme.sh script:

    #!/bin/sh 
    
    cd /oms/sys/data/config; pin_bus_params -v /oms/load/bus_params_billing_flow.xml 
    cd /oms/sys/data/config; load_pin_device_state -v /oms/sys/data/config/pin_device_state_num
    exit 0;
  2. Move the bus_params_billing_flow.xml and pin_device_state_num input files to the oc-cn-helm-chart/config_scripts directory.

  3. In the override-values.yaml file for oc-cn-helm-chart, set ocbrm.config_jobs.run_apps to true.

  4. Run the helm upgrade command to update the release:

    helm upgrade BrmReleaseName oc-cn-helm-chart --values OverrideValuesFile -n BrmNameSpace
  5. Because pin_bus_params requires a CM restart, restart the CM:

    1. Set these keys in the override-values.yaml file (shown in YAML form after this procedure):

      • ocbrm.config_jobs.restart_count: Increment the existing value by 1
      • ocbrm.config_jobs.run_apps: Set this to false
    2. Update the Helm release again:

      helm upgrade BrmReleaseName oc-cn-helm-chart --values OverrideValuesFile -n BrmNameSpace
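
For reference, the settings from step 5 look like this in your override-values.yaml file (assuming restart_count was previously 0):

ocbrm:
   config_jobs:
      restart_count: 1
      run_apps: false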

Running Load Utilities on Multischema Systems

When you use the configurator job to load configuration data into a multischema system, you load the configuration data into the primary schema.

To load configuration data on a multischema system:

  1. Update the oc-cn-helm-chart/config_scripts/loadme.sh script with the list of load utilities that you want to run. The input follows this general syntax:

    #!/bin/sh 
    
    cd runDirectory; utilityCommand configFile 
    exit 0;
  2. Move any required input files to the oc-cn-helm-chart/config_scripts directory.

    If the input file is an XML file that references an XSD file by path, modify the XML file to refer to the path inside the container. If the XML file references just an XSD file name, move the XSD file along with the XML file.

  3. Enable the configurator job and disable multischema processing in it.

    In your override-values.yaml file for oc-cn-helm-chart, set these keys:

    ocbrm:
       config_jobs:
          run_apps: true
          isMultiSchema: false
  4. Run the helm upgrade command to update the BRM Helm release:

    helm upgrade BrmReleaseName oc-cn-helm-chart --values OverrideValuesFile -n BrmNameSpace

    The utilities specified in the loadme.sh script are run.

  5. If the utility requires the CM to be restarted, do this:

    1. Update these keys in the override-values.yaml file for oc-cn-helm-chart:

      • ocbrm.config_jobs.restart_count: Increment the existing value by 1
      • ocbrm.config_jobs.run_apps: Set this to false
    2. Update the BRM Helm release again:

      helm upgrade BrmReleaseName oc-cn-helm-chart --values OverrideValuesFile -n BrmNameSpace

Running Applications and Utilities through brm-apps Jobs

You can run applications and utilities on demand without entering a pod through a brm-apps job. For a list of utilities and applications supported by the brm-apps job, see "Supported Utilities and Applications for brm-apps Jobs".

To run BRM applications through a brm-apps job:

  1. Update the oc-cn-helm-chart/brmapps_scripts/loadme.sh script to include the applications and utilities that you want to run (see the example after this procedure). The input follows this general syntax:

    #!/bin/sh 
    
    cd runDirectory; utilityCommand configFile 
    exit 0;

    where:

    • runDirectory is the directory from which to run the application or utility.

    • utilityCommand is the utility or application command to run at the command line.

    • configFile is the file name and path to any input files the application or utility requires.

  2. Move any required input files to the oc-cn-helm-chart/brmapps_scripts directory.

  3. Enable the brm-apps job. In your override-values.yaml file for oc-cn-helm-chart, set ocbrm.brm_apps.job.isEnabled to true.
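
    In YAML form, this looks like:

    ocbrm:
       brm_apps:
          job:
             isEnabled: true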

  4. If you run a multithreaded application (MTA), configure the performance parameters in your override-values.yaml file. For more information, see "Configuring MTA Performance Parameters".

  5. Run the helm upgrade command to update the BRM Helm release:

    helm upgrade BrmReleaseName oc-cn-helm-chart --values OverrideValuesFile -n BrmNameSpace

    where:

    • BrmReleaseName is the release name for oc-cn-helm-chart and is used to track this installation instance.

    • OverrideValuesFile is the file name and path to your override-values.yaml file.

    • BrmNameSpace is the namespace in which to create BRM Kubernetes objects for the BRM Helm chart.

    The applications and utilities specified in the loadme.sh script are run.
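
For example, a loadme.sh script that runs the pin_bill_day billing application might look like the following. This is a sketch: the run directory (/oms/apps/pin_billd) is an assumption based on the standard BRM directory layout, so adjust it to match your image.

#!/bin/sh

# Run daily billing from its apps directory (path assumed; verify in your image)
cd /oms/apps/pin_billd; pin_bill_day
exit 0;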

Configuring MTA Performance Parameters

You can configure the performance of multithreaded applications (MTA), such as pin_bill_accts and pin_export_price, from outside the Kubernetes cluster. To do so, edit these MTA-related keys in your override-values.yaml file for oc-cn-helm-chart:

  • mtaChildren: Governs how many child threads process data in parallel. Each child thread fetches and processes one account from the queue before it fetches the next one.

    You can increase the number of child threads to improve application performance when the database server remains under-utilized even though you have a large number of accounts. If you increase the number of children beyond the optimum, performance suffers from context switching. This is often indicated by a higher system time with no increase in throughput. Performance is best when the number of children is nearly equal to the number of DM backends, and most backends are dedicated to processing transactions.

  • mtaPerBatch: Specifies the number of payment transactions the pin_collect utility sends to dm_fusa in a batch. For example, if you have 20,000 payments to process and the mtaPerBatch key is set to 5000, the pin_collect utility sends four batches to dm_fusa (each batch containing 5,000 payment transactions).

    Note:

    This key impacts the performance of the pin_collect application only. It has minimal impact on other applications.

  • mtaPerStep: Specifies how much data to store in dm_oracle when the application performs a step search. It does not significantly impact performance but governs memory usage in dm_oracle. It also prevents BRM from using all of its memory for one large search.

    A 64-bit dm_oracle can use reasonably large values. A typical mtaPerStep value for invoice utilities would be between 10,000 and 50,000.

  • mtaFetchSize: Specifies the number of account records to retrieve from the database and hold in memory before the utility starts processing them. In general, this value should be as large as possible to reduce the number of fetches from the database.

    The maximum possible fetch size depends on the complexity of the application's search results. When running applications on parent accounts (pay_type 10001), the mtaFetchSize value refers to the number of parent accounts to retrieve. For example, if you have 10,000 parent accounts and each account has an average of 50 children, you would set mtaFetchSize to 10,000 to retrieve all parent accounts. When running applications on only the children (pay_type 10007), you would set mtaFetchSize to 500,000 to retrieve all child accounts.

The MTA-related keys are nested under the ocbrm.brm_apps.deployment.DirectoryName section in your override-values.yaml file:

ocbrm:
   brm_apps:
      deployment:
         DirectoryName:
            mtaChildren: 5
            mtaPerBatch: 500
            mtaPerStep: 1000
            mtaFetchSize: 5000

where DirectoryName is the name of the directory in which the application resides, such as pin_collections for the pin_collect application or pin_billd for the pin_bill_day application. The directory name for each application is listed in "Supported Utilities and Applications for brm-apps Jobs".

If you modify these keys, you must run the helm upgrade command for the changes to take effect. See "Updating a Helm Release".

Running Custom Applications and Utilities through brm-apps

You can configure your BRM cloud native environment to run custom applications and utilities through a brm-apps job. To do so:

  1. Identify all binaries, libraries, and configuration files required for your custom utility.

  2. Layer the binaries and libraries on top of the brm-apps image.

    If any configuration needs to be done when the container starts, modify the entrypoint.sh script and layer it while building the brm-apps image.

  3. Convert any configuration files into ConfigMaps.

Example: Running pin_billing_custom

This example shows how to set up a custom utility named pin_billing_custom to run through a brm-apps job.

  1. Convert the utility's pin.conf configuration file into a ConfigMap, which will be mounted inside the container in the path /oms/custom_pin.conf.

    For information about converting a pin.conf file into a ConfigMap, refer to any configmap_pin_conf file in the oc-cn-helm-chart/template directory.

  2. Copy the entrypoint.sh script from the oc-cn-docker-files directory to the /oms directory.

  3. In the entrypoint.sh script, under the brm-apps section, add a line for copying the /oms/custom_pin.conf file to the apps/pin_billing_custom directory.
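
    For example, the added line might look like this sketch (copying the file as pin.conf is an assumption based on how BRM utilities read their configuration; match it to your entrypoint.sh structure):

    # Copy the mounted custom configuration into the utility's run directory
    cp /oms/custom_pin.conf /oms/apps/pin_billing_custom/pin.conf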

  4. Layer the pin_billing_custom binary, the modified entrypoint.sh script, and the apps/pin_billing_custom directory into a brm-apps image by creating this dockerfile_custom_brm_apps file:

    Note:

    Ensure that the scripts and binaries have execute permission.

    vi dockerfile_custom_brm_apps
        FROM brm_apps:15.0.x.0.0
        USER root
        COPY pin_billing_custom /oms/bin/
        RUN mkdir /oms/apps/pin_billing_custom
        COPY entrypoint.sh /oms/
        RUN chown -R omsuser:oms /oms/bin/pin_billing_custom /oms/apps/pin_billing_custom /oms/entrypoint.sh && \
            chmod -R 755 /oms/bin/pin_billing_custom /oms/apps/pin_billing_custom /oms/entrypoint.sh
        USER omsuser
  5. Build the image by entering this command:

    podman build --format docker --tag brm_apps:15.0.x.0.0-custom --file dockerfile_custom_brm_apps .
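
    If your Kubernetes nodes pull images from a registry, tag and push the custom image there. RepoHost below is a placeholder for your registry host:

    podman tag brm_apps:15.0.x.0.0-custom RepoHost/brm_apps:15.0.x.0.0-custom
    podman push RepoHost/brm_apps:15.0.x.0.0-custom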
  6. Update the oc-cn-helm-chart/template/brm_apps_job.yaml file to mount the ConfigMap in the container:

    volumeMounts:
    - name: brm-apps-custom-pin-conf
      mountPath: /oms/custom_pin.conf
      subPath: pin.conf
    volumes:
    - name: brm-apps-custom-pin-conf
      configMap:
        name: brm-apps-custom-conf
  7. Add the pin.conf file entries to the ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: brm-apps-custom-conf
      namespace: {{ .Release.Namespace }}
      labels:
        application: {{ .Chart.Name }}
    data:
      pin.conf: |
        #************************************************************************
        pin.conf content here
        #************************************************************************
  8. Update the brm-apps image tag in your override-values.yaml file to the custom tag that you built (15.0.x.0.0-custom in this example).
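
    For example, if your chart references the brm-apps image through an imageTag key, the override might look like the following sketch. The exact key name varies by chart version, so confirm it against the chart's values.yaml file:

    ocbrm:
       brm_apps:
          deployment:
             imageTag: 15.0.x.0.0-custom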

Running Business Operations through pin_job_executor Service

You can run business operations, such as billing and payment collections, in BRM cloud native environments in the following ways:

  • Using the brm-apps pod to run the pin_job_executor utility as a service named pje in the pje pod. The pje service processes business operations jobs or runs the pin_virtual_time utility. The pin_job_executor service port is exposed as ClusterIP, and the host name and service name of the brm-apps pod are both pje (see the verification command at the end of this section).

  • Using the boc pod or another client application to call the PCM_OP_JOB_EXECUTE opcode. In this case, the opcode request goes to the CM, which connects to the pje pod through the pin_job_executor service. The pin_job_executor service processes the opcode request and calls the appropriate BRM application.

    For more information, see "Job Opcode Workflows" in BRM Opcode Guide.
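
To confirm that the pje service is reachable inside the cluster, you can check the Service object. This is a minimal sketch; BrmNameSpace is the same namespace placeholder used in the earlier helm commands.

# The pin_job_executor service is exposed as ClusterIP with service name pje
kubectl get svc pje -n BrmNameSpace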