11 Scaling Node Manager Pods

Learn how to scale up and scale down the Node Manager pods in your Oracle Communications Offline Mediation Controller cloud native deployment.

Scaling Up Node Manager Pods

You can scale up the number of Node Manager pod replicas in your Offline Mediation Controller cloud native environment based on the pod's CPU or memory utilization. This helps ensure that your Node Manager pods have enough capacity to handle the current traffic demand while still controlling costs.

Note:

If your node chains include duplicate check EP nodes or AP nodes, follow the instructions in "Scaling CC, EP, and DC Nodes without Impacting Non-Scalable Nodes" before you start this procedure.

You scale up Node Manager pods by creating a scale up job, which runs as part of a post-upgrade or postinstall hook.

You scale up Node Manager pods as follows:

  1. If you are running the scale up job as part of a post-upgrade hook and the cartridge JARs are not part of the Offline Mediation Controller class path, do the following:

    1. Place the cartridge JARs in the directory specified in the ocomcCore.ocomc.configEnv.cartridgeFolder key, which can be set to a directory on the external PV.

    2. In your override-values.yaml file for oc-cn-ocomc, increment the ocomcCore.ocomc.configEnv.restart_count key by 1.

    3. Run the helm upgrade command to update the Offline Mediation Controller Helm release:

      helm upgrade ReleaseName oc-cn-ocomc --namespace NameSpace --values OverrideValuesFile

      where:

      • ReleaseName is the release name, which is used to track this installation instance.

      • NameSpace is the namespace in which to create Offline Mediation Controller Kubernetes objects. To integrate the Offline Mediation Controller cloud native deployment with the ECE and BRM cloud native deployments, they must be set up in the same namespace.

      • OverrideValuesFile is the path to a YAML file that overrides the default configurations in the chart's values.yaml file.

      All Offline Mediation Controller components are restarted.
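
    For reference, the keys from the previous substeps might look like the following in your override-values.yaml file. The cartridge folder path and counter value shown here are examples only; use your own external-PV directory and increment restart_count from its current value.

      ocomc:
         configEnv:
            cartridgeFolder: /home/ocomcuser/ext/cartridgeJars   # example directory on the external PV
            restart_count: 2                                     # previous value incremented by 1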

  2. If you are running the scale up job as part of a postinstall hook and your node chain configuration files include cartridge JARs, do the following:

    1. In your override-values.yaml file, set the ocomcCore.ocomc.configEnv.cartridgeFolder key to /home/ocomcuser/cartridgeJars/.

    2. Place the cartridge JARs in the /home/ocomcuser/cartridgeJars/ directory by creating a Dockerfile similar to the following:

      FROM oc-cn-ocomc:15.0.0.x.0
      RUN mkdir -p /home/ocomcuser/cartridgeJars/
      COPY custom_cartridge.jar /home/ocomcuser/cartridgeJars/
  3. Open your override-values.yaml file for oc-cn-ocomc.

  4. Specify the number of Node Manager pods to create:

    • If your Node Manager pods use different data PVs, set the ocomcCore.ocomc.deployment.nodemgr.count key to the desired number of Node Manager pods. For example, to increase the number of pods to 3:

      ocomc:
         deployment:
            nodemgr:
               count: 3
    • If your Node Manager pods use the same data PV, set the ending number for the range of Collection Cartridge (CC) Node Manager pods to create (ccNMRangeEnd); the starting number is always 1. Also, set the starting and ending numbers for the range of Enhancement Processor (EP) and Distribution Cartridge (DC) Node Manager pods to create (epdcNMRangeStart and epdcNMRangeEnd). This range must include both scalable and non-scalable EP Node Manager pods.

      For example:

      ocomc:
         deployment:
            nodemgr:
               ccNMRangeEnd: 2
               epdcNMRangeStart: 100
               epdcNMRangeEnd: 100

      In this case, the following Node Manager pods would be created: node-mgr-app (CC Node Manager), node-mgr-app-2 (CC Node Manager), and node-mgr-app-100 (EP and DC Node Manager).

      Note:

      • The number ranges for the CC Node Manager pods and the EP and DC Node Manager pods should not overlap.

      • The range of EP and DC Node Manager pods to create must include both scalable and non-scalable EP Node Managers.

      • The non-scalable EP Node Manager must be the first or last Node Manager pod in the EPDC range.

  5. Configure the scale up job:

    • If your Node Manager pods use different data PVs, set the following keys under ocomc.job:

      • scaleUpNMDifferentDataPV.flag: Set this to true.

      • scaleUpNMDifferentDataPV.parent_NM: Set this to the Node Manager pod to replicate in the format Mediation_name@Mediation_host:Port, where Mediation_name is the mediation host's name configured in Node Manager, Mediation_host is the host name of the server on which the mediation host resides, and Port is the port number at which the mediation host communicates with the Node Manager pod.

      • scaleUpNMDifferentDataPV.exportConfigFile: Set this key only if you copied node chain configuration files onto the data PV. Set this to the absolute path and file name of your node chain configuration files, but do not include the file name extension.

        For example, if the files are named export.nmx and export.xml, and they reside in /home/ocomcuser/ext, you would set exportConfigFile to /home/ocomcuser/ext/export.
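
      Putting these keys together, a minimal scale up job configuration for different data PVs might look like the following sketch. The mediation host name, pod host name, and port are placeholders; substitute the values from your own deployment, and omit exportConfigFile if you did not copy configuration files onto the data PV.

      ocomc:
         job:
            scaleUpNMDifferentDataPV:
               flag: true
               parent_NM: node-mgr-app@node-mgr-app:55109      # placeholder Mediation_name@Mediation_host:Port
               exportConfigFile: /home/ocomcuser/ext/export    # only if node chain configuration files are on the data PV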

    • If your Node Manager pods use the same data PV, set the following keys under ocomc.job:

      • scaleUpNMSameDataPV.flag: Set this to true.

      • scaleUpNMSameDataPV.CC_NM: Set this to the Node Manager that contains all of your Collection Cartridge (CC) nodes in the format CCNM_name@CCNM_host:Port, where CCNM_name is the mediation host's name configured in Node Manager, CCNM_host is the host name of the server on which the mediation host resides, and Port is the port number at which the mediation host communicates with the CC Node Manager pod.

      • scaleUpNMSameDataPV.EPDC_NM: Set this to the Node Manager that contains all of your Enhancement Processor (EP) and Distribution Cartridge (DC) nodes in the format EPDC_name@EPDC_host:Port, where EPDC_name is the mediation host's name configured in Node Manager, EPDC_host is the host name of the server on which the mediation host resides, and Port is the port number at which the mediation host communicates with the EP and DC Node Manager pods.

        Note:

        List only Node Managers with scalable nodes. That is, don't list any Node Managers with Duplicate Check EP nodes or AP nodes.

      • scaleUpNMSameDataPV.exportConfigFile: Set this key only if you copied node chain configuration files onto the data PV. Set this to the absolute path and file name of your node chain configuration files, but do not include the file name extension.

        For example, if the files are named export.nmx and export.xml, and they reside in /home/ocomcuser/ext, you would set exportConfigFile to /home/ocomcuser/ext/export.

      • scaleUpNMSameDataPV.importNMList: Set this to a comma-separated list of Node Manager pods to import, in the order in which they appear in the node chain. For example: CCNM_name@CCNM_host:Port, EPDC_name@EPDC_host:Port.

        Note:

        The Node Manager pods must be listed in the order in which they appear in the node chain.
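
      Putting these keys together, a minimal scale up job configuration for a shared data PV might look like the following sketch. The Node Manager names, host names, and port are placeholders; substitute your own values, and omit exportConfigFile if you did not copy configuration files onto the data PV.

      ocomc:
         job:
            scaleUpNMSameDataPV:
               flag: true
               CC_NM: node-mgr-app@node-mgr-app:55109            # placeholder CCNM_name@CCNM_host:Port
               EPDC_NM: node-mgr-app-100@node-mgr-app-100:55109  # placeholder EPDC_name@EPDC_host:Port
               exportConfigFile: /home/ocomcuser/ext/export      # only if node chain configuration files are on the data PV
               importNMList: node-mgr-app@node-mgr-app:55109,node-mgr-app-100@node-mgr-app-100:55109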

  6. Save and close the file.

  7. If you are running the scale up job as a postinstall hook, do the following:

    1. Create a Dockerfile that is similar to the following:

      FROM oc-cn-ocomc:15.0.0.x.0
      RUN mkdir -p /home/ocomcuser/customFiles/
      COPY export_20210311_024702.xml /home/ocomcuser/customFiles
      COPY export_20210311_024702.nmx /home/ocomcuser/customFiles
    2. Run the helm install command:

      helm install ReleaseName oc-cn-ocomc --namespace NameSpace --values OverrideValuesFile --timeout 15m

      Before scaling up the pods, the job confirms that the desired Node Manager pods are up and running, that a connection with the Administration Server has been established, and that all Node Manager hosts are reachable.
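
      In this example, assuming the Dockerfile above copied export_20210311_024702.xml and export_20210311_024702.nmx into /home/ocomcuser/customFiles, the corresponding exportConfigFile key would reference the files without their extension, for example:

      ocomc:
         job:
            scaleUpNMSameDataPV:    # or scaleUpNMDifferentDataPV, depending on your data PV layout
               exportConfigFile: /home/ocomcuser/customFiles/export_20210311_024702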

  8. If you are running the scale up job as a post-upgrade hook, run the helm upgrade command:

    helm upgrade ReleaseName oc-cn-ocomc --namespace NameSpace --values OverrideValuesFile --timeout 15m

    Before scaling up the pods, the job confirms that the desired Node Manager pods are up and running, that a connection with the Administration Server has been established, and that all Node Manager hosts are reachable.

  9. In your override-values.yaml file for oc-cn-ocomc, set the ocomcCore.ocomc.job.scaleUpNMDifferentDataPV.flag or ocomcCore.ocomc.job.scaleUpNMSameDataPV.flag key to false so that the jobs are not run again the next time you update the Helm release.
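
    For example, if you ran the same data PV job:

      ocomc:
         job:
            scaleUpNMSameDataPV:
               flag: false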

You can check the job's status in one of these log files:

  • OMC_home/log/scaleUpNMSegregatedDataPV-Date.log

  • OMC_home/log/scaleUpNMSameDataPV-Date.log

Scaling Down Node Manager Pods

Note:

Scaling down Node Manager pods is supported in Offline Mediation Controller 15.0.

You scale down Node Manager pods in your Offline Mediation Controller cloud native environment by removing the Node Manager from the Administration Server, bringing down the Node Manager pods from the Kubernetes cluster, and then cleaning up the Offline Mediation Controller installation.

Note:

If your node chains include duplicate check EP nodes or AP nodes, follow the instructions in "Scaling CC, EP, and DC Nodes without Impacting Non-Scalable Nodes" before you start this procedure.

To scale down the number of Node Manager pods:

  1. Open your override-values.yaml file for oc-cn-ocomc.

  2. Specify to wait for the Enhancement Processor (EP) and Distribution Cartridge (DC) nodes to finish processing all input network accounting records (NARs) before shutting down cartridges. To do so, set the following keys under ocomcCore.ocomc.configEnv:

    • forceWaitForNARSToBeProcessed: Set this to true.

    • waitForNarsToProcessInSecs: Set this to a value between 1 and 60. This is the amount of time, in seconds, to wait for NARs to reach the input of a cartridge and then be processed.
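
    For example, the following settings wait up to 30 seconds for NARs to be processed (30 is an example value within the allowed range of 1 to 60):

      ocomc:
         configEnv:
            forceWaitForNARSToBeProcessed: true
            waitForNarsToProcessInSecs: 30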

  3. Specify the number of Node Manager pods to run by doing one of the following:

    • If your Node Manager pods use different data PVs, set the ocomcCore.ocomc.deployment.nodemgr.count key to the total number of pods that should remain up and running. For example, if the count was previously 4, you can scale down the number of pods to 3 or fewer.

    • If your Node Manager pods use the same data PV, configure the following keys under ocomcCore.ocomc.deployment.nodemgr:

      • ccNMRangeEnd: The ending number of the range of CC Node Manager pods. (The starting number is always 1.)

      • epdcNMRangeStart: The starting number of the range of EP and DC Node Manager pods.

      • epdcNMRangeEnd: The ending number of the range of EP and DC Node Manager pods. This range must include both scalable and non-scalable EP Node Manager pods. Also, the non-scalable EP Node Manager must be the first or last Node Manager pod in the EPDC range.

      For example, if ccNMRangeEnd is 2, epdcNMRangeStart is 100, and epdcNMRangeEnd is 100, the following Node Manager pods would remain running:

      • node-mgr-app (CC Node Manager)

      • node-mgr-app-2 (CC Node Manager)

      • node-mgr-app-100 (EP and DC Node Manager)

    Note:

    The ranges for the CC Node Manager pods and the EP and DC Node Manager pods should not overlap.

  4. Specify to run the Node Manager scale down and post scale down jobs. To do so, set the following keys under ocomc.job:

    • scaleDownNM.flag: Set this to true.

    • scaleDownNM.startAllNodes: Set this to yes if all cartridges in the Administration Server need to be started. Otherwise, set this to no.

    • postScalingDownNM.flag: Set this to true.

    • postScalingDownNM.backupNeeded: Specify whether to create a backup of your Offline Mediation Controller installation before scaling down the Node Manager pods. Possible values are yes or no.
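
    For example, the following sketch runs both jobs, starts all cartridges, and creates a backup before scaling down. The yes values are quoted here so that they are passed as literal strings rather than YAML booleans; adjust this to match how these keys are defined in your chart's values.yaml file.

      ocomc:
         job:
            scaleDownNM:
               flag: true
               startAllNodes: "yes"
            postScalingDownNM:
               flag: true
               backupNeeded: "yes"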

  5. Save and close your override-values.yaml file.

  6. Run the helm upgrade command to update your Offline Mediation Controller release:

    helm upgrade ReleaseName oc-cn-ocomc --namespace NameSpace --timeout duration --values OverrideValuesFile

    where:

    • ReleaseName is the release name, which is used to track this installation instance.

    • NameSpace is the namespace in which to create Offline Mediation Controller Kubernetes objects. To integrate the Offline Mediation Controller cloud native deployment with the ECE and BRM cloud native deployments, they must be set up in the same namespace.

    • duration is the amount of time to wait for the Helm command to complete, such as 15m for 15 minutes or 10m for 10 minutes. The default is 5 minutes.

    • OverrideValuesFile is the path to a YAML file that overrides the default configurations in the chart's values.yaml file.

  7. In your override-values.yaml file for oc-cn-ocomc, set the following keys:

    • ocomcCore.ocomc.job.scaleDownNM.flag: Set this to false.

    • ocomcCore.ocomc.job.postScalingDownNM.flag: Set this to false.

    • ocomcCore.ocomc.configEnv.forceWaitForNARSToBeProcessed: Set this to false.
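
    For example:

      ocomc:
         job:
            scaleDownNM:
               flag: false
            postScalingDownNM:
               flag: false
         configEnv:
            forceWaitForNARSToBeProcessed: false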

Scaling CC, EP, and DC Nodes without Impacting Non-Scalable Nodes

Note:

This functionality is supported in Offline Mediation Controller 15.0.

If your Node Manager pods share the same data PV, you can scale up or scale down the number of Collection Cartridge (CC), Enhancement Processor (EP), and Distribution Cartridge (DC) nodes without impacting the following non-scalable Offline Mediation Controller nodes: Duplicate Check EP nodes and Aggregation Processor (AP) nodes.

To do so, you must identify separate Node Managers for each of the following:

  • Only CC nodes

  • Only DC nodes and scalable EP nodes

  • Only Duplicate Check EP nodes and AP nodes (This Node Manager will not be replicated)

You must also create separate mediation hosts for each Node Manager, create routes between the CC Node Managers and Duplicate Check EP Node Managers, and create routes between the Duplicate Check EP Node Managers and EP and DC Node Managers. You create mediation hosts and routes by using Administration Client or NMShell. See "Connecting Your Administration Client" and "Using NMShell to Automate Deployment of Node Chains".

Figure 11-1 shows a sample Offline Mediation Controller architecture that contains six Node Managers: two CC Node Managers, one AP Node Manager, one EP Duplicate Check Node Manager, and two EPDC Node Managers.

Figure 11-1 Example Node Manager Architecture with Non-Scalable Nodes



To scale the Node Manager pods without impacting the non-scalable Node Manager pods, follow the instructions in "Scaling Up Node Manager Pods" and "Scaling Down Node Manager Pods", except do the following:

  • When specifying the range of EP and DC Node Manager pods to create, include all EP and DC Node Managers and all non-scalable Node Managers.

    For the example in Figure 11-1, you would configure three EPDC Node Manager pods as follows:

    ocomc:
       deployment:
          nodemgr:
             epdcNMRangeStart: 100
             epdcNMRangeEnd: 102

    In this case, the following Node Manager pods would be created: node-mgr-app-100 (Non-Scalable Node Manager), node-mgr-app-101 (EP and DC Node Manager), and node-mgr-app-102 (EP and DC Node Manager).

  • When configuring a scale up job, set the EPDC_NM key to a scalable Node Manager pod.

    For the example in Figure 11-1, you could specify to replicate either node-mgr-app-101 or node-mgr-app-102:

    ocomc:
       job:
          scaleUpNMSameDataPV:
             EPDC_NM: node-mgr-app-101@node-mgr-app-101:55109