7 Scaling Node Manager Pods

Learn how to scale up and scale down the Node Manager pods in your Oracle Communications Offline Mediation Controller cloud native deployment.

Scaling Up Node Manager Pods (Patch Set 5 and Later)

You can scale up the number of Node Manager pod replicas in your Offline Mediation Controller cloud native environment based on the pod's CPU or memory utilization. This helps ensure that your Node Manager pods have enough capacity to handle the current traffic demand while still controlling costs.

Note:

If your node chains include duplicate check EP nodes or AP nodes, follow the instructions in "Scaling CC, EP, and DC Nodes without Impacting Non-Scalable Nodes (Patch Set 5.1 and Later)" before you start this procedure.

You scale up Node Manager pods by creating a scale up job, which runs as part of a post-upgrade or post-install hook.

In Patch Set 5 and later releases, you scale up Node Manager pods as follows:

  1. If you are running the scale up job as part of a post-upgrade hook and the cartridge JARs are not part of the Offline Mediation Controller class path, do the following:

    1. Place the cartridge JARs in the directory specified in the ocomc.configEnv.cartridgeFolder key, which can be set to a directory in the external PV.

    2. In your override-values.yaml file for oc-cn-ocomc-helm-chart, increment the ocomc.configEnv.restart_count key by 1.

    3. Run the helm upgrade command to update the Offline Mediation Controller Helm release:

      helm upgrade ReleaseName oc-cn-ocomc-helm-chart --namespace NameSpace --values OverrideValuesFile

      where:

      • ReleaseName is the release name, which is used to track this installation instance.

      • NameSpace is the name space in which to create Offline Mediation Controller Kubernetes objects. To integrate the Offline Mediation Controller cloud native deployment with the ECE and BRM cloud native deployments, they must be set up in the same name space.

      • OverrideValuesFile is the path to a YAML file that overrides the default configurations in the chart's values.yaml file.

      All Offline Mediation Controller components are restarted.

  2. If you are running the scale up job as part of a post-install hook and your node chain configuration files include cartridge JARs, do the following:

    1. In your override-values.yaml file, set the ocomc.configEnv.cartridgeFolder key to /home/ocomcuser/cartridgeJars/.

    2. Place the cartridge JARs in the /home/ocomcuser/cartridgeJars/ directory by creating a Dockerfile similar to the following:

      FROM oc-cn-ocomc:12.0.0.x.0
      RUN mkdir -p /home/ocomcuser/cartridgeJars/
      COPY custom_cartridge.jar /home/ocomcuser/cartridgeJars/
  3. Open your override-values.yaml file for oc-cn-ocomc-helm-chart.

  4. Specify the number of Node Manager pods to create:

    • If your Node Manager pods use different data PVs, set the ocomc.deployment.nodemgr.count key to the desired number of Node Manager pods. For example, to increase the number of pods to 3:

      ocomc:
         deployment:
            nodemgr:
               count: 3
    • If your Node Manager pods use the same data PV, set the ending number for the range of Collection Cartridge (CC) Node Manager pods to create (ccNMRangeEnd); the starting number is always 1. Also, set the starting number and ending number for the range of Enhancement Processor (EP) and Distribution Cartridge (DC) Node Manager pods to create (epdcNMRangeStart and epdcNMRangeEnd). This range must include both scalable and non-scalable EP Node Manager pods.

      For example:

      ocomc:
         deployment:
            nodemgr:
               ccNMRangeEnd: 2
               epdcNMRangeStart: 100
               epdcNMRangeEnd: 100

      In this case, the following Node Manager pods would be created: node-mgr-app (CC Node Manager), node-mgr-app-2 (CC Node Manager), and node-mgr-app-100 (EP and DC Node Manager).

      Note:

      • The number ranges for the CC Node Manager pods and the EP and DC Node Manager pods should not overlap.

      • The range of EP and DC Node Manager pods to create must include both scalable and non-scalable EP Node Managers.

      • The non-scalable EP Node Manager must be the first or last Node Manager pod in the EPDC range.

  5. Configure the scale up job (a combined sketch of the step 4 and step 5 keys appears at the end of this section):

    • If your Node Manager pods use different data PVs, set the following keys under ocomc.job:

      • scaleUpNMDifferentDataPV.flag: Set this to true.

      • scaleUpNMDifferentDataPV.parent_NM: Set this to the Node Manager pod to replicate in the format Mediation_name@Mediation_host:Port, where Mediation_name is the mediation host's name configured in Node Manager, Mediation_host is the host name of the server on which the mediation host resides, and Port is the port number at which the mediation host communicates with the Node Manager pod.

      • scaleUpNMDifferentDataPV.exportConfigFile: Set this key only if you copied node chain configuration files onto the data PV. Set this to the absolute path and file name of your node chain configuration files, but do not include the file name extension.

        For example, if the files are named export.nmx and export.xml, and they reside in /home/ocomcuser/ext, you would set exportConfigFile to /home/ocomcuser/ext/export.

    • If your Node Manager pods use the same data PV, set the following keys under ocomc.job:

      • scaleUpNMSameDataPV.flag: Set this to true.

      • scaleUpNMSameDataPV.CC_NM: Set this to the Node Manager that contains all of your Collection Cartridge (CC) nodes in the format CCNM_name@CCNM_host:Port, where CCNM_name is the mediation host's name configured in Node Manager, CCNM_host is the host name of the server on which the mediation host resides, and Port is the port number at which the mediation host communicates with the CC Node Manager pod.

      • scaleUpNMSameDataPV.EPDC_NM: Set this to the Node Manager that contains all of your Enhancement Processor (EP) and Distribution Cartridge (DC) nodes in the format EPDC_name@EPDC_host:Port, where EPDC_name is the mediation host's name configured in Node Manager, EPDC_host is the host name of the server on which the mediation host resides, and Port is the port number at which the mediation host communicates with the EP and DC Node Manager pods.

        Note:

        List only Node Managers with scalable nodes. That is, don't list any Node Managers with Duplicate Check EP nodes or AP nodes.

      • scaleUpNMSameDataPV.exportConfigFile: Set this key only if you copied node chain configuration files onto the data PV. Set this to the absolute path and file name of your node chain configuration files, but do not include the file name extension.

        For example, if the files are named export.nmx and export.xml, and they reside in /home/ocomcuser/ext, you would set exportConfigFile to /home/ocomcuser/ext/export.

      • scaleUpNMSameDataPV.importNMList: Set this to a comma-separated list of Node Manager pods to import, in the order in which they appear in the node chain. For example: CCNM_name@CCNM_host:Port, EPDC_name@EPDC_host:Port.

  6. Save and close the file.

  7. If you are running the scale up job as a post-install hook, do the following:

    1. Create a Dockerfile that is similar to the following:

      FROM oc-cn-ocomc:12.0.0.x.0
      RUN mkdir -p /home/ocomcuser/customFiles/
      COPY export_20210311_024702.xml /home/ocomcuser/customFiles
      COPY export_20210311_024702.nmx /home/ocomcuser/customFiles
    2. Run the helm install command:

      helm install ReleaseName oc-cn-ocomc-helm-chart --namespace NameSpace --values OverrideValuesFile --timeout 15m

      Before scaling up the pods, the job confirms that the desired Node Manager pods are up and running, that a connection with the Administration Server has been established, and that all Node Manager hosts are reachable.

  8. If you are running the scale up job as a post-upgrade hook, run the helm upgrade command:

    helm upgrade ReleaseName oc-cn-ocomc-helm-chart --namespace NameSpace --values OverrideValuesFile --timeout 15m

    Before scaling up the pods, the job confirms that the desired Node Manager pods are up and running, that a connection with the Administration Server has been established, and that all Node Manager hosts are reachable.

  9. In your override-values.yaml file for oc-cn-ocomc-helm-chart, set the ocomc.job.scaleUpNMDifferentDataPV.flag or ocomc.job.scaleUpNMSameDataPV.flag key to false so that the jobs are not run again the next time you update the Helm release.

You can check the job's status in one of these log files:

  • OMC_home/log/scaleUpNMSegregatedDataPV-Date.log

  • OMC_home/log/scaleUpNMSameDataPV-Date.log
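
For reference, the following override-values.yaml sketch combines the step 4 and step 5 keys for Node Manager pods that share the same data PV. The values are illustrative; they follow the range example in step 4 and the host and port values used in the examples later in this chapter, so replace them with values for your environment. For pods that use different data PVs, the analogous keys are ocomc.deployment.nodemgr.count and the ocomc.job.scaleUpNMDifferentDataPV block.

ocomc:
  deployment:
    nodemgr:
      ccNMRangeEnd: 2
      epdcNMRangeStart: 100
      epdcNMRangeEnd: 100
  job:
    scaleUpNMSameDataPV:
      flag: "true"
      CC_NM: "node-mgr-app@node-mgr-app:55109"
      EPDC_NM: "node-mgr-app-100@node-mgr-app-100:55109"
      # Set exportConfigFile only if you copied node chain configuration files onto the data PV
      exportConfigFile: "/home/ocomcuser/ext/export"
      # Node Manager pods to import, in the order in which they appear in the node chain
      importNMList: "node-mgr-app@node-mgr-app:55109,node-mgr-app-100@node-mgr-app-100:55109"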

Scaling Up Node Manager Pods (Patch Set 4 Only)

You can scale up the number of Node Manager pod replicas in your Offline Mediation Controller cloud native environment based on the pod's CPU or memory utilization. This helps ensure that your Node Manager pods have enough capacity to handle the current traffic demand while still controlling costs.

In the Patch Set 4 release, you scale up Node Manager pods by performing these high-level tasks:

  1. Creating the Node Manager pods when you initially deploy Offline Mediation Controller.

    See "Creating Node Manager Pods During Deployment".

  2. Configuring the Node Manager pods by adding node chains and specifying the Node Manager pods and nodes to replicate.

    See "Configuring Your Node Manager Pods".

  3. When you are ready to scale up the number of Node Manager pods, running the scale up job.

    See "Running the Scale Up Job".

Creating Node Manager Pods During Deployment

During the Offline Mediation Controller deployment process, you specify the initial number of Node Manager pods to create and indicate whether the pods will share the same data PersistentVolume (PV).

To create your Node Manager pods:

  1. Open your override-values.yaml file for oc-cn-ocomc-helm-chart in a text editor.

  2. Enable your pods to be scaled up by setting the ocomc.configEnv.adminsvrHostBased key to true.

  3. Specify to create one Node Manager pod by setting the ocomc.deployment.nodemgr.count key to 1.

    Note:

    The count must be 1 for the first deployment. You can increment the number of Node Manager pods later when you run the scale up job.

  4. Specify whether the data PV will be shared across Node Manager pods (see the sketch after this procedure) by setting the ocomc.deployment.nodemgr.sharedDataPV key to one of the following:

    • true: All Node Manager pods will share the same data PV.

    • false: The Node Manager pods will have different data PVs.

  5. Set any other keys that are needed from Table 3-2.

  6. Save and close the file.

  7. Run the helm install command:

    helm install ReleaseName oc-cn-ocomc-helm-chart --namespace NameSpace --values OverrideValuesFile

    where:

    • ReleaseName is the release name, which is used to track this installation instance.

    • NameSpace is the name space in which to create Offline Mediation Controller Kubernetes objects. To integrate the Offline Mediation Controller cloud native deployment with the ECE and BRM cloud native deployments, they must be set up in the same name space.

    • OverrideValuesFile is the path to a YAML file that overrides the default configurations in the chart's values.yaml file.

The Node Manager pods are up and running in your Offline Mediation Controller cloud native environment.
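
A minimal override-values.yaml sketch for this initial deployment, assuming Node Manager pods that share the same data PV (set sharedDataPV to false if each pod has its own data PV). Quote or type the values as required by the chart's values.yaml file.

ocomc:
  configEnv:
    adminsvrHostBased: true    # enables the pods to be scaled up later
  deployment:
    nodemgr:
      count: 1                 # must be 1 for the first deployment
      sharedDataPV: true       # all Node Manager pods share the same data PV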

Configuring Your Node Manager Pods

After Offline Mediation Controller is deployed, configure your Node Manager pods by:

  • Creating nodes and node chains. You can create them manually through the Administration Client or by importing them from a node chain configuration file.

  • Specifying the Node Manager pods and nodes to replicate.

    Note:

    The scale up process does not support the replication of Aggregation Processor (AP) and duplicate check Enhancement Processor (EP) nodes. These nodes should not be present in any Node Manager you want to replicate.

To configure your Node Manager pods:

  1. If you want to create nodes and node chains by importing them from node chain configuration (.nmx and .xml) files, do the following:

    Note:

    • For pods with different data PVs, the configuration files must define a Node Manager that contains all of the nodes.

    • For pods with the same data PV, the configuration files must define two Node Managers: one containing all of the Collection Cartridge (CC) node instances, and one containing all of the Enhancement Processor (EP) and Distribution Cartridge (DC) node instances with the connected routes between the CC and EP/DC nodes.

    1. Move your configuration files to the vol-external PVC, which has a default path of /home/ocomcuser/ext.

    2. Set the ownership and permissions for your configuration files. For example:

      chown 1000:1000 FileName
      chmod 777 FileName
    3. If you are building your own images, create a Dockerfile that is similar to the following and then build the Offline Mediation Controller image:

      FROM oc-cn-ocomc:12.0.0.x.0
      RUN mkdir -p /home/ocomcuser/install/image
      COPY exportFile.xml /home/ocomcuser/install/image
      COPY exportFile.nmx /home/ocomcuser/install/image
  2. Open your override-values.yaml file for oc-cn-ocomc-helm-chart.

  3. If your Node Manager pods will use different data PVs, set the following keys under ocomc.job:

    • scaleUpNMDifferentDataPV.flag: Set this to true.

    • scaleUpNMDifferentDataPV.parent_NM: Set this to the Node Manager pod to replicate in the format Mediation_name@Mediation_host:Port, where Mediation_name is the mediation host's name configured in the Node Manager, Mediation_host is the hostname of the server on which the mediation host resides, and Port is the port number at which the mediation host communicates with the Node Manager pod.

    • scaleUpNMDifferentDataPV.exportConfigFile: Set this key only if you copied node chain configuration files onto the data PV. Set this to the absolute path and file name of your node chain configuration files, but do not include the file name extension.

      For example, if the files are named export.nmx and export.xml, and they reside in /home/ocomcuser/ext, you would set exportConfigFile to /home/ocomcuser/ext/export.

    • scaleUpNMDifferentDataPV.waitTime: Set this to the amount of time to wait between scaling up Node Manager pods. The default is one minute (1m).

  4. If your Node Manager pods will use the same data PV, set the following keys under ocomc.job:

    • scaleUpNMSameDataPV.flag: Set this to true.

    • scaleUpNMSameDataPV.CC_NM: Set this to the Node Manager that contains all of your Collection Cartridge (CC) nodes in the format CCNM_name@CCNM_host:Port, where CCNM_name is the mediation host's name configured in the Node Manager, CCNM_host is the hostname of the server on which the mediation host resides, and Port is the port number at which the mediation host communicates with the CC Node Manager pod.

    • scaleUpNMSameDataPV.EPDC_NM: Set this to the Node Manager that contains all of your Enhancement Processor (EP) and Distribution Cartridge (DC) nodes in the format EPDC_name@EPDC_host:Port, where EPDC_name is the mediation host's name configured in Node Manager, EPDC_host is the hostname of the server on which the mediation host resides, and Port is the port number at which the mediation host communicates with the EP and DC Node Manager pods.

    • scaleUpNMSameDataPV.exportConfigFile: Set this key only if you copied node chain configuration files onto the data PV. Set this to the absolute path and file name of your node chain configuration files, but do not include the file name extension.

      For example, if the files are named export.nmx and export.xml, and they reside in /home/ocomcuser/ext, you would set exportConfigFile to /home/ocomcuser/ext/export.

    • scaleUpNMSameDataPV.importNMList: Set this to a comma-separated list of Node Manager pods to import, in the order in which they appear in the node chain. For example: CCNM_name@CCNM_host:Port, EPDC_name@EPDC_host:Port. This key is mandatory when the exportConfigFile key is set.

    • scaleUpNMSameDataPV.waitTime: Set this to the amount of time to wait between scaling up Node Manager pods. The default is one minute (1m).

  5. Save and close the file.

  6. If you have multiple CC Node Manager pods sharing the same data PV, do one of the following:

    • Modify the fileLocation entry in the export.xml file (if you are importing the node chain configuration).

    • Modify the fileLocation entry in the general.cfg file on node-manager-pvc (if you configured the node chains manually using Administration Client).

    • In the Administration Client, right-click the cartridge, click Edit, and add a variable to the file location path (if you configured the node chains manually using Administration Client).

    For example, in the export.xml file, you would add ${VARIABLE} to the end of the file location path and then set VARIABLE to the appropriate value: INSTANCE, NODEID, or HOST.

    <configVariable name="fileLocation">/home/ocomcuser/data/${VARIABLE}</configVariable>

Example 7-1 Configuring Node Manager Pods with Different Data PVs

This shows sample override-values.yaml entries for configuring your Node Manager pods. In this example, the Node Manager pods use different data PVs and the parent Node Manager pod already has its node chain up and running.

ocomc:
  job:
    scaleUpNMDifferentDataPV:
      flag: "true"
      parent_NM: "node-mgr-app@node-mgr-app:55109"
      exportConfigFile: ""
      waitTime: "1m"
    scaleUpNMSameDataPV:
      flag: "false"

Example 7-2 Configuring Node Manager Pods that Share a Data PV

This shows sample override-values.yaml entries for configuring your Node Manager pods. In this example, the Node Manager pods all share the same data PV, the node chain configuration will be imported from your export.nmx and export.xml files, and the Node Manager pods have no nodes yet.

ocomc:
  job:
    scaleUpNMDifferentDataPV:
      flag: "false"
    scaleUpNMSameDataPV:
      flag: "true"
      CC_NM: "node-mgr-app@node-mgr-app:55109"
      EPDC_NM: "node-mgr-app-2@node-mgr-app-2:55109"
      exportConfigFile: "/home/ocomcuser/tmp/export"
      importNMList: "node-mgr-app@node-mgr-app:55109,node-mgr-app-2@node-mgr-app-2:55109"
      waitTime: "1m"

Running the Scale Up Job

When you want to scale up the number of Node Manager pods in your Offline Mediation Controller cloud native environment to meet current capacity requirements, you run a scale up job.

To run a scale up job:

  1. If the cartridge JARs are not part of the Offline Mediation Controller class path, do the following:

    1. Place the cartridge JARs in the directory specified in the ocomc.configEnv.cartridgeFolder key, which can be set to a directory in the external PV.

    2. In your override-values.yaml file for oc-cn-ocomc-helm-chart, increment the ocomc.configEnv.restart_count key by 1.

    3. Run the helm upgrade command to update the Offline Mediation Controller Helm release:

      helm upgrade ReleaseName oc-cn-ocomc-helm-chart --namespace NameSpace --values OverrideValuesFile

      where:

      • ReleaseName is the release name, which is used to track this installation instance.

      • NameSpace is the name space in which to create Offline Mediation Controller Kubernetes objects. To integrate the Offline Mediation Controller cloud native deployment with the ECE and BRM cloud native deployments, they must be set up in the same name space.

      • OverrideValuesFile is the path to a YAML file that overrides the default configurations in the chart's values.yaml file.

      All Offline Mediation Controller components are restarted.

  2. Create additional Node Manager pods by doing the following (a sketch of these keys appears at the end of this procedure):

    1. In your override-values.yaml file for oc-cn-ocomc-helm-chart, set the ocomc.deployment.nodemgr.count key to the desired number of Node Manager pods. For example:

      ocomc:
        deployment:
          nodemgr:
            count: 3
    2. Set the ocomc.configEnv.cartridgeFolder key to /home/ocomcuser/cartridgeJars/.

    3. Place any cartridge JARs in the /home/ocomcuser/cartridgeJars/ directory.

    4. Run the helm upgrade command to update the Helm release:

      helm upgrade ReleaseName oc-cn-ocomc-helm-chart --namespace NameSpace --values OverrideValuesFile

      Wait until all node-mgr-app pods are up and running.

    5. Verify that the number of running node-mgr-app pods matches the value you specified in the ocomc.deployment.nodemgr.count key by running the following command:

      kubectl get pods
  3. If SSL is enabled in Offline Mediation Controller, do the following:

    1. In your override-values.yaml file for oc-cn-ocomc-helm-chart, set the ocomc.job.copyTrustStore.flag to true.

    2. Run the helm upgrade command to update your Helm release:

      helm upgrade ReleaseName oc-cn-ocomc-helm-chart --namespace NameSpace --values OverrideValuesFile

      Your SSL certificate files are copied to the new Node Manager pods.

  4. Restart the admin-server-app pod by doing the following:

    1. Set the following keys in your override-values.yaml file for oc-cn-ocomc-helm-chart:

      • ocomc.job.copyTrustStore.flag: Set this to false.

      • ocomc.configEnv.restart_count: Increment this value by 1.

    2. Run the helm upgrade command to update your Helm release:

      helm upgrade ReleaseName oc-cn-ocomc-helm-chart --namespace NameSpace --values OverrideValuesFile
  5. If your Node Manager pods use different data PVs, configure your new Node Manager pods by setting the following keys in your override-values.yaml file for oc-cn-ocomc-helm-chart:

    • ocomc.job.scaleUpNMDifferentDataPV.flag: Set this to true.

    • ocomc.job.scaleUpNMDifferentDataPV.parent_NM: Set this to the Node Manager pod to replicate in the format Mediation_name@Mediation_host:Port, where Mediation_name is the mediation host's name configured in Node Manager, Mediation_host is the hostname of the server on which the mediation host resides, and Port is the port number at which the mediation host communicates with the Node Manager pod.

    • ocomc.job.scaleUpNMDifferentDataPV.exportConfigFile: Set this key only if you copied node chain configuration files onto the data PV. Set this to the absolute path and file name of your node chain configuration files, but do not include the file name extension.

      For example, if the files are named export.nmx and export.xml, and they reside in /home/ocomcuser/ext, you would set exportConfigFile to /home/ocomcuser/ext/export.

    • ocomc.job.scaleUpNMDifferentDataPV.waitTime: Set this to the amount of time to wait between scaling up Node Manager pods. The default is one minute (1m).

  6. If your Node Manager pods use the same data PV, configure your new Node Manager pods by setting the following keys in your override-values.yaml file for oc-cn-ocomc-helm-chart:

    • ocomc.job.scaleUpNMSameDataPV.flag: Set this to true.

    • ocomc.job.scaleUpNMSameDataPV.CC_NM: Set this to the Node Manager pod that contains all of your Collection Cartridge (CC) nodes in the format CCNM_name@CCNM_host:Port, where CCNM_name is the mediation host's name configured in Node Manager, CCNM_host is the hostname of the server on which the mediation host resides, and Port is the port number at which the mediation host communicates with the CC Node Manager pod.

    • ocomc.job.scaleUpNMSameDataPV.EPDC_NM: Set this to the Node Manager pod that contains all of your Enhancement Processor (EP) and Distribution Cartridge (DC) nodes in the format EPDC_name@EPDC_host:Port, where EPDC_name is the mediation host's name configured in Node Manager, EPDC_host is the hostname of the server on which the mediation host resides, and Port is the port number at which the mediation host communicates with the EP and DC Node Manager pods.

    • ocomc.job.scaleUpNMSameDataPV.exportConfigFile: Set this key only if you copied node chain configuration files onto the data PV. Set this to the absolute path and file name of your node chain configuration files, but do not include the file name extension.

      For example, if the files are named export.nmx and export.xml, and they reside in /home/ocomcuser/ext, you would set exportConfigFile to /home/ocomcuser/ext/export.

    • ocomc.job.scaleUpNMSameDataPV.importNMList: Set this to a comma-separated list of Node Manager pods to import, in the order in which they appear in the node chain. For example: CCNM_name@CCNM_host:Port, EPDC_name@EPDC_host:Port. This key is mandatory when the exportConfigFile key is set.

    • ocomc.job.scaleUpNMSameDataPV.waitTime: Set this to the amount of time to wait between scaling up Node Manager pods. The default is one minute (1m).

  7. Ensure that no other scale up job exists. If a job exists, delete it.

  8. Run the helm upgrade command to update the Helm release:

    helm upgrade ReleaseName oc-cn-ocomc-helm-chart --namespace NameSpace --values OverrideValuesFile

    Wait until the job completes. You can view the job's progress by checking the ScalingJob.txt log files in the OMC_home/log/ directory.

After the job completes, set the ocomc.job.scaleUpNMSameDataPV.flag and ocomc.job.scaleUpNMDifferentDataPV.flag keys in your override-values.yaml file to false so that the jobs are not run again the next time you update the Helm release.
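
As a reference for step 2, the following override-values.yaml sketch shows the keys for creating the additional Node Manager pods; the values are illustrative. Steps 3 and 4 then toggle ocomc.job.copyTrustStore.flag and increment ocomc.configEnv.restart_count in subsequent helm upgrade runs, and Example 7-1 and Example 7-2 show the scale up job keys used in steps 5 and 6.

ocomc:
  configEnv:
    cartridgeFolder: "/home/ocomcuser/cartridgeJars/"   # directory that holds the cartridge JARs
  deployment:
    nodemgr:
      count: 3                                          # desired total number of Node Manager pods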

Scaling Down Node Manager Pods (Patch Set 5 and Later)

Note:

Scaling down Node Manager pods is supported in Offline Mediation Controller 12.0 Patch Set 5 and later releases.

You scale down Node Manager pods in your Offline Mediation Controller cloud native environment by removing the Node Managers from the Administration Server, bringing down the corresponding Node Manager pods in the Kubernetes cluster, and then cleaning up the Offline Mediation Controller installation.

Note:

If your node chains include duplicate check EP nodes or AP nodes, follow the instructions in "Scaling CC, EP, and DC Nodes without Impacting Non-Scalable Nodes (Patch Set 5.1 and Later)" before you start this procedure.

To scale down the number of Node Manager pods:

  1. Open your override-values.yaml file for oc-cn-ocomc-helm-chart.

  2. Specify to wait for the Enhancement Processor (EP) and Distribution Cartridge (DC) nodes to finish processing all input network account records (NARs) before shutting down cartridges. To do so, set the following keys under ocomc.configEnv:

    • forceWaitForNARSToBeProcessed: Set this to true.

    • waitForNarsToProcessInSecs: Set this to a value between 1 and 60. This is the amount of time, in seconds, to wait for NARs to reach the input of a cartridge and then be processed.

  3. Specify the number of Node Manager pods to run by doing one of the following:

    • If your Node Manager pods use different data PVs, set the ocomc.deployment.nodemgr.count key to the total number of pods to be up and running. For example, if the count was previously 4, you can scale down the number of pods to 3 or fewer.

    • If your Node Manager pods use the same data PV, configure the following keys under ocomc.deployment.nodemgr:

      • ccNMRangeEnd: The ending range for creating CC Node Manager pods. (The starting range is always 1).

      • epdcNMRangeStart: The starting range for the EP and DC Node Manager pods.

      • epdcNMRangeEnd: The ending range for the EP and DC Node Manager pods. This range must include both scalable and non-scalable EP Node Manager pods. Also, the non-scalable EP Node Manager must be the first or last Node Manager pod in the EPDC range.

      For example, if ccNMRangeEnd is 2, epdcNMRangeStart is 100, and epdcNMRangeEnd is 100, the following Node Manager pods would be created:

      • node-mgr-app (CC Node Manager)

      • node-mgr-app-2 (CC Node Manager)

      • node-mgr-app-100 (EP and DC Node Manager)

    Note:

    The ranges for the CC Node Manager pods and the EP and DC Node Manager pods should not overlap.

  4. Specify to run the Node Manager scale down and post scale down jobs (a combined sketch of the keys in steps 2 through 4 appears after this procedure). To do so, set the following keys under ocomc.job:

    • scaleDownNM.flag: Set this to true.

    • scaleDownNM.startAllNodes: Set this to yes if all cartridges in the Administration Server need to be started. Otherwise, set this to no.

    • postScalingDownNM.flag: Set this to true.

    • postScalingDownNM.backupNeeded: Specify whether to create a backup of your Offline Mediation Controller installation before scaling down the Node Manager pods. Possible values are yes or no.

  5. Save and close your override-values.yaml file.

  6. Run the helm upgrade command to update your Offline Mediation Controller release:

    helm upgrade ReleaseName oc-cn-ocomc-helm-chart --namespace NameSpace --timeout duration --values OverrideValuesFile

    where:

    • ReleaseName is the release name, which is used to track this installation instance.

    • NameSpace is the name space in which to create Offline Mediation Controller Kubernetes objects. To integrate the Offline Mediation Controller cloud native deployment with the ECE and BRM cloud native deployments, they must be set up in the same name space.

    • duration is the amount of time Kubernetes waits for a command to complete, such as 15m for 15 minutes or 10m for 10 minutes. The default is 5 minutes.

    • OverrideValuesFile is the path to a YAML file that overrides the default configurations in the chart's values.yaml file.

  7. In your override-values.yaml file for oc-cn-ocomc-helm-chart, set the following keys:

    • ocomc.job.scaleDownNM.flag: Set this to false.

    • ocomc.job.postScalingDownNM.flag: Set this to false.

    • ocomc.configEnv.forceWaitForNARSToBeProcessed: Set this to false.
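
For reference, the following override-values.yaml sketch combines the keys from steps 2 through 4 for scaling down Node Manager pods that share the same data PV; if your pods use different data PVs, set ocomc.deployment.nodemgr.count instead of the range keys. The values are illustrative; quote or type them as required by the chart's values.yaml file. After the upgrade completes, step 7 sets the flags back to false.

ocomc:
  configEnv:
    forceWaitForNARSToBeProcessed: "true"
    waitForNarsToProcessInSecs: 30     # 1 to 60 seconds
  deployment:
    nodemgr:
      ccNMRangeEnd: 2
      epdcNMRangeStart: 100
      epdcNMRangeEnd: 100
  job:
    scaleDownNM:
      flag: "true"
      startAllNodes: "yes"             # start all cartridges in the Administration Server
    postScalingDownNM:
      flag: "true"
      backupNeeded: "yes"              # back up the installation before scaling down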

Scaling CC, EP, and DC Nodes without Impacting Non-Scalable Nodes (Patch Set 5.1 and Later)

Note:

This functionality is supported in Offline Mediation Controller 12.0 Patch Set 5.1 and later releases.

If your Node Manager pods share the same data PV, you can scale up or scale down the number of Collection Cartridge (CC), Enhancement Processor (EP), and Distribution Cartridge (DC) nodes without impacting the following non-scalable Offline Mediation Controller nodes: Duplicate Check EP nodes and Aggregation Processor (AP) nodes.

To do so, you must identify separate Node Managers for each of the following:

  • Only CC nodes

  • Only DC nodes and scalable EP nodes

  • Only Duplicate Check EP nodes and AP nodes (This Node Manager will not be replicated)

You must also create separate mediation hosts for each Node Manager, create routes between the CC Node Managers and Duplicate Check EP Node Managers, and create routes between the Duplicate Check EP Node Managers and EP and DC Node Managers. You create mediation hosts and routes by using Administration Client or NMShell. See "Connecting Your Administration Client" and "Using NMShell to Automate Deployment of Node Chains (Patch Set 5 and Later)".

Figure 7-1 shows a sample Offline Mediation Controller architecture that contains six Node Managers: two CC Node Managers, one AP Node Manager, one EP Duplicate Check Node Manager, and two EPDC Node Managers.

Figure 7-1 Example Node Manager Architecture with Non-Scalable Nodes



To scale the Node Manager pods without impacting the non-scalable Node Manager pods, follow the instructions in "Scaling Up Node Manager Pods (Patch Set 5 and Later)" and "Scaling Down Node Manager Pods (Patch Set 5 and Later)", except do the following:

  • When specifying the range of EP and DC Node Manager pods to create, include all EP and DC Node Managers and all non-scalable Node Managers.

    For the example in Figure 7-1, you would configure three EPDC Node Manager pods as follows:

    ocomc:
       deployment:
          nodemgr:
             epdcNMRangeStart: 100
             epdcNMRangeEnd: 102

    In this case, the following Node Manager pods would be created: node-mgr-app-100 (Non-Scalable Node Manager), node-mgr-app-101 (EP and DC Node Manager), and node-mgr-app-102 (EP and DC Node Manager).

  • When configuring a scale up job, set the EPDC_NM key to a scalable Node Manager pod.

    For the example in Figure 7-1, you could specify to replicate either node-mgr-app-101 or node-mgr-app-102:

    ocomc:
       job:
          scaleUpNMSameDataPV:
             EPDC_NM: node-mgr-app-101@node-mgr-app-101:55109