4 Installing the Offline Mediation Controller Cloud Native Deployment Package

Learn how to install the Oracle Communications Offline Mediation Controller cloud native deployment package on a cloud native environment.

About Deploying into Kubernetes

Helm is the recommended package manager for deploying Offline Mediation Controller cloud native services into Kubernetes. A Helm chart is a collection of files that describe a set of Kubernetes resources. It includes YAML template descriptors for all Kubernetes resources and a values.yaml file that provides default configuration values for the chart.

The Offline Mediation Controller cloud native deployment package includes oc-cn-ocomc-helm-chart-15.0.0.x.0.tgz.

When you install the Helm chart, it generates valid Kubernetes manifest files by replacing default values from values.yaml with custom values from override-values.yaml, and creates Kubernetes resources. Helm calls this a new release. You use the release name to track and maintain this installation.

Automatically Pulling Images from Private Docker Registries

You can automatically pull images from your private Docker registry by creating an ImagePullSecrets object, which contains a list of authorization tokens (or Secrets) for accessing a private Docker registry. You then add references to the ImagePullSecrets in your Offline Mediation Controller Helm chart's override-values.yaml file. This allows pods to submit the Secret to the private Docker registry whenever they pull images.

Automatically pulling images from a private Docker registry involves these high-level steps:

  1. Create a Secret outside of the Helm chart by entering this command:

    kubectl create secret docker-registry SecretName --docker-server=RegistryServer --docker-username=UserName --docker-password=Password -n NameSpace

    where:

    • SecretName is the name of your Kubernetes Secret.

    • RegistryServer is your private Docker registry's fully qualified domain name (FQDN) (repoHost:repoPort).

    • UserName and Password are your private Docker registry's user name and password.

    • NameSpace is the namespace you will use for installing the Offline Mediation Controller Helm chart.

    For example:

    kubectl create secret docker-registry my-docker-registry --docker-server=example.com:2660/ --docker-username=xyz --docker-password=password -n oms
  2. Add the imagePullSecrets key to your override-values.yaml file for oc-cn-ocomc:

    imagePullSecrets: SecretName
  3. Add the ocomc.imageRepository key to your override-values.yaml file:

    imageRepository: "RegistryServer"
  4. Deploy oc-cn-ocomc.
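Steps 2 and 3 together produce an override-values.yaml file such as the following sketch, which reuses the example Secret and registry from step 1 (the exact key paths depend on your chart version, so verify them against your chart's values.yaml):

```yaml
# override-values.yaml fragment (values are illustrative)
imagePullSecrets: my-docker-registry    # Secret created in step 1
ocomc:
  imageRepository: "example.com:2660/"  # private Docker registry FQDN
```

Deploying with helm install --values override-values.yaml then lets the pods authenticate to the registry when pulling images.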

Automatically Rolling Deployments by Using Annotations

Whenever a ConfigMap entry or a Secret file is modified, you must restart its associated pod. This updates the container's configuration, but the application is notified about the configuration updates only if the pod's deployment specification has changed. Thus, a container could be using the new configuration, while the application keeps running with its old configuration.

You can configure a pod to automatically notify an application when a container's configuration has changed. To do so, use the sha256sum function to make the pod update its deployment specification whenever a ConfigMap or Secret file changes. Add an annotations section similar to this one to the pod's deployment specification:

kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}

For more information, see Chart Development Tips and Tricks in the Helm documentation (https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments).
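The mechanism can be illustrated locally: editing the ConfigMap source changes its SHA-256 digest, so the annotation value in the rendered manifest changes, which Kubernetes treats as a pod template change and rolls the deployment. A minimal sketch using only standard shell tools:

```shell
# Any edit to the ConfigMap source changes its checksum, and therefore the
# checksum/config annotation in the rendered deployment.
cfg=$(mktemp)
printf 'nmDebug: "false"\n' > "$cfg"
h1=$(sha256sum "$cfg" | cut -d' ' -f1)
printf 'nmDebug: "true"\n' > "$cfg"
h2=$(sha256sum "$cfg" | cut -d' ' -f1)
[ "$h1" != "$h2" ] && echo "annotation changes: deployment rolls"
rm -f "$cfg"
```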

About the Offline Mediation Controller Pods

Table 4-1 lists the PersistentVolumeClaims (PVCs) used by the Offline Mediation Controller server.

Table 4-1 List of PVCs in Offline Mediation Controller Server

PVC Name            Default Pod Internal File System

admin-server-pvc    OMC_home/config/adminserver
node-manager-pvc    OMC_home
vol-keystore        /home/ocomcuser/keystore
vol-data            /home/ocomcuser/data
vol-external        /home/ocomcuser/ext
vol-suspense        /home/ocomcuser/suspense

To share these PVCs between Offline Mediation Controller pods, you must use a persistent volume provisioner that:

  • Provides ReadWriteMany access and sharing between the pods.

  • Mounts all external volumes with a user (ocomcuser) that has a UID and GID of 1000 and that has full permissions.

  • Has its volume reclaim policy set to avoid data and configuration loss in a mounted file system.

  • Is configured to share data, external KeyStore volumes, and wallets between Offline Mediation Controller pods and the Administration Client.

You must place all CDR files inside the vol-data PVC and then configure the internal file system path of the vol-data PVC in your Administration Client. All CDRs must have read and write permission for the ocomcuser user.

You must place all necessary third-party and cartridge JAR files in a 3pp and cartridges directory inside the vol-external PVC, and then restart the pods. After the PVC is mounted, these cartridges will be available in each pod at /home/ocomcuser/ext/3pp and /home/ocomcuser/ext/cartridges.
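The staging layout can be sketched as follows; EXT_DIR stands in for whatever host or volume path backs the vol-external PVC in your provisioner (here a temporary directory), and the JAR names are placeholders:

```shell
# Sketch: stage third-party and cartridge JARs in the directory backing the
# vol-external PVC (EXT_DIR is a placeholder for your volume's backing path).
EXT_DIR=$(mktemp -d)
mkdir -p "$EXT_DIR/3pp" "$EXT_DIR/cartridges"
# cp your-third-party.jar "$EXT_DIR/3pp/"
# cp your-cartridge.jar   "$EXT_DIR/cartridges/"
chmod -R u+rwX,g+rwX "$EXT_DIR"   # ocomcuser (UID/GID 1000) needs access
ls "$EXT_DIR"
```

After the pods restart, the contents appear inside each pod at /home/ocomcuser/ext/3pp and /home/ocomcuser/ext/cartridges.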

The Offline Mediation Controller wallet files will be created and used through the shared vol-keystore PVC.

ECE-related configuration inside the UDCEnvironment file for the Administration Client must refer to the internal path of the pod.

Inside deployment templates, you can set nodeSelector in the pod's specification to schedule pods on the worker node that has the PVCs mounted and the Administration Client installed.
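Assuming a worker node labeled with hostname worker1 (an illustrative value) that has the PVCs mounted and the Administration Client installed, the pod specification could pin pods to it like this:

```yaml
# Deployment template fragment; the label value is illustrative.
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: worker1
```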

Configuring Offline Mediation Controller Services

The Offline Mediation Controller unified Helm chart (oc-cn-ocomc) configures and deploys all of your product services. YAML descriptors in the oc-cn-ocomc/templates directory use the oc-cn-ocomc/values.yaml file for most of the values. You can override the values by creating an override-values.yaml file.

The unified Helm chart includes both Offline Mediation Controller Core and REST Services Manager under a single Helm chart. It contains both the Core and RSM Helm charts as subcharts. You can use the following keys to toggle the deployment of Offline Mediation Controller Core and REST Services Manager by setting their values to either true or false:

  • Use charts.enableCore to enable Offline Mediation Controller Core.
  • Use charts.enableRSM to enable Offline Mediation Controller RSM.
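For example, to deploy Core without REST Services Manager, the override-values.yaml file would contain:

```yaml
charts:
  enableCore: true
  enableRSM: false
```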

Table 4-2 lists the keys that directly impact Offline Mediation Controller services. Add these keys to your override-values.yaml file with the same path hierarchy.

Note:

  • If you are using a Windows-based client, the adminsvrIp, nmExternalPort, adminsvrExternalPort, and adminsvrFirewallPort keys must be set. To connect with the Windows-based client, use external services with a NodePort type. In this case, the adminsvrIp will be the worker node IP. Restart the pod after setting adminsvrIp.

  • If graphical desktop support such as VNC is available on a worker node, the client can be installed on the same worker node in which Administration Server and Node Manager pods are running. In this case, set the service type to ClusterIP and do not set the nmExternalPort, adminsvrExternalPort, and adminsvrFirewallPort keys.
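For a remote or Windows-based client, the first note above translates into an override-values.yaml fragment like the following sketch; the IP address and port numbers are illustrative, the ports must fall within your cluster's NodePort range, and the key paths should be checked against Table 4-2:

```yaml
# override-values.yaml fragment for a remote (Windows-based) client
ocomcCore:
  ocomc:
    service:
      type: NodePort
    configEnv:
      adminsvrIp: "192.0.2.10"     # worker node IP (illustrative)
      nmExternalPort: 30109
      adminsvrExternalPort: 30110
      adminsvrFirewallPort: 30111
```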

Table 4-2 Offline Mediation Controller Server Keys

Key Path in values.yaml File Description

fromRelease

ocomcCore.upgrade

The latest base version of Offline Mediation Controller that you have installed on your system in the format 15.0.0.x.0.

When upgrading from one release to another, this value must be populated. When not performing an upgrade, leave this value empty.

imagePullSecrets

ocomcCore

The location of your imagePullSecrets, which stores the credentials (or Secret) for accessing your private Docker registry.

See "Automatically Pulling Images from Private Docker Registries".

ece.*

ocomcCore

The details for connecting to ECE. Add these keys only if you are integrating Offline Mediation Controller with ECE:

  • imageRepository: The Docker registry URL for the ECE image. The default is oc-cn-ece.
  • deployment.ecs.imageName: The name of the ECE image.
  • deployment.ecs.imageTag: The tag name for the ECE image.
  • deployment.ecs.imagePullPolicy: The pull policy of the ECE image. The default value is IfNotPresent, which specifies not to pull the image if it's already present. Applicable values are IfNotPresent and Always.
  • deployment.ecs.clusterName: The ECE cluster name. The default is BRM.
  • deployment.ecs.serviceName: The ECE service name. The default is ece-server.
  • deployment.ecs.persistenceEnabled: Whether ECE will persist its cache data in the Oracle database: true or false. The default is false.
  • deployment.ecs.coherenceClusterPort: The optional value indicating the Coherence port used by the ECE component.

For example:

ece:
  imageRepository: ""
  deployment:
      ecs:
        imageName: "oc-cn-ece"
        imageTag: "15.0.0.x.0"
        imagePullPolicy: IfNotPresent
        clusterName: BRM
        serviceName: ece-server
        persistenceEnabled: "true"
        coherenceClusterPort: ""

serviceMonitor.enabled

ocomcCore.ocomc

Whether to automatically scrape Offline Mediation Controller metrics and send them to Prometheus Operator.

See "Enabling the Automatic Scraping of Metrics".

imageRepository

ocomcCore.ocomc

The Docker registry URL for the Offline Mediation Controller image.

name

ocomcCore.ocomc.secretEnv

The name of your Offline Mediation Controller Secret, such as ocomc-secret-env.

uniPass

ocomcCore.ocomc.secretEnv

Use this key to apply a uniform password to all Offline Mediation Controller cloud native services, including:

  • Database Schemas
  • Offline Mediation Controller Root Login
  • Oracle Wallets

To override this password for a specific service, specify a different password in the service's key.

Note: Use this key for test or demonstration systems only.

nmKeypass

ocomcCore.ocomc.secretEnv

The password for the Node Manager domain SSL identity key.

nmKeystorepass

ocomcCore.ocomc.secretEnv

The password for the Node Manager domain SSL identity store.

adminKeypass

ocomcCore.ocomc.secretEnv

The password for the Administration Server domain SSL identity key.

adminKeystorepass

ocomcCore.ocomc.secretEnv

The password for the Administration Server domain SSL identity store.

walletPassword

ocomcCore.ocomc.secretEnv

The string password for opening the wallet.

ocomcPassword

ocomcCore.ocomc.secretEnv

The Offline Mediation Controller password.

adminServerPassword

ocomcCore.ocomc.secretEnv

The Administration Server password.

restart_count

ocomcCore.ocomc.configEnv

Increment the existing value by 1 to re-create pods using the helm upgrade command.

name

ocomcCore.ocomc.configEnv

The name of your Offline Mediation Controller ConfigMap, such as ocomc-configmap-env.

sslEnabled

ocomcCore.ocomc.configEnv

Whether SSL is enabled in your Offline Mediation Controller cloud native environment: true or false.

eceEnabled

ocomcCore.ocomc.configEnv

Whether ECE is deployed and enabled in your cloud native environment: true or false.

ecePath

ocomcCore.ocomc.configEnv

The directory in which you installed ECE in your Offline Mediation Controller cloud native environment.

Set this key to /home/ocomcuser/install/, unless you are creating custom images.

walletFolder

ocomcCore.ocomc.configEnv

The location of the Oracle wallet, which contains your TLS certificates. For example: /home/ocomcuser/ext/.

thirdPartyFolder

ocomcCore.ocomc.configEnv

The directory in which you installed the third-party software required by Offline Mediation Controller. For example: /home/ocomcuser/ext/3pp.

See "Offline Mediation Controller Cloud Native Deployment Software Compatibility" in Offline Mediation Controller Compatibility Matrix.

cartridgeFolder

ocomcCore.ocomc.configEnv

The directory in which you installed your Offline Mediation Controller cartridge packs.

Set this key to /home/ocomcuser/ext/cartridges, unless you are creating custom images.

nmDebug

ocomcCore.ocomc.configEnv

The Node Manager debug mode. The default is false.

nmPort

ocomcCore.ocomc.configEnv

The Node Manager port. The default is 55109.

nmExternalPort

ocomcCore.ocomc.configEnv

The external port for the Node Manager. It must be in the range specified for the Kubernetes NodePort service.

Set this key only if Administration Client is installed remotely or on a Windows system. See "Connecting Your Administration Client".

nmIp

ocomcCore.ocomc.configEnv

The external IP for the Node Manager.

This key is required only if Node Manager needs to run with a specific external IP. The default is node-mgr-app.

metricsPortCN

ocomcCore.ocomc.configEnv

The port number at which Node-Manager-level metrics are exposed in Prometheus format. The default is 32000.

The port number must be in the range specified for the Kubernetes NodePort service.

See "Using Prometheus Operator to Monitor Offline Mediation Controller Cloud Native" for more information.

metricsPort

ocomcCore.ocomc.configEnv

The port number at which JVM metrics are exposed in Prometheus format. The default is 8082.

See "Using Prometheus Operator to Monitor Offline Mediation Controller Cloud Native" for more information.

nmKeystorePath

ocomcCore.ocomc.configEnv

The path to the Node Manager domain KeyStore files.

Set this key to /home/ocomcuser/keystore/, unless you are creating custom images.

nmDname

ocomcCore.ocomc.configEnv

The distinguished name (DN) for Node Manager. The default is CN=$HOSTNAME,OU=OracleCloud,O=OracleCorporation,L=RedwoodShores,S=California,C=US.

nmsslAlias

ocomcCore.ocomc.configEnv

The SSL alias for Node Manager. The default is nodeManager.

nmKeystoreValidity

ocomcCore.ocomc.configEnv

The number of days the SSL KeyStore certificate for Node Manager will be valid, such as 365 for one year.

adminsvrPort

ocomcCore.ocomc.configEnv

The port number for the Administration Server.

adminsvrExternalPort

ocomcCore.ocomc.configEnv

The external port for the Administration Server. It must be in the range specified for the Kubernetes NodePort service.

Set this key only if Administration Client is installed remotely or on a Windows system. See "Connecting Your Administration Client".

adminsvrFirewallPort

ocomcCore.ocomc.configEnv

The Administration Server firewall port. It must be in the range specified for the Kubernetes NodePort service.

Set this key only if Administration Client is installed remotely or on a Windows system. See "Connecting Your Administration Client".

adminsvrIp

ocomcCore.ocomc.configEnv

The external Administration Server IP (that is, the worker node host IP where the Administration Server pod is scheduled).

See "Connecting Your Administration Client".

adminsvrNodeSelectorHostName

ocomcCore.ocomc.configEnv

The host name of the Administration Server node selector.

adminsvrNodeSelectorIp

ocomcCore.ocomc.configEnv

The external IP address of the Administration Server node selector.

adminsvrNodeSelector

ocomcCore.ocomc.configEnv

The name of the Administration Server node selector.

adminsvrAuthMode

ocomcCore.ocomc.configEnv

Whether the Administration Server requires authorization. The default is false.

adminsvrDebug

ocomcCore.ocomc.configEnv

Whether the Administration Server debug mode is turned on. The default is false.

adminsvrCartridgeFolder

ocomcCore.ocomc.configEnv

The name of the directory in which the cartridge pack JAR files reside.

adminsvrPatchFolder

ocomcCore.ocomc.configEnv

The name of the directory in which your UDCEnvironment file and schema files reside.

adminsvrRuleFolder

ocomcCore.ocomc.configEnv

The name of the directory in which your rule file resides.

adminsvrSharedRuleFolder

ocomcCore.ocomc.configEnv

The name of the shared rule directory.

adminsvrDshost

ocomcCore.ocomc.configEnv

The host name of the Administration Server DS. The default is localhost.

adminsvrDsport

ocomcCore.ocomc.configEnv

The Administration Server DS port. The default is 13001.

adminsvrAuthuser

ocomcCore.ocomc.configEnv

Whether the Administration Server must authenticate users.

adminsvrLdapurl

ocomcCore.ocomc.configEnv

The URL and the LDAP listening port of the Oracle Unified Directory system.

adminsvrLdapdomain

ocomcCore.ocomc.configEnv

The base DN for the LDAP server.

adminsvrGrpinfo

ocomcCore.ocomc.configEnv

The AdminServerImpl.properties parameter used by the Administration Server.

adminsvrAdmindn

ocomcCore.ocomc.configEnv

The DN for the Administration Client. For example: uid=Admin,ou=People.

adminsvrMemberval

ocomcCore.ocomc.configEnv

The AdminServerImpl.properties parameter used by the Administration Server.

adminsvrTimeout

ocomcCore.ocomc.configEnv

The session timeout, in minutes, between the Administration Server and Administration Client. The default is 30.

adminsvrDisasterRecovery

ocomcCore.ocomc.configEnv

Whether to configure the ECE DC pod for disaster recovery. The default is false.

adminsvrHostBased

ocomcCore.ocomc.configEnv

The ability to scale up the number of Node Manager pods throughout the lifecycle of the pods. See "Scaling Up Node Manager Pods".

  • true: You can scale up the Node Manager pods to meet your current capacity requirements. This is the default.
  • false: You will not be able to scale up the Node Manager pods.

Note: You cannot change this key's value after a Node Manager has been added to the Administration Server. To change the setting, you must create a fresh installation of Offline Mediation Controller.

forceWaitForNARSToBeProcessed

ocomcCore.ocomc.configEnv

When scaling down Node Manager pods, this value must be set to true. This specifies to wait until the EP and DC nodes finish processing all network account records (NARs) that are already present in their input before shutting down cartridges.

For all other cases, set this key to false.

waitForNarsToProcessInSecs

ocomcCore.ocomc.configEnv

The amount of time, in seconds, to wait for NARs to reach the input of a cartridge and then be processed. Configure this based on the time the previous cartridge takes to write out its NARs. The default is 10.

The maximum allowed value is 60.

adminsvrTruststorePath

ocomcCore.ocomc.configEnv

The path to your Administration Server domain SSL TrustStore file.

Set this key to /home/ocomcuser/ext/, unless you are creating custom images.

adminsvrKeystorePath

ocomcCore.ocomc.configEnv

The path to your Administration Server domain SSL KeyStore file.

Set this key to /home/ocomcuser/keystore/, unless you are creating custom images.

adminsvrDname

ocomcCore.ocomc.configEnv

The DN for the Administration Server. The default is CN=$HOSTNAME,OU=OracleCloud,O=OracleCorporation,L=RedwoodShores,S=California,C=US.

adminsvrsslAlias

ocomcCore.ocomc.configEnv

The alias name for the Administration Server.

adminsvrKeystoreValidity

ocomcCore.ocomc.configEnv

The number of days the SSL KeyStore certificate for the Administration Server will be valid, such as 365 for one year.

adminclientTruststorePath

ocomcCore.ocomc.configEnv

The path to your Administration Client domain SSL TrustStore file.

Set this key to /home/ocomcuser/keystore/, unless you are creating custom images.

ocomcSoftwarePath

ocomcCore.ocomc.configEnv

Set this key to /container-scripts/OCOMC-15.0.0.x.0_generic_full.jar, unless you are creating custom images.

ocomcSoftwareUpgradePath

ocomcCore.ocomc.configEnv

If you are upgrading from a previous release to 15.0, set this to /container-scripts/OCOMC-15.0.0.x.0_generic_full.jar, where x is the patch set version you are upgrading to.

Otherwise, leave this key empty.

inventoryFilePath

ocomcCore.ocomc.configEnv

The inventory file path.

oudRootUserDn

ocomcCore.ocomc.configEnv

The DN for the Oracle Unified Directory root user. The default is cn=Directory Manager.

oudPath

ocomcCore.ocomc.configEnv

The path to the Oracle Unified Directory.

oudLdapPort

ocomcCore.ocomc.configEnv

The port number on which the LDAP server is listening. The default is 1389.

oudBaseDn

ocomcCore.ocomc.configEnv

The DN for the Oracle Unified Directory. The default is dc=ocomcexample.com.

adminConnectPort

ocomcCore.ocomc.configEnv

The Administration Server port for the Oracle Unified Directory. The default is 4444.

hostName

ocomcCore.ocomc.configEnv

The host name of the server on which Offline Mediation Controller is deployed. The default is localhost.

oracleHome

ocomcCore.ocomc.configEnv

The path where you want to install Offline Mediation Controller.

Set this key to /home/ocomcuser/install, unless you are creating custom images.

forceGenSslcert

ocomcCore.ocomc.configEnv

Whether to regenerate the SSL certificate when the pod restarts. The default is false.

oPatch

ocomcCore.ocomc.configEnv

The OPatch number you want to apply.

Copy the OPatch Zip file to /home/ocomcuser/ext if the oraInventory directory is present in the Offline Mediation Controller installation. If not, unzip the patch file in /home/ocomcuser/ext/ and grant the ocomcuser user permissions on the folder.

Populate this field only when an installation is already present and you want to apply OPatch. Otherwise, keep it empty.

testNodeChain.enabled

ocomcCore.ocomc

Whether node chain testing is enabled (true) or not (false). The default is false.

gcOptions.*

ocomcCore.ocomc.nodeMgrOptions

The JVM garbage collection (GC) settings to apply to the Node Manager pods.

  • globalGC: The global JVM GC settings to apply to all Node Manager pods. This value takes precedence over the gc.x keys.
  • gc.x: The JVM GC settings to apply to the specified Node Manager pod. For example, gc.1 applies to node-mgr-app, gc.2 applies to node-mgr-app-2, and so on.

memoryOptions.*

ocomcCore.ocomc.nodeMgrOptions

The global JVM memory settings to apply to all Node Manager pods.

  • globalMem: The global JVM memory settings to apply to all Node Manager pods. This value takes precedence over the mem.x keys.
  • mem.x: The JVM memory settings to apply to the specified Node Manager pod. For example, mem.1 applies to node-mgr-app, mem.2 applies to node-mgr-app-2, and so on.
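Assembled into an override-values.yaml fragment, the Node Manager options might look like the following sketch; the JVM flags are illustrative, and the exact nesting of the gc.x and mem.x keys should be checked against your chart's values.yaml:

```yaml
ocomcCore:
  ocomc:
    nodeMgrOptions:
      gcOptions:
        globalGC: "-XX:+UseG1GC"    # all Node Manager pods
      memoryOptions:
        mem:
          1: "-Xms512m -Xmx1024m"   # node-mgr-app
          2: "-Xms512m -Xmx2048m"   # node-mgr-app-2
```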

gcOptions

ocomcCore.ocomc.adminSvrOptions

The JVM GC settings to apply to the Administration Server pod.

memoryOptions

ocomcCore.ocomc.adminSvrOptions

The JVM memory settings to apply to the Administration Server pod.

adminserver.type

ocomcCore.ocomc.service

The service type for the Administration Server pod: ClusterIP or NodePort. The default is ClusterIP.

nodemgr.type

ocomcCore.ocomc.service

The service type for the Node Manager pod: ClusterIP or NodePort. The default is ClusterIP.

type

ocomcCore.ocomc.service

The service type for the Administration Server and Node Manager pod: NodePort or ClusterIP. The default is ClusterIP.

name

ocomcCore.ocomc.storageClass

The storage class name for persistent volume claims (PVCs).

copyTrustStore

ocomcCore.ocomc.job

Set this to true before you run any scaling job if SSL is enabled. The default is false.

For more information, see "Scaling Node Manager Pods".

scaleUpNMDifferentDataPV.*

ocomcCore.ocomc.job

Details about the pods to replicate when you scale up the number of Node Manager pods. Add these keys only if your Node Manager pods will have different data PVs:

  • flag: Set this to true if your Node Manager pods use different data PVs.
  • parent_NM: The Node Manager pod to replicate in the format: name@host:port.
  • exportConfigFile: The file name and absolute path of the node chain configuration file to import, without any extensions.

scaleUpNMSameDataPV.*

ocomcCore.ocomc.job

Details about the pods to replicate when you scale up the number of Node Manager pods. Add these keys only if your Node Manager pods will share the same data PV:

  • flag: Set this to true if your Node Manager pods share the same data PV.

  • CC_NM: The Node Manager pod that contains all of your Collection Cartridge (CC) nodes in the format: name@host:port. This Node Manager must contain only CC nodes.

  • EPDC_NM: The Node Manager pod that contains all of your Enhancement Processor (EP) and Distribution Cartridge (DC) nodes in the format: name@host:port. This Node Manager must contain only EP and DC nodes.

    Note: List only DC and scalable EP Node Manager pods. That is, do not include Duplicate Check EP Node Managers.

  • exportConfigFile: The file name and absolute path of the node chain configuration file to import, without any extensions.

  • importNMList: The comma-separated list of nodes to import, in the order in which they appear in the node chain. For example: CCNM_name@CCNM_host:Port, EPDC_name@EPDC_host:Port.
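Put together, a shared-data-PV scale-up configuration could look like this sketch; all names, hosts, and ports are illustrative:

```yaml
ocomcCore:
  ocomc:
    job:
      scaleUpNMSameDataPV:
        flag: true
        CC_NM: "ccNM@node-mgr-app:55109"
        EPDC_NM: "epdcNM@node-mgr-app-2:55109"
        exportConfigFile: "/home/ocomcuser/data/nodechain"
        importNMList: "ccNM@node-mgr-app:55109,epdcNM@node-mgr-app-2:55109"
```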

scaleDownNM.*

ocomcCore.ocomc.job

Details about the pod scale down job.

  • flag: Whether to run the scale down job (true) or not (false). The default is false.
  • startAllNodes: Whether to restart all of the cartridges in the Administration Server (yes) or not (no). The default is no.

postScalingDownNM.*

ocomcCore.ocomc.job

Details about the post pod scale down job.

  • flag: Whether to run the post scale down job (true) or not (false).
  • backupNeeded: Whether to back up the installation to the /home/ocomcuser/install/hostName_bkp directory (yes) or not (no). The default is no.

runNMShell.*

ocomcCore.ocomc.job

The details for running the NMShell job:

  • flag: Whether to run the NMShell job (true) or not (false).
  • fileExtension: The file name extension of all input files to read. Only files with the specified extensions are read. The input file is a list of NMShell commands to run.
  • inputDir: The directory in which the input files will be placed.
  • strictMode: Specifies what happens if an error is encountered while processing a command in the input file. Possible values are cmd, block, or no. The default is no.
  • hook: Enter pre-upgrade to run this job just before a Helm upgrade, or post-upgrade to run it just after a Helm upgrade.
  • hookWeight: This key applies only when the scaling down job and NMShell job are both enabled. The job with the lower weight runs first.
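As a sketch, an NMShell job configured to run before a Helm upgrade might be declared like this; the directory and extension are illustrative:

```yaml
ocomcCore:
  ocomc:
    job:
      runNMShell:
        flag: true
        fileExtension: "cmds"    # read only *.cmds input files
        inputDir: "/home/ocomcuser/data/nmshell"
        strictMode: "no"
        hook: "pre-upgrade"
        hookWeight: "1"
```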

nodemgr.*

For Node Manager pods that use different data PVs

ocomcCore.ocomc.deployment

Information about the Node Manager pods to create.

  • count: The total number of Node Manager pods to create. Set this to 1 for the first deployment.

  • sharedDataPV: Set this to false.

nodemgr.*

For Node Manager pods that share the same data PV

ocomcCore.ocomc.deployment

Information about the Node Manager pods to create. Use these keys only for Node Manager pods that share the same data PV.

  • ccNMRangeEnd: The ending range for creating CC Node Manager pods. (The starting range is always 1.)

  • epdcNMRangeStart: The starting range for creating the EP and DC Node Manager pods.

  • epdcNMRangeEnd: The ending range for creating the EP and DC Node Manager pods.

    Note: The range must include both scalable and non-scalable EP Node Manager pods.

  • sharedDataPV: Set this to true.
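For instance, a first deployment with two CC Node Managers and two EP/DC Node Managers sharing one data PV might be declared as follows; the ranges are illustrative:

```yaml
ocomcCore:
  ocomc:
    deployment:
      nodemgr:
        sharedDataPV: true
        ccNMRangeEnd: 2         # CC Node Managers 1-2
        epdcNMRangeStart: 3     # EP/DC Node Managers 3-4
        epdcNMRangeEnd: 4
```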

Deploying Offline Mediation Controller Services

To deploy Offline Mediation Controller services on your cloud native environment, do this:

Note:

To integrate the Offline Mediation Controller cloud native deployment with the ECE and BRM cloud native deployments, they must use the same namespace.

  1. Validate the content of your charts by entering this command from the helmcharts directory:

    helm lint --strict oc-cn-ocomc

    You'll see this if the command completes successfully:

    1 chart(s) linted, no failures
  2. Run the helm install command from the helmcharts directory:

    helm install ReleaseName oc-cn-ocomc --namespace NameSpace --values OverrideValuesFile

    where:

    • ReleaseName is the release name, which is used to track this installation instance.

    • NameSpace is the namespace in which to create Offline Mediation Controller Kubernetes objects. To integrate the Offline Mediation Controller cloud native deployment with the ECE and BRM cloud native deployments, they must use the same namespace.

    • OverrideValuesFile is the path to a YAML file that overrides the default configurations in the chart's values.yaml file.

    For example, if the override-values.yaml file is in the helmcharts directory, the command for installing Offline Mediation Controller cloud native services would be:

    helm install ocomc oc-cn-ocomc --namespace ocgbu --values override-values.yaml