23 Administering ECE Cloud Native Services

Learn how to perform common system administration tasks on your Elastic Charging Engine (ECE) cloud native services in Oracle Communications Billing and Revenue Management (BRM) cloud native.

Topics in this document:

Running SDK Jobs

Changing the ECE Configuration During Runtime

Using a Custom TLS Certificate for Secure Connections

Configuring Subscriber-Based Tracing for ECE Services

Enabling SSL Communication When BRM and ECE Are in Separate Clusters

Using Third-Party Libraries and Custom Mediation Specifications

Setting Up ECE Cloud Native in Firewall-Enabled Environments

Enabling Federation in ECE

Enabling Parallel Pod Management in ECE

Customizing SDK Source Code

Running SDK Jobs

You can run sample scripts for ECE cloud native services by running an SDK job.

To run SDK jobs:

  1. In the override-values.yaml file for the ECE Helm chart, set the job.sdk.runjob key to true.

  2. The SDK directory containing the SDK sample scripts, configuration files, source code, and so on is exposed in the PVC defined under the pvc.sdk section of the values.yaml file.

  3. Run the helm install command to deploy the ECE Helm chart:

    helm install EceReleaseName oc-cn-ece-helm-chart --namespace BrmNameSpace --values OverrideValuesFile

    The command creates a default SDK job. Because the job has not been run with any valid parameters, it prints the following:

    "Run the SDK job with script name and parameters. Usage - cd usage; sh <scriptname> build; sh <scriptname> run <parameters>"

    The SDK job then goes into a Completed state.

  4. Check the logs printed by the job by running this command:

    kubectl logs sdkJobName
  5. After deployment completes and all of the pods are in a healthy state, you can run any sample SDK script by doing one of these:

    • Running the helm upgrade command in the following format:

      helm upgrade eceDeploymentName helmChartFolder --set job.sdk.name=SDKJobName --set job.sdk.command="cd <folder-name>; sh <script-name> build; sh <script-name> run <parameters>"

      where:

      • eceDeploymentName is the deployment name given during Helm installation. The deployment name can be retrieved by running the helm ls command.

      • helmChartFolder is the location where the ECE Helm chart is located.

      • SDKJobName is the user-defined name for this instance of the SDK job.

      • job.sdk.command is set to the command to run as part of the job. The SDK job runs from the ocecesdk/bin directory, so you only need to provide the script file location from the reference point of the ocecesdk/bin directory.

        For example:

        helm upgrade ece . --set job.sdk.name=samplegprssessionjob --set job.sdk.command="cd usage; sh sample_gprs_session.sh build; sh sample_gprs_session.sh run 773-20190923 INITIATE 60 1024 1024 TelcoGprs EventDelayedSessionTelcoGprs 1.0 2020-02-10T00:01:00 1024 1024 sessionId CUMULATIVE 1"

        This command does not affect any other running pods in the namespace; it only creates the job specified in job.sdk.name, which runs the command specified in job.sdk.command.

    • Setting the SDK job and SDK command in your override-values.yaml file:

      job:
        sdk:
          name: "SDKJobName"
          command: "cd <folder-name>; sh <script-name> build; sh <script-name> run <parameters>"
          runjob: "true"

      Then, running the helm upgrade command:

      helm upgrade eceDeploymentName helmChartFolder
  6. After the job completes, it goes into a Completed state. You can check the logs by running this command:

    kubectl logs sdkJobName

    You can find sdkJobName by running the kubectl get po command; the name is in the format JobName-IDfromKubernetes. An example appears after this procedure.

  7. To view the logs created by the SDK script, check the sdk logs folder in the PVC.
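
For example, assuming the SDK job was named samplegprssessionjob and the pods run in the namespace brm (both hypothetical values), you could locate the job pod and read its logs as follows:

kubectl get po -n brm | grep samplegprssessionjob
kubectl logs samplegprssessionjob-x7k2p -n brm

The pod name suffix (x7k2p in this sketch) is assigned by Kubernetes.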

Error Handling for SDK Jobs

Any error that occurs while running an SDK job will result in the job going into an Error state. For example, an SDK job will go into an Error state when the SDK command includes invalid parameter values.

You can check the reason why an error occurred by doing the following:

  1. Running this command, which prints the output of the script:

    kubectl logs sdkJobName
  2. Checking the log file created under the SDK PVC location.

After correcting the error, run the helm upgrade command with a new job name. See "Running SDK Jobs".

If you don't provide SDK commands while running the helm upgrade command, it prints the following:

Run the SDK job with script name and parameters. 
Usage - cd usage; sh <scriptname> build; sh <scriptname> run <parameters>

If you don't provide a job name, it uses the default job name of sdk. However, since Kubernetes doesn't allow a completed job to be rerun, you must delete any previous job named sdk before running the helm upgrade command again.
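
For example, assuming the namespace BrmNameSpace, you would delete the completed default job before running the helm upgrade command again:

kubectl delete job sdk -n BrmNameSpace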

Changing the ECE Configuration During Runtime

After initially deploying your ECE cloud native services, any updates to the ECE configuration require you to do a rolling update of the ECE pods.

Alternatively, you can update the ECE configuration during runtime, without restarting the ECE pods, by:

  • Creating a JMX connection to ECE using JConsole and then editing the ECE configuration MBeans

  • Running a Kubernetes job to reload the ECE application configuration

  • Running a Kubernetes job to reload the grid log level

Note:

You can run a Kubernetes job to reload either the ECE application configuration or the grid log level, but not both at the same time.

Creating a JMX Connection to ECE Using JConsole

To create a JMX connection to ECE cloud native using JConsole:

  1. In your override-values.yaml file, set the charging.jmxport key to the JMX port.

    Note:

    The global charging.jmxport key sets the default JMX port for all ECE pods. However, you can override the JMX port for an individual pod by specifying a different port in the pod's jmxport key.

    If an individual pod's JMX port is exposed for JMX connection, create custom services similar to ece-jmx-service-external for each ECE deployment type and set the jmxservice.port key to the same value as the pod's jmxport key.

  2. Label the pod as the ece-jmx-service-external service endpoint by running this command:

    kubectl label po ecs1-0 ece-jmx=ece-jmx-external
  3. Retrieve the worker node's IP address by running this command:

    kubectl get pod ecs1-0 -o wide
  4. Update the /etc/hosts file on the remote machine with the worker node's IP address by adding an entry in this format:

    ipAddress ecs1-0.ece-server.namespace.svc.cluster.local

    Note:

    You don't need to update the /etc/hosts file if JConsole is connecting to JMX from within a cluster or machines where the pod's FQDN is resolved by DNS.

  5. Connect to JConsole by running this command:

    jconsole ecs1-0.ece-server.namespace.svc.cluster.local:jmxport

Afterward, you can start using JConsole to change ECE configuration MBeans. See "Managing Online Charging Sessions" in ECE Implementing Charging.
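
For example, assuming the namespace is brm and the JMX port is 31022 (the port used in later examples in this document), the complete connection flow might look like this:

kubectl -n brm label pod ecs1-0 ece-jmx=ece-jmx-external
kubectl -n brm get pod ecs1-0 -o wide
echo "192.0.2.15 ecs1-0.ece-server.brm.svc.cluster.local" | sudo tee -a /etc/hosts
jconsole ecs1-0.ece-server.brm.svc.cluster.local:31022

The worker node IP address (192.0.2.15 here) is a placeholder; use the address returned by the kubectl get pod command.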

Reloading ECE Application Configuration Changes

You can change the ECE appConfiguration during runtime by running a Kubernetes job. The job automatically reloads the application's configuration into the ECE cloud native cache and the charging-settings.xml file.

To reload ECE application configuration changes:

  1. Open your override-values.yaml file for the ECE Helm chart.

  2. Modify the ECE configuration MBeans to meet your business needs.

    For example, change charging.server.degradedModeThreshold to 3.

  3. Set the job.chargingConfigurationReloader.reloadAppConfig.runjob key to true.

    This specifies to run a Kubernetes job.

  4. Optionally, set the job.chargingConfigurationReloader.reloadAppConfig.command key to the location of the configuration MBean. For example, enter charging.server for the degradedModeThreshold MBean, and enter charging.notification for the rarNotificationMode MBean. (A combined override-values.yaml sketch appears after this procedure.)

  5. Do not change the pod's specification-related keys that can trigger a restart of the pod during a Helm upgrade. For example, do not change the restartCount, image, or jvmGCOpts keys.

  6. Run the helm upgrade command to update your Helm release:

    helm upgrade BrmReleaseName oc-cn-helm-chart --values OverrideValuesFile -n BrmNameSpace

    The upgrade updates the charging-settings.xml file in the cache, updates the ECE charging-settings-namespace ConfigMap, and triggers the charging-configuration-reloader job.

  7. Validate that the MBean attribute was modified by running the query.sh script in the ecs pod.

    See "Using the query Utility to Test ECE" in ECE Implementing Charging for more information.

Note:

You do not need to restart the ecs, gateway, or ratedeventformatter pods for most ECE configuration changes. Restarts are required only for changes to the database connection URL and to Rated Event Formatter-related, Gateway-related, and Kafka-related appConfiguration parameters.
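
For example, the following override-values.yaml sketch combines steps 2 through 4 of the preceding procedure. The nesting shown is an assumption based on the dotted key names and may differ in your chart version:

charging:
  server:
    degradedModeThreshold: 3
job:
  chargingConfigurationReloader:
    reloadAppConfig:
      runjob: "true"
      command: "charging.server"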

Reloading the Grid Log Level

You can change the grid log level for any ECE component at runtime by using a Kubernetes job.

To reload the grid log level during runtime:

  1. Open your override-values.yaml file for the ECE Helm chart.

  2. Set the job.chargingConfigurationReloader.reloadLogging.runjob key to true.

  3. Set the job.chargingConfigurationReloader.reloadLogging.command key to the following:

    loggerOperation oracle.communication.brm.charging.loggerName loggerLevel

    where:

    • loggerOperation: The type of log operation, which can be setGridLogLevel, setLogLevel, setGridLogLevelForFunctionalDomain, setLogLevelForFunctionalDomain, or updateSubscriberTraceConfiguration.

    • loggerName: The name of the component logger or functional name.

    • loggerLevel: Specifies the log level, which can be ALL, DEBUG, ERROR, INFO, TRACE, or WARN.

    For example, to set the grid log level for the ECE application configuration to error:

    setGridLogLevel oracle.communication.brm.charging.appconfiguration ERROR
  4. To persist the log level changes in the database, set the log4j2.logger.loggerName key to the log level. The loggerName and loggerLevel must match the values from step 3.

    For example, if the command key is set to setGridLogLevel oracle.communication.brm.charging.brmgateway INFO, you must set the key as follows:

    log4j2.logger.brmgateway: INFO
  5. Do not change the pod's specification-related keys that can trigger a restart of the pod during a Helm upgrade. For example, do not change the restartCount, image, or jvmGCOpts keys.

  6. Run the helm upgrade command to update your Helm release:

    helm upgrade BrmReleaseName oc-cn-helm-chart --values OverrideValuesFile -n BrmNameSpace

After the job completes, the logging level is reflected in the ECE grid pods.
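
For example, the following override-values.yaml sketch combines steps 2 through 4 of the preceding procedure; the nesting is an assumption based on the key names given above:

job:
  chargingConfigurationReloader:
    reloadLogging:
      runjob: "true"
      command: "setGridLogLevel oracle.communication.brm.charging.brmgateway INFO"
log4j2:
  logger:
    brmgateway: INFO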

Using a Custom TLS Certificate for Secure Connections

To configure ECE to use a custom TLS certificate for communicating with external service providers, set these keys in the override-values.yaml file for oc-cn-ece-helm-chart:

  • charging.customSSLWallet: Set this to true.

  • charging.secretCustomWallet.name: Set this to the Secret name.

  • charging.emGatewayConfigurations.emGatewayConfigurationList.emGateway1Config.wallet: Set this to /home/charging/wallet/custom/cwallet.sso.

  • charging.emGatewayConfigurations.emGatewayConfigurationList.emGateway2Config.wallet: Set this to the custom wallet path.

  • charging.brmWalletServerLocation: Set this to the custom wallet path.

  • charging.brmWalletClientLocation: Set this to the custom wallet path.

  • charging.brmWalletLocation: Set this to the custom wallet path.

  • charging.radiusGatewayConfigurations.wallet: Set this to the custom wallet path.

  • charging.connectionConfigurations.BRMConnectionConfiguration.brmwallet: Set this to the custom wallet path.

Note:

If the custom wallet is deployed after ECE is installed, perform a Helm upgrade. You can update the wallet location configured for ECE pods such as radiusgateway, emgateway, and brmgateway by using JMX.
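
For example, a minimal override-values.yaml sketch for these keys might look like the following. The nesting follows the dotted key names above, the Secret name is hypothetical, and /home/charging/wallet/custom is assumed as the custom wallet path throughout:

charging:
  customSSLWallet: true
  secretCustomWallet:
    name: ece-custom-wallet
  brmWalletServerLocation: /home/charging/wallet/custom
  brmWalletClientLocation: /home/charging/wallet/custom
  brmWalletLocation: /home/charging/wallet/custom
  emGatewayConfigurations:
    emGatewayConfigurationList:
      emGateway1Config:
        wallet: /home/charging/wallet/custom/cwallet.sso
      emGateway2Config:
        wallet: /home/charging/wallet/custom/cwallet.sso
  radiusGatewayConfigurations:
    wallet: /home/charging/wallet/custom
  connectionConfigurations:
    BRMConnectionConfiguration:
      brmwallet: /home/charging/wallet/custom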

Configuring Subscriber-Based Tracing for ECE Services

You can selectively trace your subscribers' sessions based on one or more subscriber IDs. You can also specify to trace and log selective functions, such as alterations (discounts), charges, and distributions (charge sharing), for each subscriber.

ECE generates log files for the listed subscribers for each session. If a subscriber has multiple sessions, separate log files are generated for each session. The trace file names are unique and are in the format nodeName.subscriberID.sessionID.log. For example, ecs1.SUBSCRIBER1.SESSION1.log.

Note:

ECE does not archive or remove the log files that are generated. Remove or archive the log files periodically to avoid running out of disk space.

To configure subscriber-based tracing for your ECE services:

  1. To enable subscriber-based tracing, do the following:

    1. Open your override-values.yaml file for oc-cn-ece-helm-chart.

    2. Set the following keys under the subscriberTrace section:

      • logMaxSubscribers: Specify the maximum number of subscribers for whom you want to enable tracing. The default value is 100.

      • logMaxSubscriberSessions: Specify the maximum number of sessions for which the logs need to be generated per subscriber. The default value is 24.

      • logExpiryWaitTime: Specify how long to wait, in seconds, before the logging session expires. The default value is 1.

      • logCleanupInterval: Specify the interval time, in seconds, for log cleanup. The default value is 2.

      • logLevel: Specify the log level you want to use for generating logs, such as DEBUG or ERROR. The default value is DEBUG.

      • subscriberList: Specify a list or range of subscriber IDs to trace. For example, you could enter subscriberId1-subscriberId10 to specify the range of subscribers from 1 through 10.

    3. Save and close your override-values.yaml file.

  2. To enable subscriber-based tracing for the alterations, charges, and distribution functions, do the following:

    1. Open your charging-settings.yaml ConfigMap.

    2. Go to the subscriber-trace.xml section of the file.

    3. Update the <componentLoggerList> element to include the list of functions to trace and log.

      For example, to enable subscriber-based tracing and logging for the alteration function, you would add the following lines:

      <componentLoggerList config-class="java.util.ArrayList">
         <componentLogger
            loggerName="ALL"
            loggerLevel="ERROR"
            config-class="oracle.communication.brm.charging.subscribertrace.configuration.internal.ComponentLoggerImpl"/>
         <componentLogger
            loggerName="oracle.communication.brm.charging.rating.alteration"
            loggerLevel="DEBUG"
            config-class="oracle.communication.brm.charging.subscribertrace.configuration.internal.ComponentLoggerImpl"/>
      </componentLoggerList>
    4. Save and close the charging-settings.yaml ConfigMap.

  3. Run the helm upgrade command to update your ECE Helm chart:

    helm upgrade EceReleaseName oc-cn-ece-helm-chart --values OverrideValuesFile -n BrmNameSpace

    where:

    • EceReleaseName is the release name for oc-cn-ece-helm-chart and is used to track this installation instance.

    • OverrideValuesFile is the name and location of your override-values.yaml file for oc-cn-ece-helm-chart.

    • BrmNameSpace is the namespace in which the BRM Kubernetes objects reside.

  4. In your override-values.yaml file for oc-cn-ece-helm-chart, set the charging.jmxport key to 31022.

  5. Label the ecs1-0 pod so that JMX can connect to it:

    kubectl -n namespace label pod ecs1-0 ece-jmx=ece-jmx-external
  6. Update the /etc/hosts file on the remote machine with the worker node of ecs1-0:

    IP_OF_WORKER_NODE ecs1-0.ece-server.namespace.svc.cluster.local
  7. Connect to JConsole by entering this command:

    jconsole ecs1-0.ece-server.namespace.svc.cluster.local:31022

    JConsole starts.

  8. Do the following in JConsole:

    1. In the editor's MBean hierarchy, expand the ECE Logging node.

    2. Expand Configuration.

    3. Expand Operations.

    4. Select updateSubscriberTraceConfiguration.

    5. Click the updateSubscriberTraceConfiguration button.

    6. In the editor's MBean hierarchy, expand the ECE Subscriber Tracing node.

    7. Expand SubscriberTraceManager.

    8. Expand Attributes.

  9. Verify that the values that you specified in steps 1 and 2 appear.

    Note:

    The attributes displayed here are read-only. You can update these attributes by editing the ECE_home/config/subscriber-trace.xml file.

To disable subscriber-based tracing, remove the list of subscribers from the subscriberTrace.subscriberList key in your override-values.yaml file and then run the helm upgrade command.
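
As a reference for step 1, a minimal subscriberTrace section in the override-values.yaml file might look like this. The subscriber IDs are hypothetical, and whether subscriberList takes a quoted range or a YAML list depends on your chart version:

subscriberTrace:
  logMaxSubscribers: 100
  logMaxSubscriberSessions: 24
  logExpiryWaitTime: 1
  logCleanupInterval: 2
  logLevel: DEBUG
  subscriberList: "SUBSCRIBER1-SUBSCRIBER10"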

Enabling SSL Communication When BRM and ECE Are in Separate Clusters

If BRM and ECE are located in different Kubernetes clusters or cloud native environments, enable SSL communication between BRM and the External Manager (EM) Gateway.

To enable SSL communication:

  1. In the CM configuration file (BRM_home/sys/cm/pin.conf), set the em_pointer parameter to the host name and port of either the emgateway service or the load balancer:

    - cm em_pointer ece ip hostname port

    where hostname is the worker node IP or LoadBalancer IP, and port is the emgateway service node port or LoadBalancer exposed port.

  2. In your override-values.yaml file for oc-cn-ece-helm-chart, set the emgateway.serviceFqdn key to the dedicated worker node IP or load balancer IP.

    The emgateway pod can be scheduled on specific worker nodes using nodeSelector.

  3. If this is the first time you are deploying ECE, run the helm install command:

    helm install EceReleaseName oc-cn-ece-helm-chart --namespace BrmNameSpace --values OverrideValuesFile
  4. If you have already deployed ECE, do the following:

    1. Delete the .brm_wallet_date hidden files from the ece-wallet-pvcLocation/brmwallet directory, where ece-wallet-pvcLocation is the directory for the wallet PVC.

    2. Move the ece-wallet-pvcLocation/brmwallet/server directory to server_bkp.

    3. Perform a rolling restart of the ecs1 pod by incrementing the restartCount key in your override-values.yaml file and then running a helm upgrade command. See "Rolling Restart of ECE Pods" for more information.

    4. Delete the emgateway pods. This enables the pods to read the updated BRM Server wallet entries.

    5. Run the helm upgrade command to update the ECE Helm chart:

      helm upgrade EceReleaseName oc-cn-ece-helm-chart --values OverrideValuesFile -n BrmNameSpace
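
For example, assuming the emgateway service is exposed on worker node IP 192.0.2.20 at node port 31283 (both hypothetical values), the pin.conf entry from step 1 and the override-values.yaml key from step 2 might look like this:

- cm em_pointer ece ip 192.0.2.20 31283

emgateway:
  serviceFqdn: "192.0.2.20"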

Using Third-Party Libraries and Custom Mediation Specifications

To use third-party libraries and custom mediation specifications with ECE cloud native:

  1. Place all third-party libraries in the 3rdparty_jars directory inside external-pvc.

  2. Place your custom mediation specifications in the ece_custom_data directory inside external-pvc.

  3. Run the helm install command:

    helm install EceReleaseName oc-cn-ece-helm-chart --namespace BrmNameSpace --values OverrideValuesFile

    where:

    • BrmNameSpace is the namespace in which to create BRM Kubernetes objects for the BRM Helm chart.

    • EceReleaseName is the release name for oc-cn-ece-helm-chart and is used to track this installation instance. It must be different from the one used for the BRM Helm chart.

    • OverrideValuesFile is the path to the YAML file that overrides the default configurations in the chart's values.yaml file.

If you need to load custom mediation specifications into ECE cloud native after the ECE cluster is set up, do the following:

  1. Stop the configloader pod.

    Your mediation specifications will be loaded into the ECE cache from the configloader pod.

  2. Place your custom mediation specifications in the ece_custom_data directory inside external-pvc.

  3. Connect to JConsole. See "Creating a JMX Connection to ECE Using JConsole".

  4. In JConsole, click the MBeans tab.

  5. Expand the ECE Configuration node.

  6. Expand migration.loader.

  7. Expand Attributes.

  8. Set the configObjectsDataDirectory attribute to /home/charging/opt/ECE/oceceserver/sample_data/config_data/specifications/.

    This will load all mediation specifications that are placed inside the specifications directory, including those in the ece_custom_data directory.

    Note:

    To load only specific mediation specifications, set the configObjectsDataDirectory attribute to the absolute path where the specifications are located (that is, the path where external-pvc is mounted in the pod). For example, set the attribute to /home/charging/ext/ece_custom_data or /home/charging/opt/ECE/oceceserver/sample_data/config_data/specifications/ece_custom_data.

  9. Exit JConsole.

  10. In your override-values.yaml file for oc-cn-ece-helm-chart, set the migration.loader.configObjectsDataDirectory key to the same value as specified in step 8.

  11. Run the helm upgrade command to update the ECE Helm release:

    helm upgrade EceReleaseName oc-cn-ece-helm-chart --values OverrideValuesFile -n BrmNameSpace
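
For example, to load only the specifications in the ece_custom_data directory, the key from step 10 might be set as follows in your override-values.yaml file (the path is the example mounted path given in the note above):

migration:
  loader:
    configObjectsDataDirectory: "/home/charging/ext/ece_custom_data"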

Setting Up ECE Cloud Native in Firewall-Enabled Environments

To set up your ECE cloud native services in a firewall-enabled environment, do the following:

  1. Ensure that the conntrack library is installed on your system. The library must be installed so Coherence can form clusters correctly. Most Kubernetes distributions install it for you.

    You can check whether the library is installed by running this command:

    rpm -qa | grep conntrack

    If it is installed, you should see output similar to the following:

    libnetfilter_conntrack-1.0.6-1.el7_3.x86_64
    conntrack-tools-1.4.4-4.el7.x86_64
  2. Kubernetes distributions can create iptables rules that block some types of traffic that Coherence requires to form clusters. If you are not able to form clusters, do the following:

    1. Check whether iptables rules are blocking traffic by running the following command:

      sudo iptables -t nat -v -L POST_public_allow -n

      If you have entries in the chain, you will see output similar to the following. In this sample, the two MASQUERADE lines are the chain entries:

      Chain POST_public_allow (1 references)
      pkts bytes target prot opt in out source destination
      53 4730 MASQUERADE all -- * !lo 0.0.0.0/0 0.0.0.0/0
      0 0 MASQUERADE all -- * !lo 0.0.0.0/0 0.0.0.0/0
    2. Remove any chain entries. To do so, run this command for each chain entry:

      iptables -t nat -v -D POST_public_allow 1
    3. Ensure that the chain entries have been removed by running this command:

      sudo iptables -t nat -v -L POST_public_allow -n

      If all chain entries have been removed, you will see something similar to the following:

      Chain POST_public_allow (1 references)
      pkts bytes target prot opt in out source destination
  3. Open ports on the firewall for the following:

    • The ECE Coherence cluster port. For example, if the coherencePort key in your override-values.yaml file for oc-cn-ece-helm-chart is set to 15000, open 15000/tcp and 15000/udp on the firewall service.

    • Port 19612/tcp, which is used for the pod init check done by the metric service.

    • The port configured as jmxPort, which is used for the JMX connection to the ecs1 pod, and the node ports configured for the other ECE services in the values.yaml file.

    • Ports specific to the network plugin, such as flannel and coredns.

    • Ports required by the volume provisioner.

  4. Add your network interface and worker node subnets to your firewall by doing the following:

    1. Look up the network interface that the Kubernetes cluster uses for communication:

      sudo ip a

      The network interface is returned.

    2. Add the network interface to the firewall's trusted zone.

      For example, to add the cni0 interface (change the interface name to the one specific to your cluster):

      sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent
    3. (Optional) Add worker node subnets to the firewall's trusted zone. For example:

      sudo firewall-cmd --permanent --zone=trusted --add-source=ipAddress/16
      sudo firewall-cmd --permanent --zone=trusted --add-source=ipAddress/16
    4. Restart the firewall services.
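
For example, assuming firewalld, a Coherence port of 15000, and a JMX port of 31022, you might open the ports listed in step 3 as follows; add any network plugin and volume provisioner ports that apply to your environment:

sudo firewall-cmd --permanent --add-port=15000/tcp
sudo firewall-cmd --permanent --add-port=15000/udp
sudo firewall-cmd --permanent --add-port=19612/tcp
sudo firewall-cmd --permanent --add-port=31022/tcp
sudo firewall-cmd --reload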

Enabling Federation in ECE

Enabling federation in ECE allows you to manage and monitor your ecs pods across multiple clusters in the federation. You enable federation by:

  • Adding each Kubernetes cluster as a member of the Coherence federation

  • Specifying which cluster is the primary cluster and which ones are secondary clusters

  • Specifying how to connect to the ECE service

  • Adding the ecs pod to JMX

To enable federation in ECE:

  1. Set up the primary cluster by updating these keys in your override-values.yaml file for oc-cn-ece-helm-chart:

    Note:

    Set the jvmCoherenceOpts keys in each charging.coherenceMemberName section with Coherence Federation parameters for the primary and secondary clusters.

    • charging.clusterName: Set this to the name of your primary cluster.

    • charging.isFederation: Set this to true. This specifies that the cluster is a participant in a federation.

    • charging.primaryCluster: Set this to true.

    • charging.secondaryCluster: Set this to false.

    • charging.cluster.primary.eceServiceName: Set this to the ECE service name that creates the Kubernetes cluster with all ECE components in the primary cluster.

    • charging.cluster.primary.eceServicefqdnOrExternalIP: Set this to the fully qualified domain name (FQDN) of the ECE service running in the primary cluster. For example: ece-server.NameSpace.svc.cluster.local.

    • charging.cluster.secondary.eceServiceName: Set this to the ECE service name that creates the Kubernetes cluster with all ECE components in the secondary cluster.

    • charging.cluster.secondary.eceServicefqdnOrExternalIP: Set this to the FQDN of the ECE service. For example: ece-server.NameSpace.svc.cluster.local.

  2. Install oc-cn-ece-helm-chart by running this command from the helmcharts directory:

    helm install ReleaseName oc-cn-ece-helm-chart --namespace NameSpace --values OverrideValuesFile

    This brings up the necessary pods in the primary cluster.

  3. Set up the secondary cluster by updating these keys in your override-values.yaml file for oc-cn-ece-helm-chart:

    Note:

    Set the jvmCoherenceOpts keys in each charging.coherenceMemberName section with Coherence Federation parameters for the primary and secondary clusters.

    • charging.clusterName: Set this to the name of your secondary cluster.

    • charging.isFederation: Set this to true.

    • charging.secondaryCluster: Set this to true.

    • charging.primaryCluster: Set this to false.

    • charging.cluster.primary.eceServiceName: Set this to the ECE service name that creates the Kubernetes cluster with all ECE components in the primary cluster.

    • charging.cluster.primary.eceServicefqdnOrExternalIP: Set this to the fully qualified domain name (FQDN) of the ECE service running in the primary cluster. For example: ece-server.NameSpace.svc.cluster.local.

    • charging.cluster.secondary.eceServiceName: Set this to the ECE service name that creates the Kubernetes cluster with all ECE components in the secondary cluster.

    • charging.cluster.secondary.eceServicefqdnOrExternalIP: Set this to the FQDN of the ECE service in the secondary cluster. For example: ece-server-2.NameSpace.svc.cluster.local.

  4. Install oc-cn-ece-helm-chart by running this command from the helmcharts directory:

    helm install ReleaseName oc-cn-ece-helm-chart --namespace NameSpace --values OverrideValuesFile

    This brings up the necessary pods in the secondary cluster.

  5. Invoke federation from the primary production site to your secondary production sites by connecting to the ecs1 pod through JConsole:

    1. Update the label for the ecs1-0 pod:

      kubectl label -n NameSpace po ecs1-0 ece-jmx=ece-jmx-external
    2. Update the /etc/hosts file on the remote machine with the worker node of ecs1-0:

      IP_OF_WORKER_NODE ecs1-0.ece-server.namespace.svc.cluster.local
    3. Connect to JConsole:

      jconsole ecs1-0.ece-server.namespace.svc.cluster.local:31022

      JConsole starts.

    4. Invoke start() and replicateAll() with the secondary production site name from the coordinator node of each federated cache in JMX. To do so:

      1. Expand the Coherence node, expand Federation, expand BRMFederatedCache, expand Coordinator, and then expand Coordinator. Click on start(BRM2) and replicateAll(BRM2), where BRM2 is the secondary production site name.

      2. Expand the Coherence node, expand Federation, expand OfferProfileFederatedCache, expand Coordinator, and then expand Coordinator. Click on start(BRM2) and replicateAll(BRM2).

      3. Expand the Coherence node, expand Federation, expand ReplicatedFederatedCache, expand Coordinator, and then expand Coordinator. Click on start(BRM2) and replicateAll(BRM2).

      4. Expand the Coherence node, expand Federation, expand XRefFederatedCache, expand Coordinator, and then expand Coordinator. Click on start(BRM2) and replicateAll(BRM2).

    5. From the secondary production site, verify that data is being federated from the primary production site to the secondary production sites, and that all pods are running.
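
For example, a minimal override-values.yaml sketch for the primary cluster keys in step 1 might look like the following. The cluster name, namespaces, and service names are hypothetical, and the nesting follows the dotted key names above:

charging:
  clusterName: "BRM1"
  isFederation: "true"
  primaryCluster: "true"
  secondaryCluster: "false"
  cluster:
    primary:
      eceServiceName: "ece-server"
      eceServicefqdnOrExternalIP: "ece-server.brm1.svc.cluster.local"
    secondary:
      eceServiceName: "ece-server"
      eceServicefqdnOrExternalIP: "ece-server-2.brm2.svc.cluster.local"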

Enabling Parallel Pod Management in ECE

You can configure the Kubernetes StatefulSet controller to start all ecs pods simultaneously by enabling parallel pod management. To do so:

  1. Open your override-values.yaml file for oc-cn-ece-helm-chart.

  2. Set the parallelPodManagement key to one of the following:
    • true: The ecs pods will start in parallel. You must scale down the replicas manually. See "Scaling Down the ecs Pod Replicas".

    • false: The ecs pods will wait for a pod to be in the Running and Ready state or completely stopped prior to starting or stopping another pod. This is the default.

  3. Deploy the ECE Helm chart (oc-cn-ece-helm-chart):

    helm install EceReleaseName oc-cn-ece-helm-chart --namespace BrmNameSpace --values OverrideValuesFile

Scaling Down the ecs Pod Replicas

To scale down ecs pod replicas when parallelPodManagement is enabled:

  1. Ensure that the ecs pod is in the Usage Processing state.

  2. Check the ecs pod's current replica count by running one of these commands:

    • kubectl get po -n BrmNameSpace | grep -i ecs
    • kubectl get sts ecs -n BrmNameSpace

    where BrmNameSpace is the namespace in which the BRM Kubernetes objects reside.

  3. Reduce the ecs pod's replica count by one by running this command:

    kubectl scale sts ecs --replicas=newReplicaCount -n BrmNameSpace

    where newReplicaCount is the current replica count reduced by one.

    For example, if the current replica count is 6, you would run this command to scale down ecs to 5 replicas:

    kubectl scale sts ecs --replicas=5 -n BrmNameSpace
  4. Wait for the replica to stop.

  5. Continue reducing the ecs pod replica count until you reach the desired amount.

    The minimum recommended ecs replica count is 3.

Customizing SDK Source Code

If you want to customize the ECE SDK source code for any of the sample scripts or Java code, the SDK directory containing these files is exposed under the SDK PVC. You can change any file in the PVC, and the changes are reflected inside the pod.

When you run the SDK job with the build and run options, the customized code is built and run from the job.