1 Managing Pods and PVCs in BRM Cloud Native
Learn how to manage the pods and PersistentVolumeClaims (PVCs) in your Oracle Communications Billing and Revenue Management (BRM) cloud native environment.
Topics in this document:
- Setting up Autoscaling of BRM Pods
- Automatically Rolling Deployments by Using Annotations
- Restarting BRM Pods
- Setting Minimum and Maximum CPU and Memory Values
- Using Static Volumes
- Assigning Pods to Nodes Using nodeSelector and affinity
- Managing Labels and Annotations
- About Customizing and Extending Pods
Note:
This documentation uses the override-values.yaml file name for ease of use, but you can name the file whatever you want.
Setting up Autoscaling of BRM Pods
You can use the Kubernetes Horizontal Pod Autoscaler to automatically scale up or down the number of BRM pod replicas in your deployment based on a pod's CPU or memory utilization.
For more information about:
- Kubernetes Horizontal Pod Autoscaler, see "Horizontal Pod Autoscaling" in the Kubernetes documentation
- Kubernetes requests and limits, see "Resource Management for Pods and Containers" in the Kubernetes documentation
In BRM cloud native deployments, the Horizontal Pod Autoscaler monitors and scales these BRM pods:
- batch-controller
- brm-rest-services-manager
- cm
- dm-eai
- dm-kafka
- dm-oracle
- realtime-pipe
- rel-daemon
- rated-event-manager
To set up autoscaling for BRM pods:
- Open your override-values.yaml file for oc-cn-helm-chart.
- Enable the Horizontal Pod Autoscaler by setting the ocbrm.isHPAEnabled key to true.
- Specify how often, in seconds, the Horizontal Pod Autoscaler checks a BRM pod's memory usage and scales the number of replicas. To do so, set the ocbrm.refreshInterval key to the number of seconds between each check. For example, set it to 60 for a one-minute interval.
- For each BRM pod, set these keys to the appropriate values for your system (a sample configuration follows this procedure):
- ocbrm.BRMPod.resources.limits.cpu: Set this to the maximum number of CPU cores the pod can utilize.
- ocbrm.BRMPod.resources.requests.cpu: Set this to the minimum number of CPU cores required in a Kubernetes node to deploy a pod.
The pod is set to Pending if the minimum CPU amount is unavailable.
Note:
The node must have enough CPUs available for the CPU requests of all containers of the pod. For example, the cm pod would need to have enough CPUs for the cm container, eai_js container, and perflib container (if enabled).
- ocbrm.BRMPod.resources.limits.memory: Set this to the maximum amount of memory a pod can utilize.
- ocbrm.BRMPod.resources.requests.memory: Set this to the minimum amount of memory required in a Kubernetes node to deploy a pod.
The pod is set to Pending if the minimum amount is unavailable due to insufficient memory.
- ocbrm.BRMPod.hpaValues.minReplica: Set this to the minimum number of pod replicas that can be deployed in a cluster.
If a pod's utilization metrics drop below targetCpu or targetMemory, the Horizontal Pod Autoscaler scales down the number of pod replicas to this minimum count. No changes are made if the number of pod replicas is already at the minimum.
- ocbrm.BRMPod.hpaValues.maxReplica: Set this to the maximum number of pod replicas to deploy when a scale up is triggered.
If a pod's utilization metrics go above targetCpu or targetMemory, the Horizontal Pod Autoscaler scales up the number of pods to this maximum count.
- ocbrm.BRMPod.hpaValues.targetCpu: Set this to the percentage of requestCpu at which to scale up or down a pod.
If a pod's CPU utilization exceeds targetCpu, the Horizontal Pod Autoscaler increases the pod replica count to maxReplica. If a pod's CPU utilization drops below targetCpu, the Horizontal Pod Autoscaler decreases the pod replica count to minReplica.
- ocbrm.BRMPod.hpaValues.targetMemory: Set this to the percentage of requestMemory at which to scale up or scale down a pod.
If a pod's memory utilization exceeds targetMemory, the Horizontal Pod Autoscaler increases the pod replica count to maxReplica. If memory utilization drops below targetMemory, the Horizontal Pod Autoscaler decreases the pod replica count to minReplica.
- Save and close your override-values.yaml file.
- Run the helm upgrade command to update your Helm release:
helm upgrade BrmReleaseName oc-cn-helm-chart --values OverrideValuesFile --namespace BrmNameSpace
where:
- BrmReleaseName is the release name for oc-cn-helm-chart and is used to track this installation instance.
- OverrideValuesFile is the file name and path to your override-values.yaml file.
- BrmNameSpace is the namespace in which to create BRM Kubernetes objects for the BRM Helm chart.
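For reference, the following is an illustrative autoscaling configuration in override-values.yaml, using the cm pod as an example. The key layout follows the settings described in this procedure; the replica counts, target percentages, and resource values are sample numbers only, not tuning recommendations:
ocbrm:
  isHPAEnabled: true
  refreshInterval: 60
  cm:
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        cpu: 2000m
        memory: 2Gi
    hpaValues:
      minReplica: 1
      maxReplica: 4
      targetCpu: 75
      targetMemory: 80
After the upgrade, you can check the resulting autoscalers with kubectl. For example:
kubectl get hpa -n BrmNameSpace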
Automatically Rolling Deployments by Using Annotations
Whenever a ConfigMap entry or a Secret file is modified, you must restart its associated pod. This updates the container's configuration, but the application is notified about the configuration updates only if the pod's deployment specification has changed. Thus, a container could use the new configuration while the application keeps running with its old configuration.
You can configure a pod to automatically notify an application when a container's configuration has changed. To do so, configure a pod to automatically update its deployment specification whenever a ConfigMap or Secret file changes by using the sha256sum function. Add an annotations section similar to this one to the pod's deployment specification:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
For more information, see "Automatically Roll Deployments" in Helm Chart Development Tips and Tricks.
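To also roll the deployment when a Secret changes, you could add a similar checksum annotation. This sketch assumes the Secret is rendered from a chart template named secret.yaml:
checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}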
Restarting BRM Pods
You may occasionally need to restart a BRM pod, such as when an error occurs that you cannot fix or a pod is stuck in a terminating status. You restart a BRM pod by deleting it with kubectl.
To restart a BRM pod:
- Retrieve the names of the BRM pods by entering this command:
kubectl get pods -n NameSpace
where NameSpace is the namespace in which Kubernetes objects for the BRM Helm chart reside.
The following provides sample output:
NAME                         READY   STATUS    RESTARTS   AGE
cm-6f79d95887-lp7qs          1/1     Running   0          6d17h
dm-oracle-5496bf8d94-vjgn7   1/1     Running   0          6d17h
dm-kafka-d5ccf6dbd-l968b     1/1     Running   0          6d17h
- Delete a pod by entering this command:
kubectl delete pod PodName -n NameSpace
where PodName is the name of the pod. For example, to delete and restart the cm pod, you would enter:
kubectl delete pod cm-6f79d95887-lp7qs -n NameSpace
Setting Minimum and Maximum CPU and Memory Values
You can specify the minimum and maximum CPU and memory resources that BRM cloud native containers can use. Setting minimum values ensures that containers can deploy successfully, while setting maximum values prevents containers from consuming excessive resources, which could lead to system crashes.
Note:
For a pod to be scheduled on a node, the node must have enough CPUs available for the CPU requests of all containers of the pod. For example, in case of the cm pod, the node would need to have enough CPUs for the cm container, eai_js container, and perflib container (if enabled).
For Java-based containers, you should also tune the JVM heap memory parameters when tuning container-level resources. You make this adjustment through component-level keys.
To set the minimum and maximum amount of CPU and memory for containers, include the following keys in your override-values.yaml file for oc-cn-helm-chart, oc-cn-init-db-helm-chart, oc-cn-op-job-helm-chart, or oc-cn-ece-helm-chart:
componentName:
  resources:
    requests:
      cpu: value
      memory: value
    limits:
      cpu: value
      memory: value
where:
- componentName: Specifies the component name in the values.yaml file, such as cm, rel_daemon, or vertex_dm.
- limits.cpu: Specifies the maximum number of CPU cores the container can utilize, such as 1000m.
- limits.memory: Specifies the maximum amount of memory a container can utilize, such as 2000Mi.
- requests.cpu: Specifies the minimum number of CPU cores reserved in a Kubernetes node to deploy a container, such as 50m.
- requests.memory: Specifies the minimum amount of memory reserved in a Kubernetes node to deploy a container, such as 256Mi.
You must perform a Helm install or Helm upgrade after making any changes.
For more information about requests and limits, see "Resource Management for Pods and Containers" in the Kubernetes documentation.
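For example, the following illustrative override-values.yaml entry sets requests and limits for the cm component. The values shown are placeholders, not tuning recommendations:
cm:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 2000m
      memory: 4Gi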
Using Static Volumes
By default, the BRM cloud native pods use dynamic volume provisioning. However, you can modify one or more pods to use static volumes instead to meet your business requirements. To do so, you add createOption keys to the override-values.yaml file for each pod that you want to use static volumes and then redeploy your Helm charts.
To change a pod to use dynamic volumes, remove the createOption keys from your override-values.yaml file and then redeploy your Helm charts.
To change one or more pods to use static volumes, do the following:
- Open the override-values.yaml file for the appropriate Helm chart: oc-cn-op-job-helm-chart, oc-cn-helm-chart, or oc-cn-ece-helm-chart.
- Under the appropriate pod's volume section, update the createOption keys.
For example, to use a hostPath-based volume, you would update the createOption key as shown below:
volume:
  createOption:
    hostPath:
      path: pathOnNode
      type: Directory
where pathOnNode is the location on the host system of the external PV.
Note:
The batchpipe, rated-event-manager, and rel_daemon pods require a separate volume for each schema in a multischema system. In this case, use pathOnNode/SCHEMA. When you perform a helm install or upgrade, the Helm chart replaces SCHEMA with the schema number: 1 for schema 1, 2 for schema 2, and so on.
- Save and close your override-values.yaml file.
- Redeploy your Helm charts. For more information, see "Deploying BRM Cloud Native Services" in BRM Cloud Native Deployment Guide.
The following shows sample override-values.yaml keys for changing the brm-sdk, batchpipe, and batch-controller pods to use a static hostPath-based volume:
ocbrm:
  brm_sdk:
    volume:
      storage: 50Mi
      createOption:
        hostPath:
          path: /sample/vol
          type: Directory
  batchpipe:
    volume:
      output:
        storage: 100Mi
        createOption:
          hostPath:
            path: /sample/vol/out/SCHEMA
            type: Directory
      reject:
        storage: 100Mi
        createOption:
          hostPath:
            path: /sample/vol/reject/SCHEMA
            type: Directory
  batch-controller:
    volume:
      input:
        storage: 50Mi
        createOption:
          hostPath:
            path: /sample/vol/input
            type: Directory
Assigning Pods to Nodes Using nodeSelector and affinity
You can control where BRM cloud native pods run within your Kubernetes cluster by using the nodeSelector and affinity keys. Use these keys to ensure specific pods are scheduled only on suitable nodes, or to place certain pods together or apart for operational or compliance purposes. You can isolate workloads by scheduling critical components on dedicated nodes or optimize resource usage on specialized hardware, such as SSDs or GPU nodes.
BRM Helm charts expose nodeSelector and affinity as user-editable keys in the override-values.yaml file. This approach eliminates the need to modify chart templates.
For more information, see "Assigning Pods to Nodes" in the Kubernetes documentation.
Using nodeSelector
The nodeSelector key lets you specify that a pod runs only on nodes with specific labels, such as a particular hardware type or geographic region.
To assign a BRM pod to specific nodes with nodeSelector:
- List node labels to find suitable scheduling criteria:
kubectl get nodes --show-labels
- In your override-values.yaml file for the appropriate Helm chart, add a nodeSelector section to the desired BRM component. For example, to add it to the CM:
cm:
  nodeSelector:
    disktype: ssd
This configures the cm pod to run only on nodes labeled with disktype set to ssd.
- Run the helm upgrade command for the appropriate Helm chart.
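If no node carries a suitable label yet, you can add one with kubectl before updating the chart. In this illustrative command, NodeName and the disktype=ssd label are placeholders:
kubectl label nodes NodeName disktype=ssd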
Using Affinity
The affinity key provides advanced scheduling controls:
- Node affinity: Prefer or require scheduling on nodes with specific labels.
- Pod affinity: Prefer or require placement with other specific pods.
- Pod anti-affinity: Prefer or require scheduling away from other specific pods.
To control pod placement using affinity or anti-affinity:
- Choose whether you want pods to run together (affinity) or apart (anti-affinity), and on what criteria (such as labels or topology).
- In your override-values.yaml file for the appropriate Helm chart, add an affinity section to the desired BRM component. For example, to add it to the realtime-pipe pod:
realtime_pipe:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - cm
          topologyKey: "kubernetes.io/hostname"
- Run the helm upgrade command for the appropriate Helm chart.
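For comparison, the following is an illustrative anti-affinity rule that keeps realtime-pipe replicas on separate worker nodes. The structure follows standard Kubernetes affinity syntax; the app label value used here is an assumption for this sketch:
realtime_pipe:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - realtime-pipe
        topologyKey: "kubernetes.io/hostname"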
Managing Labels and Annotations
You can add and manage Kubernetes labels and annotations for BRM resources created by Helm charts. This capability lets you integrate with cluster management tools, improve operations, and enable features such as webhooks or sidecar injectors that rely on metadata. You can add this metadata without modifying chart templates.
Table 1-1 describes how BRM cloud native uses labels and annotations.
Table 1-1 About Labels and Annotations
| Term | Description | Examples |
|---|---|---|
| Labels | Key-value pairs that identify and group resources. | env=prod, team=sre |
| Annotations | Key-value pairs that tools read to determine behavior or actions. | Sidecar injection, webhook triggers, credential injection |
BRM Helm charts automatically add two standard labels to each resource:
- app.kubernetes.io/name: The application name for the resource.
- app.kubernetes.io/part-of: The BRM application or component it belongs to, such as brm or bcws.
When you configure custom labels, the system adds them in addition to these defaults.
You can extend labels and annotations for most BRM resources created by the charts, such as ConfigMaps, Secrets, Jobs, and Domain resources.
You can define metadata at three levels. If a key exists at multiple levels, BRM applies precedence as shown below:
- Resources: Applies to a single resource by its metadata.name. For resources with ordinal suffixes, such as my-app-1 or my-app-2, use the base name, such as my-app.
- Kind: Applies to all resources of a specific Kubernetes kind. Use the exact name of the kind, such as Deployment, ConfigMap, or ServiceAccount.
- Global: Applies to all resources in the chart.
To add labels or annotations:
- Edit your override-values.yaml file for the appropriate chart. Use the metadata.labels and metadata.annotations sections with the global, kind, and resource subkeys (a global-level sketch follows this procedure).
- Run the helm upgrade command for the appropriate chart.
- Validate the labels and annotations on the resources using kubectl. For example:
- To view labels on a deployment:
kubectl get deployment cm -n BrmNamespace --show-labels
- To view annotations on a ConfigMap:
kubectl get configmap cm-dep-plan -n BrmNamespace -o jsonpath='{.metadata.annotations}'
- To list resources with a specific label:
kubectl get pods -n BrmNamespace -l env=prod
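For example, the following sketch applies global-level labels and annotations to all resources in the chart. It assumes the global subkey layout described in step 1; the label and annotation values are illustrative only:
metadata:
  labels:
    global:
      env: "prod"
      team: "sre"
  annotations:
    global:
      example.com/owner: "billing"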
Example Resource Labels and Annotations
This example demonstrates how to configure resource labels and annotations in your override-values.yaml file:
metadata:
  labels:
    resources:
      cm-dep-plan: # exact metadata.name (base name if ordinals are added at runtime)
        component: "bcws"
        managedBy: "helm"
  annotations:
    resources:
      cm-dep-plan:
        checksum/config: "abc123" # example for rolling restarts based on config checksum
Example Kind Labels and Annotations
This example demonstrates how to configure kind labels and annotations in your override-values.yaml file:
metadata:
  labels:
    kind:
      Deployment:
        workload: "stateless"
      ConfigMap:
        configRole: "primary"
  annotations:
    kind:
      ServiceAccount:
        eks.example.com/role-arn: "arn:example:iam::123456789012:role/brm-sa-role"
About Customizing and Extending Pods
You can add or override settings in BRM cloud native pod specifications without modifying Helm chart templates or waiting for a product update. By adding values under the addOnPodSpec key in your override-values.yaml file, you can enable new features, enforce custom security policies, or change pod deployment behavior from a single configuration location.
During a Helm install or Helm upgrade, the Helm chart checks for any addOnPodSpec values. Settings defined under this key are merged into the pod specification at deployment and override identical settings specified elsewhere in the chart.
Note:
If a setting appears both elsewhere in the chart and under addOnPodSpec, the value in addOnPodSpec takes precedence.
Before customizing or extending pods, carefully review the following guidelines:
- Use Direct Fields First: Where possible, use dedicated values.yaml keys, such as affinity or nodeSelector, for common pod configuration requirements.
- Use addOnPodSpec for Extensions or Overrides: Use addOnPodSpec for custom settings not available as dedicated keys, or to override default behaviors.
- Document Your Customizations: Record any changes made in addOnPodSpec for troubleshooting, maintenance, and auditing purposes.
The general steps for customizing or extending a pod are as follows:
- Open the override-values.yaml file for your Helm chart.
- Add the settings under the addOnPodSpec key.
For example, to add a securityContext to a pod:
addOnPodSpec:
  securityContext:
    runAsUser: 1200
    fsGroup: 1000
This specifies the user and group under which a pod runs.
- Deploy or update your BRM environment using the helm install or helm upgrade command with your updated override-values.yaml file.
Example Specifying Node Tolerations
This example demonstrates using the addOnPodSpec key to control on which nodes the pods can run:
addOnPodSpec:
  tolerations:
  - key: "example.com/role"
    operator: "Equal"
    value: "special"
    effect: "NoSchedule"
Example Distributing Pods Across Nodes
This example demonstrates using the addOnPodSpec key to distribute pods evenly across nodes, improving resilience and availability:
addOnPodSpec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: your-app