7 Improving Performance in BRM Cloud Native
Learn how to improve performance in your Oracle Communications Billing and Revenue Management (BRM) cloud native environment.
Deploying the CM and DM Containers in the Same Pod
You can improve system performance by deploying the CM and Oracle DM containers in the same pod.
To deploy the CM and DM in the same pod:
1. In the oc-cn-helm-chart/templates directory, rename the dm_oracle.yaml file to _dm_oracle.yaml.
2. Copy the dm_oracle containers, volumeMounts, and volumes entries from the oc-cn-helm-chart/templates/dm_oracle.yaml file into the oc-cn-helm-chart/templates/cm.yaml file. For example:
   containers:
     - name: dm-oracle
       image: "{{ .Values.imageRepository }}{{ .Values.ocbrm.dm_oracle.deployment.imageName }}:{{ .Values.ocbrm.dm_oracle.deployment.imageTag }}"
       ports:
         - name: dm-pcp-port
           containerPort: 12950
       env:
         - name: ROTATE_PASSWORD
           value: "{{ .Values.ocbrm.rotate_password }}"
         {{ if eq .Values.ocbrm.rotate_password true }}
         - name: NEW_BRM_ROOT_PASSWORD
           valueFrom:
             secretKeyRef:
               name: oms-schema-password
               key: new_brm_root_password
         {{ end }}
         {{- if eq .Values.ocbrm.existing_rootkey_wallet true }}
         - name: BRM_WALLET
           value: "/oms/client"
         {{- end }}
         - name: USE_ORACLE_BRM_IMAGES
           value: "{{ .Values.ocbrm.use_oracle_brm_images }}"
         - name: TZ
           value: "{{ .Values.ocbrm.TZ }}"
         - name: NLS_LANG
           value: "{{ .Values.ocbrm.db.nls_lang }}"
         - name: PIN_LOG_DIR
           value: "/oms_logs"
         - name: TNS_ADMIN
           value: "/oms/ora_k8"
         - name: DM_DEBUG
           value: "{{ .Values.ocbrm.dm_oracle.deployment.dm_debug }}"
         - name: DM_DEBUG2
           value: "{{ .Values.ocbrm.dm_oracle.deployment.dm_debug2 }}"
         - name: DM_DEBUG3
           value: "{{ .Values.ocbrm.dm_oracle.deployment.dm_debug3 }}"
         - name: SERVICE_FQDN
           value: "localhost"
         {{ if eq .Values.ocbrm.cmSSLTermination true }}
         - name: ENABLE_SSL
           value: "0"
         {{ else }}
         - name: ENABLE_SSL
           valueFrom:
             configMapKeyRef:
               name: oms-common-config
               key: ENABLE_SSL
         {{ end }}
         - name: ORACLE_CHARACTERSET
           valueFrom:
             configMapKeyRef:
               name: oms-common-config
               key: ORACLE_CHARACTERSET
         - name: DM_ORACLE_SERVICE_PORT
           value: "12950"
         - name: OMS_SCHEMA_USERNAME
           valueFrom:
             configMapKeyRef:
               name: oms-common-config
               key: OMS_SCHEMA_USERNAME
         {{ if .Values.ocbrm.brm_crypt_key }}
         - name: BRM_CRYPT_KEY
           valueFrom:
             secretKeyRef:
               name: oms-schema-password
               key: brm_crypt_key
         {{ end }}
         - name: OMS_DB_SERVICE
           valueFrom:
             configMapKeyRef:
               name: oms-common-config
               key: OMS_DB_SERVICE
         - name: OMS_DB_ALIAS
           value: "pindb"
         - name: LOG_LEVEL
           valueFrom:
             configMapKeyRef:
               name: oms-common-config
               key: LOG_LEVEL
         - name: DM_NO_FRONT_ENDS
           valueFrom:
             configMapKeyRef:
               name: oms-dm-oracle-config
               key: DM_NO_FRONT_ENDS
         - name: DM_NO_BACK_ENDS
           valueFrom:
             configMapKeyRef:
               name: oms-dm-oracle-config
               key: DM_NO_BACK_ENDS
         - name: DM_SHM_BIGSIZE
           valueFrom:
             configMapKeyRef:
               name: oms-dm-oracle-config
               key: DM_SHM_BIGSIZE
         - name: DM_MAX_PER_FE
           valueFrom:
             configMapKeyRef:
               name: oms-dm-oracle-config
               key: DM_MAX_PER_FE
         - name: DM_SHM_SEGMENT_SIZE
           valueFrom:
             configMapKeyRef:
               name: oms-dm-oracle-config
               key: DM_SHM_SEGMENT_SIZE
         - name: DM_NO_TRANS_BE_MAX
           valueFrom:
             configMapKeyRef:
               name: oms-dm-oracle-config
               key: DM_NO_TRANS_BE_MAX
         - name: DM_STMT_CACHE_ENTRIES
           valueFrom:
             configMapKeyRef:
               name: oms-dm-oracle-config
               key: DM_STMT_CACHE_ENTRIES
         - name: DM_SEQUENCE_CACHE_SIZE
           valueFrom:
             configMapKeyRef:
               name: oms-dm-oracle-config
               key: DM_SEQUENCE_CACHE_SIZE
         - name: VIRTUAL_TIME_SETTING
           valueFrom:
             configMapKeyRef:
               name: oms-common-config
               key: VIRTUAL_TIME_SETTING
         - name: VIRTUAL_TIME_ENABLED
           valueFrom:
             configMapKeyRef:
               name: oms-common-config
               key: VIRTUAL_TIME_ENABLED
         - name: SHARED_VIRTUAL_TIME_FILE
           value: /oms/virtual_time/shared/pin_virtual_time_file
         - name: BRM_LOG_STDOUT
           value: "FALSE"
         - name: SYNC_PVT_TIME
           value: "{{ .Values.ocbrm.virtual_time.sync_pvt_time }}"
       imagePullPolicy: {{ .Values.ocbrm.imagePullPolicy }}
       terminationMessagePolicy: FallbackToLogsOnError
       livenessProbe:
         exec:
           command:
             - /bin/sh
             - -c
             - sh /oms/test/is_dm_ready.sh
         initialDelaySeconds: 10
         periodSeconds: 10
         failureThreshold: 50
       readinessProbe:
         exec:
           command:
             - /bin/sh
             - -c
             - sh /oms/test/is_dm_ready.sh
         initialDelaySeconds: 15
         periodSeconds: 10
         timeoutSeconds: 1
       volumeMounts:
         - name: secret-volume
           mountPath: /etc/secret
         {{- if eq .Values.ocbrm.existing_rootkey_wallet true }}
         - name: wallet-pvc
           mountPath: /oms/client
         {{- end }}
         - name: dm-oracle-pin-conf-volume
           mountPath: /oms/pin.conf.tmpl
           subPath: pin.conf
         - name: dm-oracle-tnsnames-ora-volume
           mountPath: /oms/ora_k8
         - name: oms-logs
           mountPath: /oms_logs
         - name: virtual-time-volume
           mountPath: /oms/virtual_time/shared
   volumes:
     - name: dm-oracle-pin-conf-volume
       configMap:
         name: dm-oracle-pin-conf-config
     - name: dm-oracle-tnsnames-ora-volume
       configMap:
         name: db-config
         items:
           - key: tnsnames.ora
             path: tnsnames.ora
           - key: sqlnet.ora
             path: sqlnet.ora
3. Copy the dm_oracle annotations entries from the oc-cn-helm-chart/templates/dm_oracle.yaml file into the oc-cn-helm-chart/templates/cm.yaml file. For example:
   annotations:
     configmap_pin_conf_dm_oracle.yaml
     configmap_env_dm_oracle.yaml
4. In the cm-pin-conf-config ConfigMap, update the dm_pointer entry to point to localhost rather than dm-oracle. For example:
- cm dm_pointer databaseNumber ip localhost 12950
5. Run the helm upgrade command to update your Helm release:
helm upgrade BrmReleaseName oc-cn-helm-chart --values OverrideValuesFile -n BrmNameSpace
where:
- BrmReleaseName is the release name for oc-cn-helm-chart and is used to track this installation instance.
- OverrideValuesFile is the file name and path of your override-values.yaml file.
- BrmNameSpace is the namespace in which to create BRM Kubernetes objects for the BRM Helm chart.
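The rename in the first step works because Helm never renders template files whose names begin with an underscore, so _dm_oracle.yaml stops producing a standalone dm-oracle deployment. A minimal sketch of the rename mechanics (the mkdir and touch lines only make the sketch self-contained; in a real chart, run only the mv against your chart copy):

```shell
# Sketch: disabling the standalone DM template by renaming it.
chart=oc-cn-helm-chart/templates
mkdir -p "$chart"              # stand-in directory for this sketch
touch "$chart/dm_oracle.yaml"  # template as shipped (rendered by Helm)
# Files whose names start with "_" are skipped by Helm, so this
# rename prevents the standalone dm-oracle manifest from being created.
mv "$chart/dm_oracle.yaml" "$chart/_dm_oracle.yaml"
ls "$chart"                    # -> _dm_oracle.yaml
```

The same underscore convention applies in reverse later in this chapter, where removing the prefix from a template such as _dm_fusa_2.yaml enables it.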
Tuning Your Application Connection Pools
You can improve an application's performance by tuning the number of threads that are available for its connection with the CM.
When the CM sends a request, the request is assigned a thread from the application's connection pool. When the operation completes, the thread is returned to the pool.
If an incoming request cannot be assigned a thread immediately, it is queued and waits a configurable period of time for a thread to become available. If no thread becomes available during this time, an exception is thrown indicating that the request timed out.
To tune the number of threads in an application's connection pool:
1. Open the application's ConfigMap. For example:
   - For Web Services Manager with Tomcat, the wsm-infranet-properties ConfigMap.
   - For Web Services Manager with WebLogic Server, the wsm-wl-infranet-properties ConfigMap.
2. Edit the parameters shown in Table 7-1.
Table 7-1 Connection Pool Parameters

infranet.connectionpool.minsize
    The minimum number of threads that the application spawns when it starts. The default is 1.

infranet.connectionpool.maxsize
    The maximum number of threads that the application can spawn for accepting requests from the CM. The default is 8.

infranet.connectionpool.timeout
    The time, in milliseconds, that a connection request waits in the pending request queue for a free thread before it times out. If a pending request is not assigned a thread during this time, an exception is thrown. The default is 30000.

infranet.connectionpool.maxidletime
    The time, in milliseconds, that an unused thread remains in the connection pool before it is removed. The default is 10000.
    Important: If this value is set too low, threads might be removed and restored too frequently, which can degrade system performance.

infranet.connectionpool.maxrequestlistsize
    The maximum number of requests that can be held in the pending request queue. The default is 50.
3. Save and close the file.
4. Run the helm upgrade command to update your Helm release:
helm upgrade BrmReleaseName oc-cn-helm-chart --values OverrideValuesFile -n BrmNameSpace
where:
- BrmReleaseName is the release name for oc-cn-helm-chart and is used to track this installation instance.
- OverrideValuesFile is the file name and path of your override-values.yaml file.
- BrmNameSpace is the namespace in which to create BRM Kubernetes objects for the BRM Helm chart.
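As an illustration of the edit in the second step, a Web Services Manager pool tuned for heavier concurrency might look like this in the ConfigMap. These values are illustrative starting points, not Oracle recommendations; tune them against your own workload:

```properties
infranet.connectionpool.minsize=4
infranet.connectionpool.maxsize=16
infranet.connectionpool.timeout=60000
infranet.connectionpool.maxidletime=20000
infranet.connectionpool.maxrequestlistsize=100
```

Raising maxsize increases the concurrency available to the CM at the cost of more threads in the application; if you expect request bursts, consider raising maxrequestlistsize and timeout together so that queued requests do not time out prematurely.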
Configuring Multiple Replicas of Batch Controller
If you load event files into your BRM cloud native deployment through Universal Event (UE) Loader, you can improve throughput by running multiple replicas of the batch-controller pod. Each replica selects a file from those available in the UE Loader input PersistentVolumeClaim (PVC) and copies it into its local file system for processing, while the remaining input files are distributed among the other batch-controller replicas. The time at which a file arrives in the input PVC determines which pod processes it.
To configure the number of replicas:
1. In your override-values.yaml file for oc-cn-helm-chart, set the ocbrm.batch_controller.deployment.replicaCount key to the number of batch-controller pod replicas to create.
2. Run the helm upgrade command to update your Helm release:
helm upgrade BrmReleaseName oc-cn-helm-chart --values OverrideValuesFile -n BrmNameSpace
where:
- BrmReleaseName is the release name for oc-cn-helm-chart and is used to track this installation instance.
- OverrideValuesFile is the file name and path of your override-values.yaml file.
- BrmNameSpace is the namespace in which to create BRM Kubernetes objects for the BRM Helm chart.
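For example, the override-values.yaml entry in the first step might look like this (three replicas is purely illustrative):

```yaml
# Illustrative excerpt from override-values.yaml
ocbrm:
  batch_controller:
    deployment:
      replicaCount: 3
```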
For more information about UE Loader, see "About Rating Events Created by External Sources" in BRM Loading Events.
Deploying Paymentech Data Manager in HA Mode
Paymentech supports only one connection to its batch port at any one time. To support high availability and increase throughput to the Paymentech server, you can deploy two Paymentech Data Manager (dm-fusa) instances, each using a different batch port to connect to the Paymentech server.
Deploying two instances provides failover support for dm-fusa: if one dm-fusa deployment goes down, traffic from the CM is redirected to the other deployment. The load is also distributed across all dm-fusa deployments.
To deploy two dm-fusa images:
1. Edit the keys in the configmap_pin_conf_dm_fusa.yaml file for your system.
2. Edit these keys in the configmap_env_dm_fusa.yaml file:
   DMF_BATCH_PORT_2: "8781"
   DMF_BATCH_SRVR_2: fusa-simulator-2
   DMF_ONLINE_PORT_2: "9781"
   DMF_ONLINE_SRVR_2: fusa-simulator-2
Note:
Unlike the batch port, the Paymentech online port accepts simultaneous transactions. The values of DMF_ONLINE_PORT_2 and DMF_ONLINE_SRVR_2 can therefore be the same as or different from those of the first dm-fusa deployment.
3. Rename the _dm_fusa_2.yaml file to dm_fusa_2.yaml.
4. Run the helm upgrade command to update the Helm release:
helm upgrade BrmReleaseName oc-cn-helm-chart --values OverrideValuesFile -n BrmNameSpace
where:
- BrmReleaseName is the release name for oc-cn-helm-chart and is used to track this installation instance.
- OverrideValuesFile is the file name and path of your override-values.yaml file.
- BrmNameSpace is the namespace in which to create BRM Kubernetes objects for the BRM Helm chart.
Using the FUSA Simulator
For testing purposes, a second deployment of the FUSA simulator is provided in the templates directory. To deploy this second version, rename the _fusa_simulator_2.yaml file to fusa_simulator_2.yaml and then update the Helm release:
helm upgrade BrmReleaseName oc-cn-helm-chart --values OverrideValuesFile -n BrmNameSpace
The deployment scripts and configuration files for the FUSA simulator are provided for testing purposes only. In a production environment, remove these files:
- fusa_simulator.yaml
- fusa_simulator_2.yaml
- configmap_pin_conf_fusa_simulator.yaml
- configmap_env_fusa_simulator.yaml
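The cleanup can be scripted. A sketch, assuming the default chart layout (the mkdir and touch lines only make the sketch self-contained; in a real chart, run only the rm from your chart copy):

```shell
# Sketch: removing the FUSA simulator templates before a production rollout.
chart=oc-cn-helm-chart/templates
mkdir -p "$chart"   # stand-in chart directory for this sketch
touch "$chart/fusa_simulator.yaml" "$chart/fusa_simulator_2.yaml" \
      "$chart/configmap_pin_conf_fusa_simulator.yaml" \
      "$chart/configmap_env_fusa_simulator.yaml"
# Delete all four simulator artifacts so they are never deployed.
rm -f "$chart/fusa_simulator.yaml" "$chart/fusa_simulator_2.yaml" \
      "$chart/configmap_pin_conf_fusa_simulator.yaml" \
      "$chart/configmap_env_fusa_simulator.yaml"
```

Run helm upgrade afterward so the release no longer contains the simulator objects.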