10 Configuring ECE Services

Learn how to configure Oracle Communications Elastic Charging Engine (ECE) services by configuring and deploying the ECE Helm chart.

Topics in this document:

  • Adding Elastic Charging Engine Keys

  • Enabling SSL in Elastic Charging Engine

  • Connecting ECE Cloud Native to an SSL-Enabled Database

  • About Elastic Charging Engine Volume Mounts

  • Loading Custom Diameter AVP

  • Generating CDRs for Unrated Events

  • Configuring ECE to Support Prepaid Usage Overage

  • Recording Failed ECE Usage Requests

  • Loading BRM Configuration XML Files

  • Setting Up Notification Handling in ECE

  • Configuring ECE for a Multischema BRM Environment

For information about performing administrative tasks on your ECE cloud native services, see "Administering ECE Cloud Native Services" in BRM Cloud Native System Administrator's Guide.

Before installing the ECE Helm chart, you must first publish the metadata, config, and pricing data from the PDC pod.

Note:

Kubernetes looks for the CPU limit setting for pods. If it's not set, Kubernetes allocates a default value of 1 CPU per pod, which causes CPU overhead and Coherence scalability issues. To prevent this from happening, override each ECE pod's CPU limit to be the maximum CPU available on the node.
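
For example, a minimal sketch of such an override, assuming that each component's list entry (emgateway is used purely for illustration) accepts a standard Kubernetes resources block; the exact key names depend on your chart's values.yaml:

  emgateway:
     emgatewayList:
        - coherenceMemberName: "emgateway1"
          # Assumed key: a standard Kubernetes resources block for the pod.
          # Set the CPU limit to the maximum CPU available on the node.
          resources:
             limits:
                cpu: "16"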

Adding Elastic Charging Engine Keys

Table 10-1 lists the keys that directly impact ECE deployment. Add these keys to your override-values.yaml file for oc-cn-ece-helm-chart. In the table, component-name should be replaced with the name of the ECE component, such as emgateway, radiusgateway, diametergateway, httpgateway, and ratedeventformatter.

Table 10-1 Elastic Charging Engine Keys

Each entry below lists the key name, followed by its path in the values.yaml file and a description of the key.

imagePullPolicy

container

The default value is IfNotPresent, which specifies to not pull the image if it's already present. Applicable values are IfNotPresent and Always.

containerPort

container

The port number that is exposed by this container.

chargingSettingManagementPath

volume

The location of the management folder, which contains the charging-settings.xml, test-tools.xml, and migration-configuration.xml files.

The default is /home/charging/opt/ECE/oceceserver/config/management.

chargingSettingPath

volume

The location of the configuration folder for ECE. The default is /home/charging/opt/ECE/oceceserver/config.

walletPassword

secretEnv

The string password for opening the wallet.

JMSQUEUEPASSWORD

secretEnv

The password for the JMS queue, which is stored under the key jms.queue.notif.pwd in the wallet.

RADIUSSHAREDSECRET

secretEnv

The RADIUS secret password, which is stored as radius.secret.pwd in the wallet.

BRMGATEWAYPASSWORD

secretEnv

The BRM Gateway password.

PDCPASSWORD

secretEnv

The PDC password, which is stored as pdc.pwd in the wallet.

Note: This key must match the pdcAdminUserPassword key in the override-values.yaml file for oc-cn-helm-chart.

PDCKEYSTOREPASSWORD

secretEnv

The PDC KeyStore password, which is stored as pdc.keystore.pwd in the wallet.

Note: This key must match the keyStoreIdentityStorePass key in the override-values.yaml file for oc-cn-helm-chart.

PERSISTENCEDATABASEPASSWORD

secretEnv

The database schema user password. This user is created using ece-persistence-job if it doesn't exist in the database.

ECEHTTPGATEWAYSERVERSSLKEYSTOREPASSWORD

secretEnv

The server SSL KeyStore password for the HTTP Gateway.

BRM_SERVER_WALLET_PASSWD

secretEnv

The password to open the BRM server wallet.

BRM_ROOT_WALLET_PASSWD

secretEnv

The root wallet password of the BRM wallet.

BRMDATABASEPASSWORD

secretEnv

The password for the BRM database.

If you are connecting ECE to a BRM multischema database, use these entries instead:

BRMDATABASEPASSWORD:  
   - schema: 1     
     PASSWORD: Password   
   - schema: 2     
     PASSWORD: Password

where:

  • schema is the schema number. Enter 1 for the primary schema, 2 for the secondary schema, and so on.
  • PASSWORD is the schema password.

SSLENABLED

sslconnectioncertificates

Whether SSL is enabled in ECE (true) or not (false). The default is true.

DNAME

sslconnectioncertificates

The domain name. For example: "CN=Admin, OU=Oracle Communication Application, O=Oracle Corporation, L=Redwood Shores, S=California, C=US"

SSLKEYSTOREVALIDITY

sslconnectioncertificates

The validity of the KeyStore, in days. A value of 200 indicates that the validity is 200 days.

runjob

job.sdk

Whether the SDK job needs to be run as part of the deployment (true) or not (false). The default value is false.

If set to true, a default SDK job is run as part of the Helm installation or upgrade.

replicas

component-name.component-nameList

The number of replicas to create when deploying the chart. The default replica count is 3 for the ecs server and 1 for all other components.

coherenceMemberName

component-name.component-nameList

The Coherence member name under which this component will be added to the Coherence cluster.

jmxEnabled

component-name.component-nameList

Whether the component is JMX-enabled (true) or not (false).

coherencePort

component-name.component-nameList

The optional value indicating the Coherence port used by the component.

jvmGCOpts

component-name.component-nameList

Provides Java JVM options, such as GC settings, maximum memory, and minimum memory.

jvmJMXOpts

component-name.component-nameList

Provides JMX-related options.

jvmCoherenceOpts

component-name.component-nameList

Provides Coherence-related options, such as the override file and the cache config file.

jvmOpts

component-name.component-nameList

This field is empty by default, and any additional JVM arguments can be provided here.

labels

charging

The label for all pods in the deployment. The default value is ece.

jmxport

charging

The JMX port exposed by ece, which can be used to log in to JConsole. The default is 31022.

terminationGracePeriodSeconds

charging

Used for graceful shutdown of the pods. The default value is 180 seconds.

persistenceEnabled

charging

Whether to persist the ECE cache data into the Oracle database. The default is true.

See "Enabling Persistence in ECE" in BRM Cloud Native System Administrator's Guide for more information.

hpaEnabled

charging

Whether to enable autoscaling using Kubernetes Horizontal Pod Autoscaler.

See "Setting Up Autoscaling of ECE Pods" in BRM Cloud Native System Administrator's Guide for more information.

timeoutSurvivorQuorum

charging

The minimum number of cluster members that must remain in the cluster, to avoid data loss, when the cluster service terminates suspect members. The default is 3.

To calculate the minimum number, use this formula:

(chargingServerWorkerNodes – 1) * (sum of all ecs pods/chargingServerWorkerNodes)
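
For example, with 3 charging server worker nodes and 9 ecs pods in total, the minimum is (3 – 1) * (9 / 3) = 6.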

chargingServerWorkerNodes

charging

The number of charging server worker nodes. The default is 3.

jmxexporterport

charging.jmxexporter

The JMX exporter port used to monitor the state of the ECE components. The default is 9090.

jmxConfigYaml

charging.jmxexporter

By default, this configuration YAML file is empty, so no filters are applied when data is accessed through the JMX exporter port. If filtering is needed, add a blocklist and an allowlist of MBean names.

jmx_prometheus_jar

charging.jmxexporter

The JAR file used for running the JMX exporter. By default, jmx_prometheus_javaagent-0.12.0.jar is used. To use a different version of the JAR file, add it to the external mount PVC for third-party JAR files and provide the new JAR name here.

pod_state_id_tag

charging.jmxexporter

The format of the mBean stateManager stateidentifier, which identifies the current status of the ECE deployment. This is the default format for the current version of the JMX exporter JAR that is being used. Update this value with the new format if the JMX exporter JAR version is upgraded.

pod_partition_unbalanced_tag

charging.jmxexporter

The format of the mBean partition unbalanced, which identifies the unbalanced partition count. This is the default format for the current version of the JMX exporter JAR that is being used. Update this value with the new format if the JMX exporter JAR version is upgraded.

eceServiceName

charging.cluster.primary

The ECE service name that creates the Kubernetes cluster with all of the ECE components in the primary cluster. The default is ece-server.

eceServicefqdn

charging.cluster.primary

The fully qualified domain name (FQDN) of the ECE service running in the primary cluster. For example: ece-server.NameSpace.svc.cluster.local.

eceServiceName

charging.cluster.secondary

The ECE service name that creates the Kubernetes cluster with all of the ECE components in the secondary cluster.

eceServicefqdn

charging.cluster.secondary

The fully qualified domain name (FQDN) of the ECE service in the secondary cluster. For example: ece-server.NameSpace.svc.cluster.local.

<tags>

migration

The different tags indicating the values that will be stored under migration-configuration.xml. The tag names are the same as the ones used in the migration-configuration.xml file for ease of mapping.

<tags>

testtools

The different tags indicating the values that will be stored under test-tools.xml. The tag names are the same as the ones used in the test-tools.xml file for ease of mapping.

<module>

log4j2.logger

The log level for the corresponding module.

<tags>

eceproperties

The different tags indicating the values that will be stored under ece.properties. The tag names are the same as the ones used in the ece.properties file for ease of mapping.

<tags>

JMSConfiguration

The different tags indicating the values that will be stored under JMSConfiguration.xml. The tag names are the same as the ones used in the JMSConfiguration.xml file for ease of mapping.

name

secretEnv

The user-defined name to give for the Secrets. The default is secret-env.

name

pv.external

The name of the external PV. The default is external-pv.

hostpath

pv.external

The location on the host system of the external PV. The default is /scratch/qa/ece_config/.

accessModes

pv.external

The access mode for the PV. The default is ReadWriteMany.

capacity

pv.external

The maximum capacity of the external PV.

name

pvc.logs

The name of the PVC for the ECE log files. The default is logs-pv.

hostPath

pvc.logs

The location on the host system for the ECE log files.

accessModes

pvc.logs

The access mode for the PVC. The default is ReadWriteMany, since all of the different component pods will be writing their respective log files into a single logs directory.

storage

pvc.logs

The storage space required initially to create this PVC. If the specified storage is not available on the machine, the PVC is not created and the pods do not get initialized.

name

pvc.brmconfig

The name of the BRM Config PVC, in which all BRM configuration files such as the payload config file are exposed outside of the pod.

accessModes

pvc.brmconfig

The access mode for the PVC. The default is ReadWriteMany.

storage

pvc.brmconfig

The storage space required initially to create this PVC. If the specified storage is not available on the machine, the PVC is not created and the pods do not get initialized.

name

pvc.sdk

The name for the SDK PVC, in which all of the SDK files such as the config, sample scripts, and source files are exposed to the user.

accessModes

pvc.sdk

The access mode for the PVC. The default is ReadWriteMany.

storage

pvc.sdk

The storage space required initially to create this PVC. If the specified storage is not available on the machine, the PVC is not created and the pods do not get initialized.

name

pvc.wallet

The name for the wallet PVC, in which the wallet directory will be stored and shared by all of the ecs pods. The default is ece-wallet-pvc.

accessModes

pvc.wallet

The access mode for the PVC. The default is ReadWriteMany.

storage

pvc.wallet

The storage space required initially to create this PVC. If the specified storage is not available on the machine, the PVC is not created and the pods do not get initialized.

name

pvc.external

The name for the external PVC, in which the third-party JARs can be placed to share with the pods. The default is external-pvc.

accessModes

pvc.external

The access mode for the PVC. The default is ReadWriteMany.

storage

pvc.external

The storage space required initially to create this PVC. If the specified storage is not available on the machine, the PVC is not created and the pods do not get initialized.

name

pvc.rel

The name of the RE Loader PVC as created in the BRM deployment.

name

storageClass

The name of the storage class.

Enabling SSL in Elastic Charging Engine

To complete the configuration for SSL setup in ECE:

  1. Set these keys in the override-values.yaml file for oc-cn-ece-helm-chart:

    • sslconnectioncertificates.SSLENABLED: Set this to true.

    • sslEnabled: Set this to true in emGatewayConfigurations, httpGatewayConfigurations, and BRMConnectionConfiguration.

    • migration.pricingUpdater.keyStoreLocation: Set this to /home/charging/opt/ECE/oceceserver/config/client.jks.

    • charging.brmWalletServerLocation: Set this to /home/charging/wallet/brmwallet/server/cwallet.sso.

    • charging.brmWalletClientLocation: Set this to /home/charging/wallet/brmwallet/client/cwallet.sso.

    • charging.brmWalletLocation: Set this to /home/charging/wallet/brmwallet.

    • charging.emGatewayConfigurations.emGatewayConfigurationList.emGateway1Config.wallet: Set this to the BRM wallet location.

    • charging.emGatewayConfigurations.emGatewayConfigurationList.emGateway2Config.wallet: Set this to the BRM wallet location.

    • charging.radiusGatewayConfigurations.wallet: Set this to the BRM wallet location.

    • charging.connectionConfigurations.BRMConnectionConfiguration.brmwallet: Set this to the BRM wallet location.

  2. Copy the SSL certificates, such as client.jks and public-admin.cer, generated from PDC to the pdc_ssl_keystore directory in the external PVC.

  3. Configure the connectionURL, port, and protocol as per the PDC-configured t3s channel.
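
For reference, the keys from step 1 might be grouped in the override-values.yaml file roughly as shown in this sketch. The wallet paths are the defaults listed earlier in this chapter, and the exact nesting (for example, whether a configuration is a list or a map) must match your chart's values.yaml:

  sslconnectioncertificates:
     SSLENABLED: "true"
  migration:
     pricingUpdater:
        keyStoreLocation: "/home/charging/opt/ECE/oceceserver/config/client.jks"
  charging:
     brmWalletLocation: "/home/charging/wallet/brmwallet"
     brmWalletServerLocation: "/home/charging/wallet/brmwallet/server/cwallet.sso"
     brmWalletClientLocation: "/home/charging/wallet/brmwallet/client/cwallet.sso"
     emGatewayConfigurations:
        sslEnabled: "true"
        emGatewayConfigurationList:
           emGateway1Config:
              wallet: "/home/charging/wallet/brmwallet"   # BRM wallet location
           emGateway2Config:
              wallet: "/home/charging/wallet/brmwallet"   # BRM wallet location
     httpGatewayConfigurations:
        sslEnabled: "true"
     radiusGatewayConfigurations:
        wallet: "/home/charging/wallet/brmwallet"
     connectionConfigurations:
        BRMConnectionConfiguration:
           sslEnabled: "true"
           brmwallet: "/home/charging/wallet/brmwallet"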

Connecting ECE Cloud Native to an SSL-Enabled Database

To connect your ECE cloud native services to an SSL-enabled Oracle database:

  1. Prepare for persistence schema creation.

    1. Go to the oc-cn-ece-helm-chart directory, and then create a directory named ece_ssl_db_wallet/schema1:

      cd oc-cn-ece-helm-chart
      mkdir -p ece_ssl_db_wallet/schema1
    2. Save the contents of the ECE SSL database wallet to the schema1 directory.

    3. Grant the necessary permissions to the ece_ssl_db_wallet directory:

      chmod -R 775 ece_ssl_db_wallet
    4. For multischema systems only, create a directory named schema2 in the ece_ssl_db_wallet directory and then copy the ECE SSL database wallet to the schema2 directory.

  2. Configure the SSL database wallets in the external volume mount.

    1. Go to the external volume mount location (ece-wallet-pvc).

    2. Create a directory named ece_ssl_db_wallet/schema1:

      mkdir -p ece_ssl_db_wallet/schema1
    3. Save the contents of the ECE SSL database wallet to the ece_ssl_db_wallet/schema1 directory.

    4. Create a directory named brm_ssl_db_wallet/schema1:

      mkdir -p brm_ssl_db_wallet/schema1
    5. Save the contents of the BRM SSL database wallet to the brm_ssl_db_wallet/schema1 directory.

    6. Grant the necessary permissions to both new directories:

      chmod -R 775 ece_ssl_db_wallet brm_ssl_db_wallet
    7. For multischema systems only, create a schema2 directory inside both the ece_ssl_db_wallet and brm_ssl_db_wallet directories. Then, copy the contents of the ECE SSL database wallet to the ece_ssl_db_wallet/schema2 directory, and copy the contents of the BRM SSL database wallet to the brm_ssl_db_wallet/schema2 directory.

  3. Configure ECE for an SSL-enabled Oracle persistence database.

    Under the charging.connectionConfigurations.OraclePersistenceConnectionConfigurations section, set the following keys:

    • dbSSLEnabled: Set this to true.

    • dbSSLType: Set this to the type of SSL connection required for connecting to the database: oneway, twoway, or none.

    • sslServerCertDN: Set this to the SSL server certificate distinguished name (DN). The default is DC=local,DC=oracle,CN=pindb.

    • trustStoreLocation: Set this to /home/charging/ext/ece_ssl_db_wallet/schema1/cwallet.sso.

    • trustStoreType: Set this to the type of file specified as the TrustStore for SSL connections: SSO or pkcs12.

  4. Configure customerUpdater for an SSL-enabled Oracle AQ database queue.

    Under the customerUpdater.customerUpdaterList.oracleQueueConnectionConfiguration section, set the following keys:

    • dbSSLEnabled: Set this to true.

    • dbSSLType: Set this to the type of SSL connection required for connecting to the database: oneway, twoway, or none.

    • sslServerCertDN: Set this to the SSL server certificate distinguished name (DN). The default is DC=local,DC=oracle,CN=pindb.

    • trustStoreLocation: Set this to /home/charging/ext/brm_ssl_db_wallet/schema1/cwallet.sso.

    • trustStoreType: Set this to the type of file specified as the TrustStore for SSL connections: SSO or pkcs12.
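
For reference, a sketch of the step 3 and step 4 settings in the override-values.yaml file, using the default values given above and a one-way SSL connection. Adjust the values and nesting to match your environment and your chart's values.yaml:

  charging:
     connectionConfigurations:
        OraclePersistenceConnectionConfigurations:
           dbSSLEnabled: "true"
           dbSSLType: "oneway"
           sslServerCertDN: "DC=local,DC=oracle,CN=pindb"
           trustStoreLocation: "/home/charging/ext/ece_ssl_db_wallet/schema1/cwallet.sso"
           trustStoreType: "SSO"
  customerUpdater:
     customerUpdaterList:
        - schemaNumber: "1"
          oracleQueueConnectionConfiguration:
             dbSSLEnabled: "true"
             dbSSLType: "oneway"
             sslServerCertDN: "DC=local,DC=oracle,CN=pindb"
             trustStoreLocation: "/home/charging/ext/brm_ssl_db_wallet/schema1/cwallet.sso"
             trustStoreType: "SSO"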

Note:

For database connectivity, ECE supports only the database service name and not the database service ID. Therefore, set the following keys to the database service name:

  • charging.connectionConfigurations.OraclePersistenceConnectionConfigurations.sid

  • customerUpdater.customerUpdaterList.oracleQueueConnectionConfiguration.sid

About Elastic Charging Engine Volume Mounts

Note:

You must use a provisioner that supports ReadWriteMany access so that volumes can be shared between pods.

The ECE container requires Kubernetes volume mounts for third-party libraries. The third-party volume mount shares the third-party libraries required by ECE from the host system with the container file system. For the list of third-party libraries to download, see "ECE Software Compatibility" in BRM Compatibility Matrix. Place the library files under the third-party volume mount.

The default configuration comes with a hostPath PersistentVolume. For more information, see "Configure a Pod to Use a PersistentVolume for Storage" in Kubernetes Tasks.

To use a different type of PersistentVolume, modify the oc-cn-ece-helm-chart/templates/ece-pvc.yaml file.
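
For example, with the default hostPath setup, the external PV and PVC keys from Table 10-1 might be overridden roughly as follows. The capacity and storage values are placeholders; size them for your environment:

  pv:
     external:
        name: "external-pv"
        hostpath: "/scratch/qa/ece_config/"
        accessModes: "ReadWriteMany"
        capacity: "10Gi"       # placeholder size
  pvc:
     external:
        name: "external-pvc"
        accessModes: "ReadWriteMany"
        storage: "10Gi"        # placeholder size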

Loading Custom Diameter AVP

To load custom Diameter AVPs into your ECE cloud native environment:

  1. Create a diameter directory inside external-pvc.

  2. Move the custom AVP file, such as dictionary_custom.xml, to the diameter directory.

  3. If you need to load a custom AVP after ECE is set up, restart the diametergateway pod by doing the following:

    1. Increment the diametergateway.diametergatewayList.restartCount key by 1.

    2. Run the helm upgrade command to update the release.
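
    For example, a sketch of the step 3a change in the override-values.yaml file. The member name is illustrative; update the entry that already exists in your diametergatewayList:

    diametergateway:
       diametergatewayList:
          - coherenceMemberName: "diametergateway1"   # illustrative member name
            restartCount: "1"                         # incremented from the previous value

    Then run the helm upgrade command, as in step 3b, so that the diametergateway pod restarts and loads the custom AVP file.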

Generating CDRs for Unrated Events

By default, the httpgateway pod sends all 5G usage requests to the ecs pod for online and offline charging.

You can configure httpgateway to convert some 5G usage requests into call detail record (CDR) files based on the charging type. You can then send the CDR files to roaming partners, a data warehousing system, or legacy billing systems for rating. For more information, see "About Generating CDRs" in ECE Implementing Charging.

You use the following to generate CDRs:

  • httpgateway pod

  • cdrgateway pod

  • cdrFormatter pod

  • CDR database

The cdrgateway and cdrFormatter pods can be scaled together, with one each per schema, or independently of the schemas. For more information, see "Scaling the cdrgateway and cdrFormatter Pods".

For details about the CDR format, see "CHF-CDR Format" in ECE 5G CHF Protocol Implementation Conformance Statement.

To set up ECE cloud native to generate CDRs:

  1. Configure your httpgateway pod to do the following:

    • Generate CDRs (set cdrGenerationEnabled to true).

    • Route offline charging requests to the ecs pod for rating (set rateOfflineCDRinRealtime to true) or to the cdrgateway pod for generating CDRs (set rateOfflineCDRinRealtime to false).

    • Route online charging requests to the ecs pod for rating (set generateCDRsForOnlineRequests to false) or to the cdrgateway pod for generating CDRs (set generateCDRsForOnlineRequests to true).

  2. Configure the cdrgateway pod to connect to the CDR database and do the following:

    • Generate individual CDR records for each request (set individualCdr to true) or aggregate multiple requests into a CDR record based on trigger criteria (set individualCdr to false). For information about the trigger criteria, see "About Trigger Types" in ECE Implementing Charging.

    • Store CDR records in an Oracle NoSQL database (set isNoSQLConnection to true) or in an Oracle database (set isNoSQLConnection to false).

  3. Configure the cdrFormatter pod to do the following:

    • Retrieve batches of CDR records from the CDR database and pass them to a specified cdrFormatter plug-in for processing.

    • Purge processed CDR records that are older than a specified number of days (configured in retainDuration) from the CDR database.

    • Purge orphan CDR records from the CDR database.

      Orphan CDR records are incomplete ones that are older than a specified number of seconds (configured in cdrOrphanRecordCleanupAgeInSec). Orphan CDR records can be created when your ECE system goes down due to maintenance or failure.

  4. Configure the cdrFormatter plug-in to do the following:

    • Write a specified number of CDR records to each CDR file (set maxCdrCount to the maximum number).

    • Create JSON-formatted CDR files and then store them in your file system (set enableDiskPersistence to true) or send them to your Kafka messaging service (set enableKafkaIntegration to true).

To generate CDRs in ECE cloud native, you configure the following entries in your override-values.yaml file. This example configures:

  • httpgateway to route both online and offline charging requests to cdrgateway.

  • cdrgateway to aggregate multiple requests into a CDR record and then store it in an Oracle NoSQL database.

  • cdrFormatter to retrieve CDR records in batches of 2500 from the Oracle NoSQL database and then send them to the default plug-in module. Immediately after CDR records are retrieved, cdrFormatter purges them from the database. It also purges orphan records older than 200 seconds from the database.

  • The cdrFormatter plug-in to create CDR files with a maximum of 20000 CDR records and a .out file name extension, and then store them in your file system in the path /home/charging/cdr_input.

cdrFormatter:
 cdrFormatterList:
   - schemaNumber: "1"
     replicas: 1
     jvmGCOpts: "-XX:+UnlockExperimentalVMOptions  -XX:+AlwaysPreTouch -XX:G1RSetRegionEntries=2048 -XX:ParallelGCThreads=10 -XX:+ParallelRefProcEnabled    -XX:MetaspaceSize=100M -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+PrintAdaptiveSizePolicy -XX:-UseGCLogFileRotation -XX:+UseG1GC -XX:NumberOfGCLogFiles=99"
     jvmOpts: "-Xms16g -Xmx20g -Dece.metrics.http.service.enabled=true"
     cdrFormatterConfiguration:
       name: "cdrformatter1"
       primaryInstanceName: "cdrformatter1"
       partition: "1"
       noSQLConnectionName: "noSQLConnection"
       connectionName: "oraclePersistence1brm"
       threadPoolSize: "6"
       retainDuration: "0"
       ripeDuration: "60"
       checkPointInterval: "6"
       pluginPath: "ece-cdrformatter.jar"       
       pluginType: "oracle.communication.brm.charging.cdr.formatterplugin.internal.SampleCdrFormatterCustomPlugin"       
       pluginName: "cdrFormatterPlugin1"
       noSQLBatchSize: "2500"
        cdrOrphanRecordCleanupAgeInSec: "200"
 
cdrgateway:
  cdrgatewayList:
    - coherenceMemberName: "cdrgateway1"
      replicas: 6
      jvmGCOpts: "-XX:+UnlockExperimentalVMOptions  -XX:+AlwaysPreTouch -XX:G1RSetRegionEntries=2048 -XX:ParallelGCThreads=10 -XX:+ParallelRefProcEnabled    -XX:MetaspaceSize=100M -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+PrintAdaptiveSizePolicy -XX:-UseGCLogFileRotation -XX:+UseG1GC -XX:NumberOfGCLogFiles=99"
      jvmOpts: "-Xms6g -Xmx8g -Dece.metrics.http.service.enabled=true -DcdrServerCorePoolSize=64 -Dserver.sockets.metrics.bind-address=0.0.0.0 -Dece.metrics.http.port=19612"
      restartCount: "0"
      cdrGatewayConfiguration:
        partition: "1"
        noSQLConnectionName: "noSQLConnection"
        connectionName: "oraclePersistence1brm"
        threadPoolSize: "12"
        cdrPort: "8084"
        cdrHost: "ece-cdrgatewayservice"
        individualCdr: "false"
 
httpgateway:
   cdrGenerationEnabled: "true"
   cdrGenerationStandaloneMode: "true"
   rateOfflineCDRinRealtime: "false"   
   generateCDRsForOnlineRequests: "true"
   httpgatewayList:
      - coherenceMemberName: "httpgateway1"
        replicas: 8
        maxreplicas: 8
        jvmGCOpts: "-XX:+AlwaysPreTouch -XX:G1RSetRegionEntries=2048 -XX:ParallelGCThreads=10 -XX:+ParallelRefProcEnabled    -XX:MetaspaceSize=100M -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+PrintAdaptiveSizePolicy -XX:-UseGCLogFileRotation -XX:+UseG1GC -XX:NumberOfGCLogFiles=99"
        jvmOpts: "-Xms10g -Xmx14g -Djava.net.preferIPv4Addresses=true -Dece.metrics.http.service.enabled=true -Dserver.sockets.metrics.bind-address=0.0.0.0 -Dece.metrics.http.port=19612"
        httpGatewayConfiguration:
           name: "httpgateway1"
           processingThreadPoolSize: "200"
           processingQueueSize: "32768"
           kafkaBatchSize: "10"
 
   connectionConfigurations:
         OraclePersistenceConnectionConfigurations:
              retryCount: "1"
              retryInterval: "1"
              maxStmtCacheSize: "100"
              connectionWaitTimeout: "3000"
              timeoutConnectionCheckInterval: "3000"
              inactiveConnectionTimeout: "3000"
              databaseConnectionTimeout: "6000"
              persistenceInitialPoolSize: "4"
              persistenceMinPoolSize: "4"
              persistenceMaxPoolSize: "20"
              reloadInitialPoolSize: "0"
              reloadMinPoolSize: "0"
              reloadMaxPoolSize: "20"
              ratedEventFormatterInitialPoolSize: "6"
              ratedEventFormatterMinPoolSize: "6"
              ratedEventFormatterMaxPoolSize: "24"

charging:
   cdrFormatterPlugins:
     cdrFormatterPluginConfigurationList:
       cdrFormatterPluginConfiguration:
         name: "cdrFormatterPlugin1"
         tempDirectoryPath: "/tmp/tmp"
         doneDirectoryPath: "/home/charging/cdr_input"
         doneFileExtension: ".out"
         enableKafkaIntegration: "false"
         enableDiskPersistence: "true"
         maxCdrCount: "20000"

Scaling the cdrgateway and cdrFormatter Pods

To increase performance and throughput, you can scale the cdrgateway and cdrFormatter pods together, with one each per schema, or scale them independently of the schemas.

Figure 10-1 shows an example of scaled cdrgateway and cdrFormatter pods that have CDR storage in an Oracle Database. This example contains:

  • One cdrgateway multi-replica deployment for all ECE schemas. All cdrgateway replicas have a single CDR Gateway service acting as a front end to httpgateway.

  • One cdrFormatter single-replica deployment for each ECE schema. Each cdrFormatter reads persisted CDRs from its associated ECE schema.

httpgateway forwards CDR requests to cdrgateway replicas in round-robin fashion. In this example, cdrgateway replicas 1-0, 1-1, and 1-2 persist CDRs in schema 1 tables, and replicas 1-3, 1-4, and 1-5 persist CDRs in schema 2 tables.

Figure 10-1 Scaled Architecture with an Oracle Database



Figure 10-2 shows an example of scaled cdrgateway and cdrFormatter pods that have CDR storage in an Oracle NoSQL Database. This example contains:

  • One cdrgateway multi-replica deployment for all ECE schemas. All cdrgateway replicas have a single CDR Gateway service acting as a front end to the httpgateway.

  • One cdrFormatter single-replica deployment for each major key partition in the ECE schema. Each cdrFormatter reads persisted CDRs from its associated partition.

Figure 10-2 Scaled Architecture with a NoSQL Database



Configuring ECE to Support Prepaid Usage Overage

You can configure ECE cloud native to capture any overage amounts incurred by prepaid customers during an active session, which can help you prevent revenue leakage. If the network reports that the number of units used during a session is greater than a customer's available allowance, ECE cloud native charges the customer up to the available allowance. It then creates an overage record with information about the overage amount and sends it to the ECE Overage topic. You can create a custom solution for reprocessing the overage amount later on.

For example, assume a customer has a prepaid balance of 100 minutes but uses 130 minutes during a session. ECE cloud native would charge the customer for 100 minutes, create an overage record for the remaining 30 minutes of usage, and then write the overage record to the ECE Overage Kafka topic.

When prepaid usage overage is disabled, ECE cloud native charges the customer for the full amount of usage, regardless of the funds available in the customer's balance.

To configure ECE cloud native to support prepaid usage overage, do the following:

  • Ensure that ECE cloud native is connected to your Kafka Server

  • Enable ECE cloud native to support prepaid usage overage

  • Create an ECE Overage topic in your Kafka Server

To do so, set the following keys in your override-values.yaml file for oc-cn-ece-helm-chart:

  • charging.kafkaConfigurations.kafkaConfigurationList.*: Specify how to connect ECE to your Kafka Server.

  • charging.server.checkReservationOverImpact: Set this to true.

  • charging.kafkaConfigurations.kafkaConfigurationList.overageTopicName: Set this to the name of the Kafka topic where ECE will publish overage records.
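
For example, a sketch of these settings. The kafkaConfigurationList entry is shown with placeholder connection details (name, hostname) and a placeholder topic name; replace them with the values for your Kafka Server and keep the nesting that your chart's values.yaml defines:

  charging:
     server:
        checkReservationOverImpact: "true"
     kafkaConfigurations:
        kafkaConfigurationList:
           - name: "kafkaCluster1"              # placeholder connection detail
             hostname: "kafka-host:9092"        # placeholder; your Kafka broker list
             overageTopicName: "ECEOverage"     # placeholder topic name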

Recording Failed ECE Usage Requests

ECE cloud native may occasionally fail to process usage requests. For example, a data usage request could fail because a customer has insufficient funds. You can configure ECE cloud native to publish details about failed usage requests, such as the user ID and request payload, to the ECE failure topic in your Kafka server. Later on, you can reprocess the usage requests or view the failure details for analysis and reporting.

To configure ECE cloud native to record failed ECE usage requests:

  • Ensure that ECE cloud native is connected to your Kafka Server

  • Enable the recording of failed ECE usage requests

  • Create an ECE failure topic in your Kafka Server

To do so, set the following keys in your override-values.yaml file for oc-cn-ece-helm-chart:

  • charging.kafkaConfigurations.kafkaConfigurationList.*: Specify how to connect ECE to your Kafka Server.

  • charging.kafkaConfigurations.kafkaConfigurationList.persistFailedRequestsToKafkaTopic: Set this to true.

  • charging.kafkaConfigurations.kafkaConfigurationList.failureTopicName: Set this to the name of the topic that stores information about failed ECE usage requests.
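
For example, a sketch of these settings, with placeholder connection details and a placeholder topic name; keep the nesting that your chart's values.yaml defines:

  charging:
     kafkaConfigurations:
        kafkaConfigurationList:
           - name: "kafkaCluster1"                        # placeholder connection detail
             hostname: "kafka-host:9092"                  # placeholder; your Kafka broker list
             persistFailedRequestsToKafkaTopic: "true"
             failureTopicName: "ECEFailures"              # placeholder topic name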

Loading BRM Configuration XML Files

BRM is configured by using the pin_notify and payload_config_ece_sync.xml files. To ensure that the BRM pod can access these files for configuring the EAI Java Server (eai_js), they are exposed through the brm_config PVC within the pricingupdater pod. When new metadata is synchronized with ECE and the payload configuration file is updated, a new file is created in this location, which you can then access and configure in BRM.

For more information, see "Enabling Real-Time Synchronization of BRM and ECE Customer Data Updates" in ECE Implementing Charging.

Setting Up Notification Handling in ECE

You can configure ECE cloud native to send notifications to a client application or an external application during an online charging session. For example, ECE cloud native could send a notification when a customer has breached a credit threshold or when a customer needs to request reauthorization.

You can set up ECE cloud native to send notifications by using either Apache Kafka topics or Oracle WebLogic queues:

Creating an Apache Kafka Notification Topic

To create notification topics in Apache Kafka:

  1. Create these Kafka topics either in the Kafka entrypoint.sh script or after the Kafka pod is ready:

    • kafka.topicName: ECENotifications

    • kafka.suspenseTopicName: ECESuspenseQueue

  2. In the ZooKeeper runtime ConfigMap, set the ece-zookeeper-0.ece-zookeeper.ECENameSpace.svc.cluster.local key to the name of the Kafka Cluster.

  3. Set these Kafka and ZooKeeper-related environment variables appropriately:

    • KAFKA_PORT: Set this to the port number on which Apache Kafka is up and running.

    • KAFKA_HOST_NAME: Set this to the host name of the machine on which Apache Kafka is up and running. If your system contains multiple Kafka brokers, provide a comma-separated list of host names.

    • REPLICATION_FACTOR: Set this to the number of topic replications to create.

    • PARTITIONS: Set this to the total number of Kafka partitions to create in your topics. The recommended number to create is calculated as follows:

      [(Max Diameter Gateways * Max Peers Per Gateway) + (1 for BRM Gateway) + Internal Notifications]

    • TOPIC_NAME: Set this to ECENotifications. This is the name of the Kafka topic where ECE will publish notifications.

    • SUSPENSE_TOPIC_NAME: Set this to ECESuspenseQueue. This is the name of the Kafka topic where BRM publishes failed notifications so that they can be retried later.

    • ZK_CLUSTER: Set this to the name of your ZooKeeper cluster. This should match the value you set in step 2.

    • ZK_CLIENT_PORT: Set this to the port number on which ZooKeeper listens for client connections.

    • ZK_SERVER_PORT: Set this to the port number of the ZooKeeper server.

  4. Ensure that the Kafka and ZooKeeper pods are in a READY state.

  5. Set these keys in your override-values.yaml file for oc-cn-ece-helm-chart:

    • charging.server.kafkaEnabledForNotifications: Set this to true.

    • charging.server.kafkaConfigurations.name: Set this to the name of your ECE cluster.

    • charging.server.kafkaConfigurations.hostname: Set this to the host name of the machine on which Kafka is up and running.

    • charging.server.kafkaConfigurations.topicName: Set this to ECENotifications.

    • charging.server.kafkaConfigurations.suspenseTopicName: Set this to ECESuspenseQueue.

  6. Install the ECE cloud native service by entering this command from the helmcharts directory:

    helm install EceReleaseName oc-cn-ece-helm-chart --namespace BrmNameSpace --values OverrideValuesFile

The notification topics are created in Apache Kafka.
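
For reference, the step 5 keys might look like this in the override-values.yaml file. The cluster name and host name are placeholders for your environment:

  charging:
     server:
        kafkaEnabledForNotifications: "true"
        kafkaConfigurations:
           name: "eceCluster"                  # placeholder; the name of your ECE cluster
           hostname: "kafka-host:9092"         # placeholder; the host running Kafka
           topicName: "ECENotifications"
           suspenseTopicName: "ECESuspenseQueue"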

Creating an Oracle WebLogic Notification Queue

To create notification queues and topics in Oracle WebLogic:

  1. Ensure the following:

    • Oracle WebLogic is running in your Kubernetes cluster.

    • The ECE domain has already been created.

    • The following third-party libraries are in the 3rdparty_jars directory inside external-pvc:

      • com.oracle.weblogic.beangen.general.api.jar

      • wlthint3client.jar

    • For SSL-enabled WebLogic in a disaster recovery environment, move a common JKS certificate file for all sites to the ece_ssl_keystore directory inside external-pvc.

  2. Create an override-values.yaml file for oc-cn-ece-helm-chart.

  3. Set the following keys in your override-values.yaml file:

    • Set the secretEnv.JMSQUEUEPASSWORD key to the WebLogic user password.

    • If WebLogic SSL is enabled, set the secretEnv.NOTIFYEVENTKEYPASS key to the KeyStore password.

    • Set the job.jmsconfig.runjob key to true.

    • If the job needs to create the ECE JMS module and subdeployment, set the job.jmsconfig.preCreateJmsServerAndModule key to true.

    • Set the charging.server.weblogic.jmsmodule key to ECE.

    • Set the charging.server.weblogic.subdeployment key to ECEQueue.

    • Set the charging.server.kafkaEnabledForNotifications key to false.

    • In the JMSConfiguration section, set the HostName, Port, Protocol, ConnectionURL, and KeyStoreLocation keys to the appropriate values for your system.

    For more information about these keys, see Table 10-1.

  4. Copy the SSL certificate file (client.jks) to the ece_ssl_keystore directory in the external PVC.

  5. Install the ECE cloud native service by entering this command from the helmcharts directory:

    helm install EceReleaseName oc-cn-ece-helm-chart --namespace BrmNameSpace --values OverrideValuesFile
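
For reference, a sketch of the step 3 keys in the override-values.yaml file. The passwords are placeholders, boolean values are shown quoted to match the other examples in this chapter, and the JMSConfiguration section keys (HostName, Port, Protocol, ConnectionURL, and KeyStoreLocation) must also be set to match your WebLogic environment:

  secretEnv:
     JMSQUEUEPASSWORD: WebLogicUserPassword       # placeholder
     NOTIFYEVENTKEYPASS: KeyStorePassword         # placeholder; only if WebLogic SSL is enabled
  job:
     jmsconfig:
        runjob: "true"
        preCreateJmsServerAndModule: "true"       # only if the job must create the ECE JMS module and subdeployment
  charging:
     server:
        kafkaEnabledForNotifications: "false"
        weblogic:
           jmsmodule: "ECE"
           subdeployment: "ECEQueue"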

The following are created in the ECE domain of your WebLogic Server:

  • A WebLogic notification topic named NotificationTopic.

  • A WebLogic notification queue named SuspenseQueue.

  • A WebLogic connection factory named NotificationFactory.

Next, configure the connection factory resource so your clients can connect to the ECE notification queues and topics in Oracle WebLogic.

To configure the connection factory resource:

  1. On the WebLogic Server in which the JMS ECE notification queue resides, sign in to WebLogic Server Administration Console.

  2. In the Domain Structure tree, expand Services, expand Messaging, and then click JMS Modules.

    The Summary of JMS Modules page appears.

  3. In the JMS Modules table, click on the name ECE.

    The Settings for ECE page appears.

  4. In the Summary of Resources table, click on the name NotificationFactory.

    The Settings for NotificationFactory page appears.

  5. Click the Configuration tab, and then click the Client tab.

  6. On the Client page, do the following:

    1. In Client ID Policy, select Unrestricted.

    2. In Subscription Sharing Policy, select Sharable.

    3. In Reconnect Policy, select None.

    4. Click Save.

  7. Click the Transactions tab.

  8. On the Transactions page, do the following:

    1. In Transaction Timeout, enter 2147483647, which is the maximum timeout value.

    2. Click Save.

For more information, see Oracle WebLogic Administration Console Online Help.

Configuring ECE for a Multischema BRM Environment

If your BRM database contains multiple schemas, you must configure ECE to connect to each schema.

To configure ECE for a BRM multischema database:

  1. Open your override-values.yaml file for the oc-cn-ece-helm-chart chart.

  2. Specify the password for accessing each schema in the BRM database. To do so, configure these keys for each schema:

    • secretEnv.BRMDATABASEPASSWORD.schema: Set this to the schema number. Enter 1 for the primary schema, 2 for the secondary schema, and so on.

    • secretEnv.BRMDATABASEPASSWORD.PASSWORD: Set this to the schema password.

    This shows example settings for two schemas:

    secretEnv:
       BRMDATABASEPASSWORD:   
          - schema: 1     
            PASSWORD: Password   
          - schema: 2     
            PASSWORD: Password
  3. Configure a customerUpdater pod for each schema. To do so, add a -schemaNumber list for each schema. In the list:

    • Set the schemaNumber key to 1 for the primary schema, 2 for the secondary schema, and so on.

    • Set the amtAckQueueName key to the fully qualified name of the acknowledgment queue on which the pin_amt utility listens for Account Migration Manager (AMM)-related acknowledgment events. The value is in the format primarySchema.ECE_AMT_ACK_QUEUE, where primarySchema is the name of the primary schema.
    • Set the hostName and jdbcUrl keys to their corresponding values for each schema.

    This shows example settings for two schemas:

    customerUpdater:
       customerUpdaterList:
          - schemaNumber: "1"
            coherenceMemberName: "customerupdater1"
            replicas: 1
            jmxEnabled: true
            coherencePort: ""
            jvmGCOpts: ""
            jvmJMXOpts: ""
            jvmCoherenceOpts: ""
            jvmOpts: ""
            jmxport: ""
            restartCount: "0"
            oracleQueueConnectionConfiguration:
               name: "customerupdater1"
               gatewayName: "customerupdater1"
               hostName: ""
               port: "1521"
               sid: "pindb"
               userName: "pin"
               jdbcUrl: ""
               queueName: "IFW_SYNC_QUEUE"
               suspenseQueueName: "ECE_SUSPENSE_QUEUE"
               ackQueueName: "ECE_ACK_QUEUE"
               amtAckQueueName: "pin0101.ECE_AMT_ACK_QUEUE"
               batchSize: "1"
               dbTimeout: "900"
               retryCount: "10"
               retryInterval: "60"
               walletLocation: "/home/charging/wallet/ecewallet/"
     
          - schemaNumber: "2"
            coherenceMemberName: "customerupdater2"
            replicas: 1
            jmxEnabled: true
            coherencePort: ""
            jvmGCOpts: ""
            jvmJMXOpts: ""
            jvmCoherenceOpts: ""
            jvmOpts: ""
            jmxport: ""
            oracleQueueConnectionConfiguration:
               name: "customerupdater2"
               gatewayName: "customerupdater2"
               hostName: ""
               port: "1521"
               sid: "pindb"
               userName: "pin"
               jdbcUrl: ""
               queueName: "IFW_SYNC_QUEUE"
               suspenseQueueName: "ECE_SUSPENSE_QUEUE"
               ackQueueName: "ECE_ACK_QUEUE"
               amtAckQueueName: "pin0101.ECE_AMT_ACK_QUEUE"
               batchSize: "1"
               dbTimeout: "900"
               retryCount: "10"
               retryInterval: "60"
               walletLocation: "/home/charging/wallet/ecewallet/"
  4. Configure a ratedEventFormatter pod for processing rated events belonging to each BRM schema. To do so, add a -schemaNumber list for each schema. In the list, set the schemaNumber and partition keys to 1 for the primary schema, 2 for the secondary schema, and so on.

    This shows example settings for two schemas:

    ratedEventFormatter:
       ratedEventFormatterList:
          - schemaNumber: "1"
            replicas: 1
            coherenceMemberName: "ratedeventformatter1"
            jmxEnabled: true
            coherencePort:
            jvmGCOpts: ""
            jvmJMXOpts: ""
            jvmCoherenceOpts: ""
            jvmOpts: ""
            jmxport: ""
            restartCount: "0"
            ratedEventFormatterConfiguration:
               name: "ratedeventformatter1"
               primaryInstanceName: "ratedeventformatter1"
               partition: "1"
               noSQLConnectionName: "noSQLConnection"
               connectionName: "oraclePersistence1"
               threadPoolSize: "6"
               retainDuration: "0"
               ripeDuration: "60"
               checkPointInterval: "6"
               pluginPath: "ece-ratedeventformatter.jar"
               pluginType: "oracle.communication.brm.charging.ratedevent.formatterplugin.internal.BrmCdrPluginDirect"
               pluginName: "brmCdrPlugin1"
               noSQLBatchSize: "25"
     
          - schemaNumber: "2"
            replicas: 1
            coherenceMemberName: "ratedeventformatter2"
            jmxEnabled: true
            coherencePort:
            jvmGCOpts: ""
            jvmJMXOpts: ""
            jvmCoherenceOpts: ""
            jvmOpts: ""
            jmxport: ""
            ratedEventFormatterConfiguration:
               name: "ratedeventformatter2"
               primaryInstanceName: "ratedeventformatter2"
               partition: "2"
               noSQLConnectionName: "noSQLConnection"
               connectionName: "oraclePersistence1"
               threadPoolSize: "6"
               retainDuration: "0"
               ripeDuration: "60"
               checkPointInterval: "6"
               pluginPath: "ece-ratedeventformatter.jar"
               pluginType: "oracle.communication.brm.charging.ratedevent.formatterplugin.internal.BrmCdrPluginDirect"
               pluginName: "brmCdrPlugin1"
               noSQLBatchSize: "25"
  5. Save and close your override-values.yaml file for oc-cn-ece-helm-chart.

  6. In the oc-cn-ece-helm-chart/templates/charging-settings.yaml ConfigMap, add poidIdConfiguration in itemAssignmentConfig for each schema.

    This shows example settings for three schemas:

    <itemAssignmentConfig config-class="oracle.communication.brm.charging.appconfiguration.beans.item.ItemAssignmentConfig" itemAssignmentEnabled="true" delayToleranceIntervalInDays="0" poidPersistenceSafeCount="12000">
       <schemaConfigurationGroup config-class="java.util.ArrayList">
          <poidIdConfiguration config-class="oracle.communication.brm.charging.appconfiguration.beans.item.PoidIdConfiguration" schemaName="1" poidQuantity="2000000">
          </poidIdConfiguration>
          <poidIdConfiguration config-class="oracle.communication.brm.charging.appconfiguration.beans.item.PoidIdConfiguration" schemaName="2" poidQuantity="2000000">
          </poidIdConfiguration>
          <poidIdConfiguration config-class="oracle.communication.brm.charging.appconfiguration.beans.item.PoidIdConfiguration" schemaName="3" poidQuantity="2000000">
          </poidIdConfiguration>
       </schemaConfigurationGroup>
    </itemAssignmentConfig>

After you deploy oc-cn-ece-helm-chart, as described in "Deploying BRM Cloud Native Services", the ECE pods are connected to your BRM database schemas.