1 New Cloud Native Features

Learn about the new features in the Oracle Communications Billing and Revenue Management (BRM) cloud native deployment option.

Topics in this document:

New Features in Cloud Native 15.2

New Features in Cloud Native 15.1

New Features in Cloud Native 15.0.1

New Features in Cloud Native 15.0.0

BRM cloud native 15.2 includes the following enhancements:

Rolling Back Your BRM Cloud Native Database Upgrades

You can now use the rollback feature to revert your BRM cloud native database and supported components to their previous stable state if issues occur during or after an upgrade. For example, if you upgrade your database from BRM 12.0.0.8.0 to BRM 15.2 and encounter problems, the rollback feature allows you to restore it to BRM 12.0.0.8.0.

Rollback is supported only to the version that was in place immediately before the upgrade. For instance, after an upgrade from BRM 15.0 to 15.2, you can roll back only to 15.0; rolling back to intermediate or unrelated versions is not supported. Review all prerequisites before beginning a rollback, such as not changing system keys and not adding secondary schemas until the rollback has completed.

There are several limitations:

  • Rollback is supported only from BRM 15.2 or later. Upgrades to earlier versions cannot be rolled back.

  • The rollback target (the version you revert to) must be BRM 12.0.0.4.0 or later.

  • Rollback can only be performed immediately after an upgrade.

  • Full BRM installations are not eligible for rollback; these require reinstallation to revert to an earlier version.

After successfully rolling back the BRM database, you can resume using the previous BRM release or perform another upgrade to 15.2.

For more information, see "Rolling Back Your BRM Database Upgrade" in BRM Cloud Native Deployment Guide.

Support for SFTP-Based Batch Payment Processing for Paymentech Integration

BRM cloud native now supports SFTP-based transport for batch payment processing with Paymentech to align with PCI 4.0 compliance requirements. The Paymentech Data Manager (dm_fusa) pod can connect to Paymentech using SFTP for batch transactions, while online (real-time) transactions continue to use TCP/IP.

SFTP-based transport supplements the existing TCP/IP mechanism for batch payments, giving you the flexibility to use either method based on your security and operational requirements. Secure file transfers are enabled through public key authentication, and new configuration options let you specify SFTP servers, directories, and connection properties.

For more information, see "Setting Up Payment Processing with Paymentech" in BRM Cloud Native System Administrator's Guide.

Enhanced Pod Customization with addOnPodSpec

BRM cloud native now lets you easily tailor Kubernetes pod specifications through a single configuration setting, without the need to edit Helm templates or wait for new product releases. By specifying custom options under the addOnPodSpec key in your override-values.yaml file, you can quickly introduce new pod features, apply advanced security controls, or refine deployment details. Any settings you configure here automatically override standard values when you deploy or upgrade using Helm. This provides flexible, maintainable configuration for a wide variety of use cases, such as enforcing node constraints or improving pod placement across your cluster.
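As a sketch, such an override might look like the following. The component path "ocbrm.cm" is an assumption for illustration; only the addOnPodSpec key itself is described here, and its contents are standard Kubernetes pod-spec fields:

```yaml
# override-values.yaml (illustrative; consult the referenced guide for the
# exact location of addOnPodSpec for each component)
ocbrm:
  cm:
    addOnPodSpec:
      securityContext:          # example: tighten pod-level security
        runAsNonRoot: true
      tolerations:              # example: allow scheduling onto tainted nodes
        - key: dedicated
          operator: Equal
          value: brm
          effect: NoSchedule
```

Anything placed under addOnPodSpec is merged into the generated pod specification during a Helm install or upgrade, overriding the chart's standard values.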

For more information, see "About Customizing and Extending Pods" in BRM Cloud Native System Administrator's Guide.

Expanded Support for External Kubernetes Secrets

Additional BRM cloud native components now support external Kubernetes Secrets, including:

  • brm-apps jobs

  • Event Stream Processor

  • Kafka DM

  • LDAP DM

  • Paymentech DM

  • Standalone Web Services Manager

  • Taxation Gateway

  • Vertex DM

External Kubernetes Secrets allow you to pre-create BRM KeyStore certificates as Secrets in the Kubernetes cluster, which eases maintenance because it allows you to replace KeyStore certificates without updating the values.yaml files and performing a Helm install or upgrade.

For more information, see "About Using External Kubernetes Secrets" in BRM Cloud Native System Administrator's Guide.

BRM Cloud Native Supports Labels and Annotations

You can now add and manage Kubernetes labels and annotations on BRM resources deployed with Helm charts. This enhancement streamlines integration with cluster management tools and enables advanced operations, such as automated webhooks or sidecar injection, without modifying chart templates. You can apply labels and annotations at the resource, kind, or global chart level. The system automatically adds standard labels to all resources, and you can extend metadata using your override-values.yaml file.
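For illustration, a global chart-level override might look like this. The exact key layout for labels and annotations is an assumption; see the referenced section for the documented structure:

```yaml
# override-values.yaml (hypothetical key layout for global chart-level metadata)
global:
  labels:
    environment: production          # propagated to all chart resources
  annotations:
    sidecar.istio.io/inject: "true"  # example: trigger sidecar injection
```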

For more information, see "Managing Labels and Annotations" in BRM Cloud Native System Administrator's Guide.

Enhanced Pod Scheduling Control with nodeSelector and affinity

BRM cloud native now supports advanced pod scheduling through the nodeSelector and affinity keys. You can control where BRM pods run in your Kubernetes cluster by specifying node labels or defining affinity and anti-affinity rules in your override-values.yaml file. This allows you to isolate critical workloads on dedicated nodes, optimize use of specialized hardware such as SSDs or GPUs, and coordinate placement of related pods for operational or compliance requirements.
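A sketch of both mechanisms follows. The component path "ocbrm.cm" is an assumption; the nodeSelector and affinity bodies use standard Kubernetes scheduling syntax:

```yaml
# override-values.yaml (illustrative; "ocbrm.cm" is a hypothetical path)
ocbrm:
  cm:
    nodeSelector:
      disktype: ssd                  # run only on nodes labeled disktype=ssd
    affinity:
      podAntiAffinity:               # spread replicas across nodes
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: cm
            topologyKey: kubernetes.io/hostname
```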

For more information, see "Assigning Pods to Nodes Using nodeSelector and affinity" in BRM Cloud Native System Administrator's Guide.

Enhanced ECE Alert Configuration Management

This release adds new options for configuring and managing ECE alert rules in cloud native environments. Alert configuration management includes the following enhancements:

  • Alert Configuration Template and Documentation: The ECE Docker archive now includes an Alert Configuration Template with sample alert rules and Grafana dashboards. The template provides example configurations for monitoring ECE system health, JVM, Coherence, Kubernetes, and all main gateways.

  • Prometheus and Alertmanager Integration: The ECE metrics endpoint outputs data in the OpenMetrics (Prometheus) format. You can use Prometheus or Prometheus Operator (Kubernetes) to monitor metrics and apply alert rules. Prometheus Alertmanager distributes alerts to external channels such as email, Slack, or PagerDuty.

  • Customizable Rules and Thresholds: Edit the eceAlertRules.yaml file to update alert logic, thresholds, or durations. Documentation provides instructions for creating, editing, and deploying alert rules. Oracle recommends testing configuration changes in a non-production environment before deployment.

  • Grafana Dashboard Integration: This release includes sample Grafana dashboards. You can link these to Prometheus for metric and alert visualization across ECE components.

  • Best Practices: Validate rule syntax using Prometheus tools before deployment. Use version control for rule files and document custom thresholds. Adjust templates to match your environment and requirements.
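For reference, alert rules in eceAlertRules.yaml follow the standard Prometheus rule format. A minimal sketch is shown below; the metric name and threshold are hypothetical examples, not the shipped defaults:

```yaml
groups:
  - name: ece-health
    rules:
      - alert: EceJvmHeapHigh
        expr: jvm_memory_used_bytes{area="heap"} / jvm_memory_max_bytes{area="heap"} > 0.9
        for: 5m                       # condition must hold for 5 minutes
        labels:
          severity: warning
        annotations:
          summary: "ECE JVM heap usage above 90 percent"
```

You can validate a rule file of this form with the Prometheus promtool utility before deploying it, as recommended in the Best Practices item above.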

For more information on setup steps and details, see "Monitoring ECE in a Cloud Native Environment" and "Setting Up Alerts with the ECE Alert Configuration Template" in the BRM Cloud Native System Administrator's Guide.

New Features in Cloud Native 15.1

BRM cloud native 15.1 includes the following enhancements:

BRM Cloud Native Pods Now Use Dynamic Volume Provisioning

BRM cloud native pods now use dynamic volume provisioning by default. However, you can modify one or more pods to use static volumes instead to meet your business requirements. To do so, add the createOption key to the override-values.yaml file for each pod for which you want to use static volumes and then redeploy your Helm charts.
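As a sketch (both the pod path and the value shown for createOption are assumptions for illustration; see "Using Static Volumes" for the supported settings):

```yaml
# override-values.yaml (illustrative; the component path and the value of
# createOption are hypothetical)
ocbrm:
  cm:
    createOption: static   # hypothetical value selecting static volumes
```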

For information about changing dynamic volume provisioning to static volume provisioning, see "Using Static Volumes" in BRM Cloud Native System Administrator's Guide.

Default Service Type Changes in BRM Cloud Native

The default service types of several BRM cloud native services have been changed.

Table 1-1 lists the cloud native services that are now deployed as ClusterIP by default, but were deployed as NodePort in previous releases.

Table 1-1 Services Now Deployed as ClusterIP

  • brm-rest-services-manager: BRM REST Services Manager now deploys this service as ClusterIP by default. To deploy it as NodePort instead, set the ocrsm.rsm.service.type key to NodePort in your override-values.yaml file for oc-cn-helm-chart.

  • pdcrsm: This service is now deployed as ClusterIP by default. To deploy it as NodePort instead, set the ocpdcrsm.service.type key to NodePort in your override-values.yaml file for oc-cn-helm-chart.

  • wsm-tomcat: This service is now deployed as ClusterIP by default. To deploy it as NodePort instead, set the ocbrm.wsm.deployment.tomcat.service.type key to NodePort in your override-values.yaml file for oc-cn-helm-chart.
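Expressed in YAML, the overrides in Table 1-1 map to the following override-values.yaml fragment for oc-cn-helm-chart. These are the dotted keys from the table expanded into nested form; set only the ones you need:

```yaml
# override-values.yaml for oc-cn-helm-chart: restore the NodePort behavior
# for the services listed in Table 1-1
ocrsm:
  rsm:
    service:
      type: NodePort
ocpdcrsm:
  service:
    type: NodePort
ocbrm:
  wsm:
    deployment:
      tomcat:
        service:
          type: NodePort
```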

Table 1-2 lists the NodePort services that are no longer created by default. In each case, you can use the corresponding ClusterIP service instead.

Table 1-2 NodePort Services No Longer Created By Default

  • bcws-domain-admin-server-ext: Billing Care REST API no longer creates this NodePort external service by default. You can use the ClusterIP service (bcws-domain-admin-server) instead. To re-create the NodePort service, set the ocbc.bcws.wop.adminChannelPort key to the NodePort where the admin server's HTTP service will be accessible, in your override-values.yaml file for oc-cn-helm-chart.

  • billingcare-domain-admin-server-ext: Billing Care no longer creates this NodePort external service by default. You can use the ClusterIP service (billingcare-domain-admin-server) instead. To re-create the NodePort service, set the ocbc.bc.wop.adminChannelPort key to the NodePort where the admin server's HTTP service will be accessible, in your override-values.yaml file for oc-cn-helm-chart.

  • boc-domain-admin-server-ext: Business Operations Center no longer creates this NodePort external service by default. You can use the ClusterIP service (boc-domain-admin-server) instead. To re-create the NodePort service, set the ocboc.boc.wop.adminChannelPort key to the NodePort where the admin server's HTTP service will be accessible, in your override-values.yaml file for oc-cn-helm-chart.

  • brmdomain-admin-server-ext: BRM Web Services Manager no longer creates this NodePort external service by default. You can use the ClusterIP service (brmdomain-admin-server) instead. To re-create the NodePort service, set the ocbrm.wsm.deployment.weblogic.adminServerNodePort key to the NodePort where the admin server's HTTP service will be accessible, in your override-values.yaml file for oc-cn-helm-chart.

  • pcc-domain-admin-server-ext: Pipeline Configuration Center no longer creates this NodePort external service by default. You can use the ClusterIP service (pcc-domain-admin-server) instead. To re-create the NodePort service, set the ocpcc.pcc.wop.adminChannelPort key to the NodePort where the admin server's HTTP service will be accessible, in your override-values.yaml file for oc-cn-helm-chart.

  • pdc-service: PDC no longer creates this NodePort external service by default. To re-create it, set the ocpdc.service.type key to NodePort in your override-values.yaml file for oc-cn-op-job-helm-chart.

All Cloud Native Containers Now Support Requests and Limits

All BRM cloud native containers now support the setting of minimum and maximum CPU and memory values. This feature helps prevent containers from consuming too many resources, which can lead to system crashes.
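As an illustration, a resources block in override-values.yaml might look like this. The component path "ocbrm.cm" is an assumption; the requests/limits block itself follows standard Kubernetes syntax:

```yaml
# override-values.yaml (sketch; "ocbrm.cm" is a hypothetical component path)
ocbrm:
  cm:
    resources:
      requests:
        cpu: "500m"    # minimum CPU reserved for the container
        memory: "1Gi"  # minimum memory reserved
      limits:
        cpu: "2"       # maximum CPU the container may consume
        memory: "4Gi"  # maximum memory; exceeding it gets the container killed
```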

For more information, see "Setting Minimum and Maximum CPU and Memory Values" in BRM Cloud Native System Administrator's Guide.

BRM Cloud Native Now Supports Using External Kubernetes Secrets

You can now create BRM cloud native KeyStore certificates as Kubernetes Secrets in two different ways:

  • Pre-create the BRM cloud native KeyStore certificates as Secrets in the Kubernetes cluster. Pre-creating the Kubernetes Secrets eases maintenance because it allows you to replace KeyStore certificates without updating the values.yaml files and performing a Helm install or upgrade.

  • Have the BRM cloud native installer create the Kubernetes Secrets for you. In this case, you store the KeyStore certificates in the cloud native Helm charts, and the installer creates them as Secrets in the Kubernetes cluster during the Helm install or upgrade process.

In previous releases, you could only store the certificates in the Helm charts.

For more information, see "About Using External Kubernetes Secrets" in BRM Cloud Native System Administrator's Guide.

Improved Processing of Realtime Pipeline Semaphore Files

BRM cloud native now processes semaphore files more efficiently when your deployment contains multiple realtime-pipe pod replicas. Semaphore files are now processed by one replica at a time. For example, when a semaphore file is placed in the common-semaphore PVC directory, one realtime-pipe replica picks up and locks the file. After it finishes processing the file, the replica unlocks it so the next replica can pick up, lock, and process it. This continues until all replicas have processed the semaphore file.

In previous releases, all realtime-pipe replicas processed the semaphore file in parallel.

You enable realtime-pipe to process semaphore files one replica at a time by setting the TimeBasedChecking key to true in your wirelessrealtime-reg-config ConfigMap. By default, this key is set to true.
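As a sketch, the setting lives in the ConfigMap like this. The surrounding data layout is an assumption; only the ConfigMap name and the TimeBasedChecking key are documented here:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: wirelessrealtime-reg-config
data:
  TimeBasedChecking: "true"   # process semaphore files one replica at a time (default)
```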

The realtime-pipe pod uses Kubernetes liveness probes to monitor the container and automatically restart it when problems occur. You configure how often the liveness probe checks the container and when to trigger restarts using the following keys in your oc-cn-helm-chart/templates/realtime_pipeline.yaml file:

  • LivenessProbe.initialDelaySeconds: Specifies how long to wait in seconds before performing the first liveness probe. Ensure this value is equal to or longer than the semaphore processing time. The default is 10.

  • LivenessProbe.periodSeconds: Specifies the interval in seconds between performing liveness probes. Ensure this value is equal to or longer than the semaphore processing time. The default is 10.

  • LivenessProbe.failureThreshold: Specifies the number of times the liveness probe can fail before triggering a container restart. The default is 2.

Note:

If you set periodSeconds to a value less than the semaphore processing time, you must set failureThreshold to a higher value to prevent unnecessary restarts of the realtime-pipe pod.
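Put together, the keys above with their default values look like this in realtime_pipeline.yaml (illustrative fragment; only the three LivenessProbe keys are documented here):

```yaml
# oc-cn-helm-chart/templates/realtime_pipeline.yaml (fragment)
LivenessProbe:
  initialDelaySeconds: 10   # at least the semaphore processing time
  periodSeconds: 10         # probe interval; at least the semaphore processing time
  failureThreshold: 2       # raise this if periodSeconds is shorter than processing time
```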

ECE Cloud Native Now Allows Configuration of Journal Space

You can now control the amount of space that the Oracle Coherence Elastic Data journals use in ECE cloud native deployments. By default, ECE cloud native creates journal space sized for small-to-medium deployments of up to 20,000 transactions per second (TPS). For larger deployments, you may need to increase the journal space.

For more information, see "Managing ECE Journal Storage" in BRM Cloud Native System Administrator's Guide.

oc-cn-init-db-helm-chart Can Now Configure the Database For You

The oc-cn-init-db-helm-chart can now automatically configure your BRM database for demonstration or development systems. To do so, you must set the following keys in your override-values.yaml file before deploying the Helm chart:

  • db.user: The user name of the system administrator.

  • db.password: The password for the system administrator.

  • db.port: The port number of the database server.
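For example (documented keys with placeholder values; substitute your own administrator credentials and listener port):

```yaml
# override-values.yaml for oc-cn-init-db-helm-chart (placeholder values)
db:
  user: system              # database system administrator
  password: placeholderPwd  # replace with the administrator password
  port: 1521                # database listener port
```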

Note:

For production systems, your database administrator must create the BRM database manually.

For more information, see "Deploying BRM with a New Database Schema" in BRM Cloud Native Deployment Guide.

New Features in Cloud Native 15.0.1

BRM cloud native 15.0.1 includes the following enhancements:

PDC Deployment Creates Default PDC Groups

When you deploy the oc-cn-op-job-helm-chart Helm chart to create the PDC WebLogic domain, it now creates the following PDC groups:

  • PricingDesignAdmin: This group's users have administrative privileges on PDC. They can perform operations on all PDC UI screens, pricing components, and setup components.

  • PricingAnalyst: This group's users have administrative privileges for pricing components and view-only privileges for setup components.

  • PricingReviewer: This group's users have view-only privileges for all pricing and setup components.

When you create PDC users, you add them to one of these groups to control their access to PDC operations. For example, to create a user who can perform all operations in PDC, you can configure these new values.yaml keys for the oc-cn-op-job-helm-chart Helm chart:

ocpdc:
   wop:
      users:
         name: JohnDoeAdmin
         description: New Pricing Administrator
         password: EncodedPassword
         groups: PricingDesignAdmin

For information, see "Creating PDC Users" in BRM Cloud Native System Administrator's Guide.

PCC Now Supports WebLogic Kubernetes Operator

PCC now supports WebLogic Kubernetes Operator for its cloud native deployment.

For more information about configuring and testing PCC's integration with WebLogic Kubernetes Operator in cloud native environment (CNE) deployments, see "Configuring Pipeline Configuration Center" in BRM Cloud Native Deployment Guide.

Cloud Native Documentation Contains New Instructions

The BRM cloud native documentation has been updated to include instructions for doing the following:

Note:

These instructions apply to Release 15.0.0 or later.

New Features in Cloud Native 15.0.0

BRM cloud native 15.0.0 includes the following enhancements:

Images Now Available on Oracle Container Registry

BRM cloud native images are now available on Oracle Container Registry (https://container-registry.oracle.com).

For more information, see "Pulling BRM Images from the Oracle Container Registry" in BRM Cloud Native Deployment Guide.

BRM Cloud Native Enhancements

BRM cloud native includes the following enhancements:

  • BRM cloud native now supports Podman for building container images.

  • The real-time and batch rating engines now support SSL-enabled communication with BRM.

  • BRM cloud native now supports the BRM SDK, which allows you to make and compile customizations for your BRM system.

For more information, see BRM Cloud Native Deployment Guide.

Changing ECE Configuration During Runtime

You can now use Kubernetes jobs to do the following during ECE runtime without requiring you to restart ECE pods:

  • Reload configuration information from the charging-settings.xml file into ECE cache

  • Change grid-level logging options

For more information, see "Changing the ECE Configuration During Runtime" in BRM Cloud Native System Administrator's Guide.

PDC Cloud Native Deployment Enhancements

PDC cloud native includes the following enhancements:

  • Deploying and configuring Pricing Design Center cloud native services no longer requires manual post-installation tasks. These tasks are now automated in the PDC cloud native deployment process.

  • PDC cloud native images have been decoupled from Fusion Middleware images, reducing the overall size of the PDC images. To use PDC cloud native, you now download the Fusion Middleware images and provide their location in your PDC override-values.yaml file.

For more information, see "Configuring Pricing Design Center" in BRM Cloud Native Deployment Guide.