4 Setting Up Prerequisite Software

Learn about prerequisite tasks to perform before installing the Oracle Communications Billing and Revenue Management (BRM) cloud native deployment package, such as installing Podman and Helm.

Topics in this document:

  • BRM Cloud Native Prerequisite Tasks

  • Software Compatibility

  • Creating a Kubernetes Cluster

  • Installing Podman

  • Installing Helm

  • Creating and Configuring Your BRM Database

  • Installing an External Provisioner

  • Installing WebLogic Kubernetes Operator

  • Installing an Ingress Controller

  • Setting Up ECE Cloud Native Ingress and Egress Flows

Caution:

Oracle does not provide support for any prerequisite third-party software installation or configuration. The customer must handle any installation or configuration issues related to non-Oracle prerequisite software.

BRM Cloud Native Prerequisite Tasks

As part of preparing your environment for BRM cloud native, you choose, install, and set up various components and services in ways that are best suited for your cloud native environment. The following shows the high-level prerequisite tasks for BRM cloud native:

  1. Ensure you have downloaded the latest supported software compatible with BRM cloud native.

  2. Create a Kubernetes cluster.

  3. Install Podman and a container runtime supported by Kubernetes.

  4. Install Helm.

  5. Create and configure a BRM database.

  6. Install and configure an external provisioner.

  7. If you plan to deploy Pricing Design Center (PDC), Billing Care, the Billing Care REST API, Web Services Manager, or Business Operations Center:

    • Install and configure WebLogic Kubernetes Operator.

    • Install an ingress controller.

  8. If you plan to deploy Elastic Charging Engine (ECE), install and set up an ingress controller and an egress controller.

  9. If you plan to deploy the Billing Care REST API or the BRM REST Services Manager API, install Oracle Access Management. See the "Install Oracle Access Management 12c" tutorial for installation instructions.

  10. If you plan to integrate your BRM cloud native deployment with a Kafka Server, install the Apache Kafka software. See "Apache Kafka Quickstart" on the Apache Kafka website for installation instructions.

  11. If you plan to integrate your BRM cloud native deployment with Oracle Analytics Publisher, install Oracle Analytics Publisher. See "Installing the Oracle Analytics Server Software" in Oracle Analytics Installing and Configuring Oracle Analytics Server for installation instructions.

    Note:

    The Oracle Analytics Publisher software was previously named Oracle Business Intelligence (BI) Publisher.

Prepare your environment with these technologies installed, configured, and tuned for performance, networking, security, and high availability. Make sure backup nodes are available in case of system failure in any of the cluster's active nodes.

The following sections provide more information about the required components and services, the options you can choose from, and how you must set them up for your BRM cloud native environment.

Software Compatibility

To run, manage, and monitor your BRM cloud native deployment, ensure you use the latest versions of all compatible software. See "BRM Cloud Native Deployment Software Compatibility" in BRM Compatibility Matrix.

Creating a Kubernetes Cluster

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It groups containers into logical units for easy management and discovery. When you deploy Kubernetes, you get a cluster of machines called nodes. A reliable cluster must have multiple worker nodes spread over separate physical infrastructure, and a very reliable cluster must also have multiple primary nodes spread over separate physical infrastructure.

Figure 4-1 illustrates the Kubernetes cluster and the components that it interacts with.

Figure 4-1 Overview of the Kubernetes Cluster



Set up a Kubernetes cluster for your BRM cloud native deployment, securing access to the cluster and its objects with the help of service accounts and proper authentication and authorization modules. Also, set up the following in your cluster:

  • Volumes: Volumes are directories that are accessible to the containers in a pod and provide a way to share data. The BRM cloud native deployment package uses persistent volumes for sharing data in and out of containers, but does not enforce any particular type. You can choose from the volume type options available in Kubernetes.

  • A networking model: Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Every pod gets its own IP address, so you do not need to explicitly create links between pods or map container ports to host ports. Several implementations are available that meet the fundamental requirements of Kubernetes’ networking model; choose one based on your cluster’s requirements.
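
For example, once the cluster is up, you can confirm that the worker nodes are Ready and that your storage layer can dynamically provision a shared volume. This is a minimal sketch; the claim name and the storage class are assumptions, so substitute the class your provisioner creates:

  kubectl get nodes -o wide

Then save the following as brm-shared-pvc.yaml and run kubectl apply -f brm-shared-pvc.yaml:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: brm-shared-pvc           # illustrative name
  spec:
    accessModes:
      - ReadWriteMany              # shared read-write access across pods
    storageClassName: nfs-client   # assumption: the class created by your provisioner
    resources:
      requests:
        storage: 10Gi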

For more information about Kubernetes, see "Kubernetes Concepts" in the Kubernetes documentation.

Installing Podman

You use the Podman platform to containerize BRM products. Install Podman if you plan to do either of the following:

  • Use the prebuilt images provided with the BRM cloud native deployment package.

  • Build your own BRM images by writing your own Dockerfiles using the sample Dockerfiles from the BRM cloud native deployment package.

You can use Podman or any container runtime that supports the Open Container Initiative (OCI) specification, provided that it is compatible with the Kubernetes version specified in "BRM Cloud Native Deployment Software Compatibility" in BRM Compatibility Matrix.
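
For example, the following Podman commands load a prebuilt image archive into local storage and push it to a registry that your cluster can pull from. This is a sketch; the archive name, image tag, and registry host are assumptions:

  # Load a prebuilt image shipped with the deployment package (archive name is illustrative)
  podman load -i oc-cn-brm.tar

  # Retag the image and push it to your registry (host and tag are illustrative)
  podman tag oc-cn-brm:15.0 registry.example.com/oc-cn-brm:15.0
  podman push registry.example.com/oc-cn-brm:15.0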

Installing Helm

Helm is a package manager that helps you install and maintain software on a Kubernetes system. In Helm, a package is called a chart, which consists of YAML files and templates rendered into Kubernetes manifest files. The BRM cloud native deployment package includes Helm charts that help create Kubernetes objects, such as ConfigMaps, Secrets, controller sets, and pods, with a single command.
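
For example, once Helm is installed (see the steps below), a single command renders a chart's templates and creates all of the resulting Kubernetes objects. The chart directory, namespace, and values file names here are illustrative:

  helm install brm ./oc-cn-helm-chart --namespace brm --values override-values.yaml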

The following shows sample steps for installing and validating Helm:

  1. Download the Helm software from https://github.com/helm/helm/releases.

    For the list of supported Helm versions, see "BRM Cloud Native Deployment Software Compatibility" in BRM Compatibility Matrix.

  2. Extract the Helm files from the archive:

    tar -zxvf helm-version-linux-amd64.tar.gz

    where version is the Helm version number.

  3. Find the helm binary in the unpacked directory and move it to your desired directory. For example:

    mv linux-amd64/helm /usr/local/bin/helm

  4. Check the version of Helm:

    helm version
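
    You should see output similar to the following, where version, commit, and goversion stand in for the values from your download:

    version.BuildInfo{Version:"vversion", GitCommit:"commit", GitTreeState:"clean", GoVersion:"goversion"}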

Helm uses the kubeconfig file of the user running the helm command to access the Kubernetes cluster. By default, this is $HOME/.kube/config. Helm inherits whatever cluster permissions that user has. If role-based access control (RBAC) is configured, you must grant sufficient cluster permissions to Helm users.
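
For example, on an RBAC-enabled development cluster you could bind a Helm user to the cluster-admin role. The user name here is an assumption, and production systems should use a narrower role:

  kubectl create clusterrolebinding helm-admin --clusterrole=cluster-admin --user=helm-user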

For more information about installing Helm, see "Installing Helm" in the Helm documentation.

Creating and Configuring Your BRM Database

You must install an Oracle database accessible through the Kubernetes network so BRM cloud native pods can perform database operations. The Oracle database you use can be:

  • On-premises, which can be physical, VM-based, or containerized

  • Cloud-based, such as Bare Metal, VM, containerized, or DBaaS on Oracle Cloud Infrastructure

You can use an existing BRM database or create a new one. See "BRM Software Compatibility" in BRM Compatibility Matrix for the latest supported database versions.
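
For example, before installing BRM you can check that pods can reach the database listener. This is a sketch; the host name is an assumption, and 1521 is the default Oracle listener port:

  kubectl run db-check --image=busybox --restart=Never --rm -it -- nc -zv brmdb.example.com 1521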

To create and configure a new BRM database:

  1. When you install and create your database, pay particular attention to the following requirements:

    • Install Oracle Enterprise Edition

    • Install the following Oracle components: Oracle XML DB, Oracle XML Developer's Kit (XDK), and Oracle JServer

    • To partition the tables in your BRM database, install the Oracle Partitioning component

    • Set the Character Set to AL32UTF8

    • Set the National Character Set to UTF8

  2. (Optional) Set up TLS authentication in the BRM database. See "Configuring Transport Layer Security Authentication" in Oracle Database Security Guide. Also, ensure that you:

    • Create a TLS certificate or obtain one from a certificate provider

    • Install the certificate in the Oracle Database Server

  3. Set your LD_LIBRARY_PATH environment variable to $ORACLE_HOME/lib.

  4. Configure your database manually or let the BRM installer configure it for you. Do one of the following:

    • Use the BRM installer to configure a demonstration database for you

      The BRM installer can automatically configure your database for demonstration or development systems. The BRM installer configures your database by:

      • Creating the following tablespaces: pin00 (for data), pinx00 (for indexes), and PINTEMP (for a temporary tablespace)

      • Creating a BRM user named pin

      • Granting connection privileges to the pin user

    • Configure a demonstration database manually

      You can configure your database manually so it contains additional or larger tablespaces. For more information, see "Configuring Your Database Manually for Demonstration Systems" in BRM Installation Guide.

    • Configure a production database manually

      For production systems, you must create multiple tablespaces for the BRM data and indexes. For information on estimating your database size, creating multiple tablespaces, and mapping the tablespaces to BRM tables, see "Planning Your Database Configuration" in BRM Installation Guide.

  5. Grant the BRM schema user select permission on the V$SESSION view. Because V$SESSION is a synonym for the underlying V_$SESSION view, Oracle requires the grant to be made on that view. To do so, connect to the Oracle database with SQL*Plus as the system user and then enter this command:

    SQL> GRANT SELECT ON V_$SESSION TO brmSchemaUser;

The installers for PDC, Billing Care, and all other products automatically create the tablespaces and users that are required for those products.
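
For example, you can verify the character set requirements and the V$SESSION grant by connecting as the schema user with SQL*Plus. This is a sketch; the user, password, and connect string are illustrative:

  sqlplus pin/password@BRMDB
  SQL> SELECT parameter, value FROM nls_database_parameters WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
  SQL> SELECT COUNT(*) FROM V$SESSION;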

Installing an External Provisioner

An external provisioner creates shared, persistent storage for the containers in your BRM cloud native environment. It stores:

  • Input data, such as pricing XML files

  • Output data, such as archive files and reject files from Rated Event Loader and Universal Event Loader

  • Data that needs to be shared between containers, such as pin_virtual_time

Install and set up an external provisioner that dynamically provisions volumes with ReadWriteMany access.
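
For example, one widely used option is the Kubernetes NFS subdir external provisioner, which creates ReadWriteMany volumes dynamically from an existing NFS export. The server address and export path here are assumptions:

  helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
  helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=nfs.example.com --set nfs.path=/exports/brm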

Installing WebLogic Kubernetes Operator

Oracle WebLogic Kubernetes Operator helps you deploy and manage WebLogic domains in your Kubernetes environment. It consists of several parts:

  • The operator runtime

  • The model for a Kubernetes custom resource definition (CRD)

  • A Helm chart for installing the operator

In the BRM cloud native environment, you use WebLogic Kubernetes Operator to maintain the domains and services for Billing Care, the Billing Care REST API, Web Services Manager, PDC, and Business Operations Center.

The following shows sample steps for installing WebLogic Kubernetes Operator on your BRM cloud native environment:

  1. Add the Helm repository for WebLogic Kubernetes Operator:

    helm repo add weblogic-operator https://oracle.github.io/weblogic-kubernetes-operator/charts

  2. Create a new namespace for WebLogic Kubernetes Operator. For example, this kubectl command creates the namespace operator:

    kubectl create namespace operator

  3. Install WebLogic Kubernetes Operator:

    helm install weblogic-operator weblogic-operator/weblogic-operator --namespace operator --version version

    where version is the version of WebLogic Kubernetes Operator, such as 2.5.0 or 3.0.0. See "BRM Cloud Native Deployment Software Compatibility" in BRM Compatibility Matrix for a list of supported versions.

    If the installation is successful, you will see something similar to this:

    NAME: weblogic-operator
    LAST DEPLOYED: Tue Oct 6 08:29:03 2020
    NAMESPACE: operator
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None

  4. Check the pod in the operator namespace:

    kubectl get pods --namespace operator

    You should see something similar to this:

    NAME                                 READY   STATUS    RESTARTS   AGE 
    weblogic-operator-849cc6bdd8-vkx7n   1/1     Running   0          57s
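
If the pod does not reach the Running state, the operator's logs are a good first check. For example:

  kubectl logs --namespace operator deployment/weblogic-operator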

For more information about WebLogic Kubernetes Operator, see "Introduction" in the WebLogic Kubernetes Operator documentation.

Installing an Ingress Controller

An ingress controller exposes BRM services outside the Kubernetes cluster and allows clients to communicate with BRM.

The ingress controller monitors ingress objects and acts on the configuration embedded in them to expose BRM HTTP and T3 services to the external network. Adding an external load balancer provides highly reliable, single-point access to the services exposed by the Kubernetes cluster, with the ingress controller exposing those services on behalf of the BRM cloud native instance. Using a load balancer removes the need to expose Kubernetes node IPs to the larger user base, insulates users from cluster changes such as nodes being added or decommissioned, and enforces access policies.

If you are using Billing Care, the Billing Care REST API, or Business Operations Center, you must add a load balancer to your BRM cloud native system that has:

  • Path-based routing for the WebLogic Cluster service.

  • Sticky sessions enabled. That is, if the load balancer redirects a client’s login request to Managed Server 1, all subsequent requests from that client are redirected to Managed Server 1.

  • TLS enabled between the client and the load balancer to secure communications outside of the Kubernetes cluster.

    Business Operations Center and Billing Care use HTTP and rely on the load balancer to terminate HTTPS.
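
For example, with the NGINX ingress controller, these requirements map onto an Ingress resource like the one below. This is a sketch: the host, path, service name, port, and TLS secret are assumptions, and other controllers use different annotations. Save it as billingcare-ingress.yaml and run kubectl apply -f billingcare-ingress.yaml:

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: billingcare
    annotations:
      nginx.ingress.kubernetes.io/affinity: "cookie"            # sticky sessions
      nginx.ingress.kubernetes.io/session-cookie-name: "route"
  spec:
    ingressClassName: nginx
    tls:
      - hosts:
          - brm.example.com
        secretName: brm-tls-cert              # TLS terminates at the ingress
    rules:
      - host: brm.example.com
        http:
          paths:
            - path: /bc                       # path-based routing to the WebLogic cluster service
              pathType: Prefix
              backend:
                service:
                  name: billingcare-cluster-service   # assumed service name
                  port:
                    number: 8001                      # assumed managed server port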

See "Ingress" in the WebLogic Kubernetes Operator documentation for more information about setting up an ingress controller and sample load balancers.

Setting Up ECE Cloud Native Ingress and Egress Flows

Ingress and egress controllers expose ECE services outside the Kubernetes cluster, allowing external networks to communicate with ECE. For example, an ingress controller can route requests from Diameter and 5G HTTP clients to the httpgateway and diametergateway pods for processing. Likewise, an egress controller can send CDR records from the cdrformatter pod to the ECE database.

You can expose external network IPs for ingress traffic from external clients using the following:

  • A load balancer exposing the IPs on the external network. The load balancer sends ingress traffic to Kubernetes services or node ports.

  • Kubernetes service IPs or worker node IPs residing on the external network.

You can route egress traffic through an external network IP that is hosted by a worker node interface.
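
For example, a Kubernetes Service of type LoadBalancer can expose the Diameter Gateway pods on an external IP. This is a sketch; the selector label is an assumption, and 3868 is the standard Diameter port. Save it as diameter-lb.yaml and run kubectl apply -f diameter-lb.yaml:

  apiVersion: v1
  kind: Service
  metadata:
    name: diameter-gateway-lb
  spec:
    type: LoadBalancer            # the load balancer allocates an external IP
    selector:
      app: diametergateway        # assumed pod label
    ports:
      - name: diameter
        port: 3868                # standard Diameter port
        targetPort: 3868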

Figure 4-2 shows an ECE cloud native deployment with sample ingress and egress flows.

Figure 4-2 ECE Cloud Native Ingress and Egress Flows



In this figure:

  • The ingress flows traverse a load balancer, but you can use an alternate ingress flow to meet your business requirements.

  • The egress flows are depicted generically. Network source addressing and routing may vary based on your business requirements.

  • ECE cloud native uses logical external networks. The number and content of these networks may vary depending on your business requirements.

Table 4-1 describes the egress flow from each ECE pod to an endpoint.

Table 4-1 ECE Cloud Native Egress Flows

  • brmgateway: ECE database; BRM database

    Note: The BRM database is accessed during installation.

  • cdrformatter: ECE database

    Note: The ECE database is used for CDR management.

  • cdrgateway: ECE database

    Note: The ECE database is used for CDR management.

  • configloader: ECE database

  • customerupdater: ECE database; BRM database

  • diametergateway: Remote HTTP Gateway (for active-active deployments only); Remote Kafka server (for active-active deployments only)

    Note: This pod does not initiate Diameter connections to Diameter signaling clients.

  • ece-customerloader-job: ECE database; BRM database

  • ece-persistence-job: ECE database

  • ece-persistence-upgrade-job: ECE database

  • ecs: ECE database; BRM database; Remote ECE Coherence Federation (for active-active and active-standby deployments only)

    Note: The BRM database is accessed during customer loading.

  • emgateway: BRM database; Remote HTTP Gateway (for active-active deployments only)

    Note: This pod forwards requests to the remote HTTP Gateway, which is optional for active-active deployments.

  • httpgateway: Charging signaling clients; Remote HTTP Gateway (for active-active deployments only); Remote Kafka server (for active-active deployments only)

    Note: This pod sends HTTP/2 requests to 5G clients. Its egress to a remote ECE HTTP Gateway is needed only if it is processing 5G charging traffic.

  • monitoringagent: Monitoring Agents; Remote Monitoring Agent (for active-active and active-standby deployments only)

  • pricingupdater: Pricing Design Center

  • ratedeventformatter: ECE database; BRM database (for the Rated Event Manager plug-in only)

    Note: Direct access to the BRM database occurs only when the Rated Event Manager plug-in is configured to write rated events directly to the BRM database.
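
If you restrict traffic with Kubernetes network policies, Table 4-1 identifies the egress each pod requires. For example, a policy allowing the cdrformatter pod to reach the ECE database might look like the following sketch; the pod label, database address, and port are assumptions (192.0.2.10 is a documentation-range example address, and 1521 is the default Oracle listener port). Save it as cdrformatter-egress.yaml and run kubectl apply -f cdrformatter-egress.yaml:

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: cdrformatter-egress
  spec:
    podSelector:
      matchLabels:
        app: cdrformatter           # assumed pod label
    policyTypes:
      - Egress
    egress:
      - to:
          - ipBlock:
              cidr: 192.0.2.10/32   # example ECE database address
        ports:
          - protocol: TCP
            port: 1521              # default Oracle listener port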