4 Setting Up Prerequisite Software

You perform prerequisite tasks, such as installing Docker and Helm, before installing the Oracle Communications Billing and Revenue Management (BRM) cloud native deployment package.

Caution:

Oracle does not provide support for any prerequisite third-party software installation or configuration. Any installation or configuration issues related to non-Oracle prerequisite software need to be handled by the customer.

BRM Cloud Native Prerequisite Tasks

As part of preparing your environment for BRM cloud native, you choose, install, and set up various components and services in ways that are best suited for your cloud native environment. The following shows the high-level prerequisite tasks for BRM cloud native:

  1. Ensure that you have downloaded the latest supported software that is compatible with BRM cloud native.

  2. Create a Kubernetes cluster.

  3. Install Docker Engine or another container runtime supported by Kubernetes.

  4. Install Helm.

  5. Create and configure a BRM database.

  6. Install and configure an NFS-based storage provisioner.

  7. If you plan to deploy Billing Care, the Billing Care REST API, Web Services Manager, or Business Operations Center:

    • Install and configure WebLogic Kubernetes Operator

    • Install an ingress controller

  8. If you plan to deploy the Billing Care REST API or the BRM REST Services Manager API, install Oracle Access Management. For installation instructions, see the "Installing Oracle Access Management 12c" tutorial.

  9. If you plan to integrate your BRM cloud native deployment with a Kafka Server, install the Apache Kafka software. For installation instructions, see "Apache Kafka Quickstart" on the Apache Kafka website.

  10. If you plan to integrate your BRM cloud native deployment with Oracle Business Intelligence (BI) Publisher, install the Oracle Business Intelligence software. For installation instructions, see "Installing the Oracle Business Intelligence Software" in Oracle Fusion Middleware Installing and Configuring Oracle Business Intelligence.

Prepare your environment with these technologies installed, configured, and tuned for performance, networking, security, and high availability. Make sure backup nodes are available in case any of the cluster's active nodes fails.

The following sections provide more information about the required components and services, the available options that you can choose from, and the way you must set them up for your BRM cloud native environment.

Software Compatibility

To run, manage, and monitor your BRM cloud native deployment, use the latest compatible versions of all required software. See "BRM Cloud Native Deployment Software Compatibility" in BRM Compatibility Matrix.

Creating a Kubernetes Cluster

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers into logical units for easy management and discovery. When you deploy Kubernetes, you get a cluster of machines called nodes. A reliable cluster must have multiple worker nodes spread over separate physical infrastructure, and a highly reliable cluster must also have multiple primary nodes spread over separate physical infrastructure.

Figure 4-1 illustrates the Kubernetes cluster and the components that it interacts with.

Figure 4-1 Overview of the Kubernetes Cluster



Set up a Kubernetes cluster for your BRM cloud native deployment, securing access to the cluster and its objects with the help of service accounts and proper authentication and authorization modules. Also, set up the following in your cluster:

  • Volumes: Volumes are directories that are accessible to the containers in a Pod and provide a way to share data. The BRM cloud native deployment package uses persistent volumes for sharing data in and out of containers, but does not enforce any particular type. You can choose from the volume type options available in Kubernetes.

  • A networking model: Kubernetes assumes that Pods can communicate with other Pods, regardless of which host they land on. Every Pod gets its own IP address, so you do not need to explicitly create a link between Pods or map container ports to host ports. Several implementations are available that meet the fundamental requirements of Kubernetes’ networking model. Choose the networking model depending on the cluster requirement.

For more information about Kubernetes, see "Kubernetes Concepts" in the Kubernetes documentation.
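As an illustration of securing cluster access with service accounts, the following sketch creates a dedicated namespace and service account declaratively. The names brm-apps and brm-sa are placeholders, not names required by BRM cloud native; adapt them to your environment.

```shell
# Sketch: a namespace and service account for BRM workloads.
# The names (brm-apps, brm-sa) are placeholders.
cat > brm-serviceaccount.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: brm-apps
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: brm-sa
  namespace: brm-apps
EOF

# Apply against your cluster:
#   kubectl apply -f brm-serviceaccount.yaml
```

Pair the service account with the authorization module your cluster uses (for example, RBAC roles and role bindings) so that BRM Pods run with only the permissions they need.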

Installing Docker

The Docker platform is used to containerize BRM products. Install Docker Engine if you want to do either of the following:

  • Use the prebuilt images provided with the BRM cloud native deployment package.

  • Build your own BRM images by writing your own Dockerfiles using the sample Dockerfiles from the BRM cloud native deployment package.

You can use Docker Engine or any container runtime that supports the Open Container Initiative, as long as it supports the Kubernetes version specified in "BRM Cloud Native Deployment Software Compatibility" in BRM Compatibility Matrix.

For more information about installing Docker, see "Install Docker Engine" in the Docker documentation.

Installing Helm

Helm is a package manager that helps you install and maintain software on a Kubernetes system. In Helm, a package is called a chart, which consists of YAML files and templates that are rendered into Kubernetes manifest files. The BRM cloud native deployment package includes Helm charts that help create Kubernetes objects such as ConfigMaps, Secrets, controller sets, and Pods with a single command.

The following shows sample steps for installing and validating Helm:

  1. Download the Helm software from https://github.com/helm/helm/releases.

    For the list of supported Helm versions, see "BRM Cloud Native Deployment Software Compatibility" in BRM Compatibility Matrix.

  2. Extract the Helm files from the archive:

    tar -zxvf helm-version-linux-amd64.tar.gz

    where version is the Helm version number.

  3. Find the helm binary in the unpacked directory and then move it to your desired directory. For example:

    mv linux-amd64/helm /usr/local/bin/helm
  4. Check the version of Helm:

    helm version

Helm uses the kubeconfig file of the user running the helm command to access the Kubernetes cluster. By default, this is $HOME/.kube/config. Helm therefore inherits that user's permissions in the cluster. If role-based access control (RBAC) is configured, ensure that users running Helm are granted sufficient cluster permissions.
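One way to grant a Helm user cluster permissions under RBAC is a ClusterRoleBinding. The following is a sketch only: the user name helm-admin is a placeholder, and binding to the built-in cluster-admin role is for illustration; in production, bind to the narrowest role your policies allow.

```shell
# Sketch: bind a user who runs Helm to a cluster role.
# "helm-admin" is a placeholder user name; narrow the role for production.
cat > helm-user-rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: helm-admin
EOF

# Apply against your cluster:
#   kubectl apply -f helm-user-rbac.yaml
```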

For more information about installing Helm, see "Installing Helm" in the Helm documentation.

Creating and Configuring Your BRM Database

You must install an Oracle database that is accessible through the Kubernetes network so that the BRM cloud native Pods can perform database operations. The Oracle database you use can be:

  • On-premises, which can be physical, virtual, or containerized.

  • Cloud-based, such as Bare Metal, VM, containerized, or DBaaS on Oracle Cloud Infrastructure.

You can use an existing BRM database or create a new one. For the latest supported database versions, see "BRM Software Compatibility" in BRM Compatibility Matrix.

To create and configure a new BRM database:

  1. When you install and create your database, pay particular attention to the following requirements:

    • Install the Oracle Enterprise Edition.

    • Install the following Oracle components: Oracle XML DB, Oracle XML Developer's Kit (XDK), and Oracle JServer.

    • To partition the tables in your BRM database, install the Oracle Partitioning component.
    • Set the Character Set to AL32UTF8.

    • Set the National Character Set to UTF8.

  2. Set your LD_LIBRARY_PATH environment variable to $ORACLE_HOME/lib.

  3. Configure your database manually or let the BRM installer configure it for you. Do one of the following:

    • Use the BRM installer to configure a demonstration database for you

      The BRM installer provides the option to automatically configure your database for demonstration or development systems. The BRM installer configures your database by:

      • Creating the following tablespaces: pin00 (for data), pinx00 (for indexes), and PINTEMP (for a temporary tablespace).

      • Creating a BRM user named pin.

      • Granting connection privileges to the pin user.

    • Configure a demonstration database manually

      You can configure your database manually so that it contains additional or larger tablespaces. For more information, see "Configuring Your Database Manually for Demonstration Systems" in BRM Installation Guide.

    • Configure a production database manually

      For production systems, you must create multiple tablespaces for the BRM data and indexes. For information on how to estimate your database size, create multiple tablespaces, and map the tablespaces to BRM tables, see "Database Configuration and Tuning" in BRM Installation Guide.

The installers for PDC, Billing Care, and all other products automatically create the tablespaces and users that are required for those products.
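After the database is created, you can confirm the character-set requirements from step 1. The following sketch writes a small verification script for SQL*Plus; the connection string is yours to supply, and the expected values match the requirements above (AL32UTF8 and UTF8).

```shell
# Sketch: generate a SQL*Plus script that checks the BRM database
# character sets required above.
cat > check_brm_db.sql <<'EOF'
-- Expect AL32UTF8 for NLS_CHARACTERSET and UTF8 for NLS_NCHAR_CHARACTERSET
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
EOF

# Run it against your database, for example:
#   sqlplus pin@your_connect_string @check_brm_db.sql
```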

Installing an NFS-Based Provisioner

An NFS-based provisioner creates shared, persistent storage for the containers in your BRM cloud native environment. It stores:

  • Input data, such as pricing XML files

  • Output data, such as archive files and reject files from Rated Event Loader and Universal Event Loader

  • Data that needs to be shared between containers, such as pin_virtual_time

Install and set up an NFS-based provisioner that has ReadWriteMany access in your system. For the list of supported NFS-based provisioner versions, see "BRM Cloud Native Deployment Software Compatibility" in BRM Compatibility Matrix.

The following procedure shows how to install and set up a sample nfs-provisioner in your system, but you can use any NFS provisioner.

  1. If you haven't already done so, create a shared file system. For more information, see "Configuring an NFS Server" in Oracle Linux 7 Managing File Systems.

  2. Create a new Kubernetes namespace. For example, this kubectl command creates a namespace named brmnfs-apps:

    kubectl create namespace brmnfs-apps
  3. Set the Kubernetes namespace as the default for your context. For example:

    kubectl config set-context --current --namespace=brmnfs-apps
  4. Download and install nfs-provisioner from https://quay.io/repository/kubernetes_incubator/nfs-provisioner?tag=latest&tab=tags.

  5. Create an override-values.yaml file that changes nfs-provisioner to use the appropriate storage. For example:

    persistence:   
       enabled: value   
       storageClass: "storageType"  
       size: size

    where:

    • value specifies whether persistence is enabled (true) or not (false). The default is false.

    • storageType is the type of storage such as oci.

    • size is the storage size in gigabytes (for example, 20Gi).

  6. Install the nfs-provisioner Helm chart:

    helm install NFSReleaseName stable/nfs-server-provisioner -f override-values.yaml

    where NFSReleaseName is the release name for the nfs-provisioner Helm chart and is used to track this installation instance.

  7. Check the new storage class:

    kubectl get sc 

    You should see something similar to this:

    NAME            PROVISIONER                                  AGE 
    nfs             cluster.local/nfs-server-provisioner         33s 
  8. Ensure that the new nfs-provisioner Pod is running:

    kubectl get pods

    You should see something similar to this:

    NAME                             READY   STATUS    RESTARTS   AGE 
    nfs-server-provisioner-0         1/1     Running   0          94s
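To confirm that dynamic provisioning works end to end, you can create a small test claim against the new nfs storage class. This is a sketch: the claim name and size are arbitrary, and the storage class name nfs matches the sample output above; adjust it if your class is named differently.

```shell
# Sketch: a test PersistentVolumeClaim against the nfs storage class
# created by the provisioner above. Name and size are arbitrary.
cat > test-nfs-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-claim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi
EOF

# Apply and verify against your cluster:
#   kubectl apply -f test-nfs-pvc.yaml
#   kubectl get pvc test-nfs-claim   # STATUS should become Bound
```

A claim that reaches the Bound state confirms that the provisioner can serve the ReadWriteMany access mode BRM cloud native requires.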

Installing an Ingress Controller

You use an ingress controller, exposed through a NodePort service or an external load balancer, to expose BRM services outside of the Kubernetes cluster and allow clients to communicate with BRM.

The ingress controller monitors the ingress objects created by the BRM cloud native deployment and acts on the configuration embedded in these objects to expose BRM HTTP and T3 services to the external network. This is achieved using NodePort services exposed by the ingress controller. Adding an external load balancer provides a highly reliable single point of access into the services exposed by the Kubernetes cluster. In this case, the NodePort services are exposed by the ingress controller on behalf of the BRM cloud native instance. Using a load balancer removes the need to expose Kubernetes node IPs to the larger user base, insulates users from changes to the Kubernetes cluster (in terms of nodes appearing or being decommissioned), and enforces access policies.

If you are using Billing Care, the Billing Care REST API, or Business Operations Center, you must add an ingress controller to your BRM cloud native system that has:

  • Path-based routing for the WebLogic Cluster service.

  • Support for sticky sessions. That is, if the load balancer redirects a client’s login request to Managed Server 1, all subsequent requests from that client are redirected to Managed Server 1.

  • TLS enabled between the client and the load balancer to secure communications outside of the Kubernetes cluster.

    Business Operations Center and Billing Care use HTTP and rely on the load balancer to do the HTTPS termination.
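As an illustration, if you choose the NGINX ingress controller (one of several options), the sticky-session and TLS requirements above can be expressed through annotations on an Ingress resource. The following is a sketch only: the host name, TLS secret, service name, path, and port are placeholders, not values defined by BRM cloud native.

```shell
# Sketch: an Ingress with cookie-based session affinity and TLS,
# assuming the NGINX ingress controller. Host, secret, service,
# path, and port values are placeholders.
cat > billingcare-ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: billingcare-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "brm-session"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - brm.example.com
    secretName: brm-tls-secret
  rules:
  - host: brm.example.com
    http:
      paths:
      - path: /bc
        pathType: Prefix
        backend:
          service:
            name: billingcare-cluster-service
            port:
              number: 8001
EOF

# Apply against your cluster:
#   kubectl apply -f billingcare-ingress.yaml
```

The affinity annotations keep a client's session pinned to one Managed Server, and the tls section has the load balancer terminate HTTPS before forwarding HTTP to the WebLogic cluster, matching the requirements listed above.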

Installing WebLogic Kubernetes Operator

Oracle WebLogic Kubernetes Operator helps you deploy and manage WebLogic domains in your Kubernetes environment. It consists of several parts: the operator runtime, the model for a Kubernetes custom resource definition (CRD), and a Helm chart for installing the operator. In the BRM cloud native environment, you use WebLogic Kubernetes Operator to maintain the domains and services for Billing Care, the Billing Care REST API, Web Services Manager, and Business Operations Center.

The following shows sample steps for installing WebLogic Kubernetes Operator on your BRM cloud native environment:

  1. Add the Helm repository for WebLogic Kubernetes Operator:

    helm repo add weblogic-operator https://oracle.github.io/weblogic-kubernetes-operator/charts
  2. Create a new namespace for WebLogic Kubernetes Operator. For example, this kubectl command creates the namespace operator:

    kubectl create namespace operator
  3. Set the namespace as the default for your context. For example:

    kubectl config set-context --current --namespace=operator
  4. Install WebLogic Kubernetes Operator:

    helm install weblogic-operator weblogic-operator/weblogic-operator --namespace operator --version version

    where version is the version of WebLogic Kubernetes Operator, such as 2.5.0 or 3.0.0. See "BRM Cloud Native Deployment Software Compatibility" in BRM Compatibility Matrix for a list of supported versions.

    If the installation is successful, you will see something similar to this:

    NAME: weblogic-operator 
    LAST DEPLOYED: Tue Oct 6 08:29:03 2020 
    NAMESPACE: operator 
    STATUS: deployed 
    REVISION: 1 
    TEST SUITE: None
  5. Check the Pod:

    kubectl get pods

    You should see something similar to this:

    NAME                                 READY   STATUS    RESTARTS   AGE 
    weblogic-operator-849cc6bdd8-vkx7n   1/1     Running   0          57s

For more information about WebLogic Kubernetes Operator, see Oracle WebLogic Kubernetes Operator User Guide.