4 Setting Up Prerequisite Software

Learn about prerequisite tasks, such as installing Docker and Helm, that you perform before installing the Oracle Communications Billing and Revenue Management (BRM) cloud native deployment package.

Topics in this document:

  • BRM Cloud Native Prerequisite Tasks
  • Software Compatibility
  • Creating a Kubernetes Cluster
  • Installing Docker
  • Installing Helm
  • Creating and Configuring Your BRM Database
  • Installing an External Provisioner
  • Installing an Ingress Controller
  • Installing WebLogic Kubernetes Operator

Caution:

Oracle does not provide support for the installation or configuration of any prerequisite third-party software. Any installation or configuration issues related to non-Oracle prerequisite software must be handled by the customer.

BRM Cloud Native Prerequisite Tasks

As part of preparing your environment for BRM cloud native, you choose, install, and set up various components and services in ways that are best suited for your cloud native environment. The following shows the high-level prerequisite tasks for BRM cloud native:

  1. Ensure that you have downloaded the latest supported software that is compatible with BRM cloud native.

  2. Create a Kubernetes cluster.

  3. Install Docker Engine or another container runtime supported by Kubernetes.

  4. Install Helm.

  5. Create and configure a BRM database.

  6. Install and configure an external provisioner.

  7. If you plan to deploy Billing Care, the Billing Care REST API, Web Services Manager, or Business Operations Center:

    • Install and configure WebLogic Kubernetes Operator

    • Install an ingress controller

  8. If you plan to deploy the Billing Care REST API or the BRM REST Services Manager API, install Oracle Access Management. For installation instructions, see the "Install Oracle Access Management 12c" tutorial.

  9. If you plan to integrate your BRM cloud native deployment with a Kafka Server, install the Apache Kafka software. For installation instructions, see "Apache Kafka Quickstart" on the Apache Kafka website.

  10. If you plan to integrate your BRM cloud native deployment with Oracle Business Intelligence (BI) Publisher, install the Oracle Business Intelligence software. For installation instructions, see "Installing the Oracle Business Intelligence Software" in Oracle Fusion Middleware Installing and Configuring Oracle Business Intelligence.

Prepare your environment by installing, configuring, and tuning these technologies for performance, networking, security, and high availability. Make sure backup nodes are available in case any of the cluster's active nodes fail.

The following sections provide more information about the required components and services, the available options that you can choose from, and the way you must set them up for your BRM cloud native environment.

Software Compatibility

To run, manage, and monitor your BRM cloud native deployment, ensure that you are using the latest compatible versions of all software. See "BRM Cloud Native Deployment Software Compatibility" in BRM Compatibility Matrix.

Creating a Kubernetes Cluster

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers into logical units for easy management and discovery. When you deploy Kubernetes, you get a cluster made up of machines called nodes. A reliable cluster must have multiple worker nodes spread over separate physical infrastructure, and a highly reliable cluster must also have multiple primary nodes spread over separate physical infrastructure.

Figure 4-1 illustrates the Kubernetes cluster and the components that it interacts with.

Figure 4-1 Overview of the Kubernetes Cluster



Set up a Kubernetes cluster for your BRM cloud native deployment, securing access to the cluster and its objects with the help of service accounts and proper authentication and authorization modules. Also, set up the following in your cluster:

  • Volumes: Volumes are directories that are accessible to the containers in a pod and provide a way to share data. The BRM cloud native deployment package uses persistent volumes for sharing data in and out of containers, but does not enforce any particular type. You can choose from the volume type options available in Kubernetes.

  • A networking model: Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Every pod gets its own IP address, so you do not need to explicitly create a link between pods or map container ports to host ports. Several implementations are available that meet the fundamental requirements of Kubernetes’ networking model. Choose the networking model depending on the cluster requirement.

For more information about Kubernetes, see "Kubernetes Concepts" in the Kubernetes documentation.
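
As an illustrative sketch of securing access with service accounts, the following commands create a dedicated namespace and service account for BRM workloads and bind the account to Kubernetes' built-in, namespace-scoped admin role. The names brm, brm-admin, and brm-admin-binding are placeholders, not values required by BRM cloud native:

  # Create a namespace and a service account for BRM workloads
  kubectl create namespace brm
  kubectl create serviceaccount brm-admin --namespace brm

  # Bind the service account to the built-in admin role for that namespace only
  kubectl create rolebinding brm-admin-binding --clusterrole=admin --serviceaccount=brm:brm-admin --namespace brm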

Installing Docker

The Docker platform is used to containerize BRM products. Install Docker Engine if you want to do either of the following:

  • Use the prebuilt images provided with the BRM cloud native deployment package.

  • Build your own BRM images by writing your own Dockerfiles using the sample Dockerfiles from the BRM cloud native deployment package (see the sample build command at the end of this section).

You can use Docker Engine or any container runtime that supports the Open Container Initiative, as long as it supports the Kubernetes version specified in "BRM Cloud Native Deployment Software Compatibility" in BRM Compatibility Matrix.
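
For example, if you write a Dockerfile based on one of the samples in the deployment package, you might build and tag the image with a command similar to the following. The repository, image name, tag, Dockerfile path, and build context are placeholders:

  # Build a BRM image from a customized sample Dockerfile
  docker build --tag RepositoryName/ImageName:ImageTag --file DockerfilePath BuildContextDirectory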

Installing Helm

Helm is a package manager that helps you install and maintain software on a Kubernetes system. In Helm, a package is called a chart, which consists of YAML files and templates rendered into Kubernetes manifest files. The BRM cloud native deployment package includes Helm charts that help create Kubernetes objects, such as ConfigMaps, Secrets, controller sets, and pods, with a single command.

The following shows sample steps for installing and validating Helm:

  1. Download the Helm software from https://github.com/helm/helm/releases.

    For the list of supported Helm versions, see "BRM Cloud Native Deployment Software Compatibility" in BRM Compatibility Matrix.

  2. Extract the Helm files from the archive:

    tar -zxvf helm-version-linux-amd64.tar.gz

    where version is the Helm version number.

  3. Find the helm binary in the unpacked directory and move it to your desired directory. For example:

    mv linux-amd64/helm /usr/local/bin/helm

  4. Check the version of Helm:

    helm version

Helm uses the kubeconfig file of the user who runs the helm command to access the Kubernetes cluster. By default, this file is $HOME/.kube/config. Helm inherits the permissions that the kubeconfig grants on the cluster. If role-based access control (RBAC) is configured, you must grant sufficient cluster permissions to Helm users.
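
For example, in a cluster with RBAC enabled, you could bind a Helm user to the built-in cluster-admin role with a command such as the following. The user name and binding name are placeholders, and in practice you should grant only the permissions that your security policies allow:

  # Grant cluster-wide administrative permissions to a Helm user (broad; restrict as needed)
  kubectl create clusterrolebinding helm-admin-binding --clusterrole=cluster-admin --user=HelmUserName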

For more information about installing Helm, see "Installing Helm" in the Helm documentation.

Creating and Configuring Your BRM Database

You must install an Oracle database accessible through the Kubernetes network so BRM cloud native pods can perform database operations. The Oracle database you use can be:

  • On-premises, running on physical hardware, a virtual machine (VM), or a container

  • Cloud-based, such as a Bare Metal, VM, container, or DBaaS deployment on Oracle Cloud Infrastructure

You can use an existing BRM database or create a new one. See "BRM Software Compatibility" in BRM Compatibility Matrix for the latest supported database versions.

To create and configure a new BRM database:

  1. When you install and create your database, pay particular attention to the following requirements:

    • Install Oracle Enterprise Edition

    • Install the following Oracle components: Oracle XML DB, Oracle XML Developer's Kit (XDK), and Oracle JServer

    • To partition the tables in your BRM database, install the Oracle Partitioning component
    • Set the Character Set to AL32UTF8

    • Set the National Character Set to UTF8

  2. (Optional) Set up TLS authentication in the BRM database. See "Configuring Transport Layer Security Authentication" in Oracle Database Security Guide. Also, ensure that you:

    • Create a TLS certificate or obtain one from a certificate provider

    • Install the certificate in the Oracle Database Server

  3. Set your LD_LIBRARY_PATH environment variable to $ORACLE_HOME/lib.

  4. You can configure your database manually or let the BRM installer configure it for you. Do one of the following:

    • Use the BRM installer to configure a demonstration database for you

      The BRM installer provides the option to automatically configure your database for demonstration or development systems. It configures your database by:

      • Creating the following tablespaces: pin00 (for data), pinx00 (for indexes), and PINTEMP (for a temporary tablespace)

      • Creating a BRM user named pin

      • Granting connection privileges to the pin user

    • Configure a demonstration database manually

      You can configure your database manually so that it contains additional or larger tablespaces. For more information, see "Configuring Your Database Manually for Demonstration Systems" in BRM Installation Guide.

    • Configure a production database manually

      For production systems, you must create multiple tablespaces for the BRM data and indexes. For information on how to estimate your database size, create multiple tablespaces, and map the tablespaces to BRM tables, see "Database Configuration and Tuning" in BRM Installation Guide.

  5. Grant the BRM schema user the SELECT privilege on the V$SESSION view. To do so, connect to the Oracle database with SQL*Plus as the system user and then enter this command (a sample session for verifying the grant follows these steps):

    SQL> GRANT SELECT ON V$SESSION TO brmSchemaUser;

The installers for PDC, Billing Care, and all other products automatically create the tablespaces and users that are required for those products.
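
The following SQL*Plus session is a sample way to confirm that the BRM schema user can reach the database over the network, that the character sets match the requirements in step 1, and that the grant from step 5 is in place. The host name, port, service name, schema user, and password are placeholders:

  sqlplus brmSchemaUser/password@//dbhost.example.com:1521/pindb

  SQL> SELECT parameter, value FROM nls_database_parameters
       WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

  SQL> SELECT COUNT(*) FROM V$SESSION;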

Installing an External Provisioner

An external provisioner creates shared, persistent storage for the containers in your BRM cloud native environment. It stores:

  • Input data, such as pricing XML files

  • Output data, such as archive files and reject files from Rated Event Loader and Universal Event Loader

  • Data that needs to be shared between containers, such as pin_virtual_time

Install and set up an external provisioner that has ReadWriteMany access in your system and that provisions volumes dynamically.
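
As an illustration, after you install a provisioner, a shared claim for BRM data might be defined in a file such as brm-shared-pvc.yaml with contents like the following. The claim name, storage size, and storage class name are placeholders; the storage class must map to a provisioner that supports ReadWriteMany access and dynamic provisioning:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: brm-shared-pvc
  spec:
    # BRM pods that share files mount this claim, so it must allow ReadWriteMany access
    accessModes:
      - ReadWriteMany
    storageClassName: StorageClassName
    resources:
      requests:
        storage: 10Gi

You would then create the claim in your BRM namespace with a command such as:

  kubectl apply --namespace NameSpace -f brm-shared-pvc.yaml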

Installing an Ingress Controller

You use an ingress controller to expose BRM services outside of the Kubernetes cluster and allow clients to communicate with BRM.

The ingress controller monitors the ingress objects and acts on the configuration embedded in these objects to expose BRM HTTP and T3 services to the external network. Adding an external load balancer provides a highly reliable single-point access into the services exposed by the Kubernetes cluster. In this case, the services are exposed by the ingress controller on behalf of the BRM cloud native instance. Using a load balancer removes the need to expose Kubernetes node IPs to the larger user base, insulates users from changes (in terms of nodes appearing or being decommissioned) to the Kubernetes cluster, and enforces access policies.

If you are using Billing Care, the Billing Care REST API, or Business Operations Center, you must add a load balancer to your BRM cloud native system that has:

  • Path-based routing for the WebLogic Cluster service.

  • Sticky sessions enabled. That is, if the load balancer redirects a client’s login request to Managed Server 1, all subsequent requests from that client are redirected to Managed Server 1.

  • TLS enabled between the client and the load balancer to secure communications outside of the Kubernetes cluster.

    Business Operations Center and Billing Care use HTTP and rely on the load balancer to do the HTTPS termination.

See "Ingress" in the WebLogic Kubernetes Operator documentation for more information about setting up an ingress controller and sample load balancers.

Installing WebLogic Kubernetes Operator

Oracle WebLogic Kubernetes Operator helps you to deploy and manage WebLogic domains in your Kubernetes environment. It consists of several parts:

  • The operator runtime

  • The model for a Kubernetes custom resource definition (CRD)

  • A Helm chart for installing the operator

In the BRM cloud native environment, you use WebLogic Kubernetes Operator to maintain the domains and services for Billing Care, the Billing Care REST API, Web Services Manager, Pricing Design Center (PDC), and Business Operations Center.

The following shows sample steps for installing WebLogic Kubernetes Operator on your BRM cloud native environment:

  1. Add the Helm repository for WebLogic Kubernetes Operator:

    helm repo add weblogic-operator https://oracle.github.io/weblogic-kubernetes-operator/charts

  2. Create a new namespace for WebLogic Kubernetes Operator. For example, this kubectl command creates the namespace operator:

    kubectl create namespace operator

  3. Install WebLogic Kubernetes Operator:

    helm install weblogic-operator weblogic-operator/weblogic-operator --namespace operator --version version

    where version is the version of WebLogic Kubernetes Operator, such as 2.5.0 or 3.0.0. See "BRM Cloud Native Deployment Software Compatibility" in BRM Compatibility Matrix for a list of supported versions.

    If the installation is successful, you will see something similar to this:

    NAME: weblogic-operator 
    LAST DEPLOYED: Tue Oct 6 08:29:03 2020 
    NAMESPACE: operator 
    STATUS: deployed 
    REVISION: 1 
    TEST SUITE: None

  4. Check the pod:

    kubectl get pods --namespace operator

    You should see something similar to this:

    NAME                                 READY   STATUS    RESTARTS   AGE 
    weblogic-operator-849cc6bdd8-vkx7n   1/1     Running   0          57s

For more information about WebLogic Kubernetes Operator, see "Introduction" in the WebLogic Kubernetes Operator documentation.