2 Installing Cloud Native Policy and Charging Rules Function Deployment Package

This chapter describes how to install Cloud Native Policy and Charging Rules Function (CNPCRF) on a cloud native environment.

This chapter contains the following:

Pre-Installation Tasks

Prior to installing the CNPCRF, perform the following tasks:

Checking the Software Requirements

The following software must be installed before installing Cloud Native Policy and Charging Rules Function (CNPCRF):

Software     Version
Kubernetes   v1.15.3
HELM         v2.14.3

Additional software may need to be deployed depending on the services required:

Software                   App Version   Notes
elasticsearch              7.4.0         Needed for Logging Area
elastic-curator            2.0.2         Needed for Logging Area
elastic-exporter           1.1.2         Needed for Logging Area
logs                       2.7.0         Needed for Logging Area
kibana                     7.4.0         Needed for Logging Area
grafana                    3.8.4         Needed for Metrics Area
prometheus                 9.2.0         Needed for Metrics Area
prometheus-node-exporter   1.6.0         Needed for Metrics Area
metallb                    0.8.4         Needed for External IP
metrics-server             2.5.1         Needed for Metric Server
occne-snmp-notifier        0.3.0         Needed for Metric Server
tracer                     0.13.3        Needed for Tracing Area

Note:

The above software is available if the CNPCRF is deployed in the Oracle Communications Cloud Native Environment (OCCNE). If you are deploying CNPCRF in any other environment, the above software must be installed before installing the CNPCRF. To check the installed software items, execute:
helm ls
Some systems may need to run the helm command with the admin.conf file as follows:
helm --kubeconfig admin.conf

Checking the Environment Setup

Note:

This section is applicable only when the Cloud Native Policy and Charging Rules Function (CNPCRF) is deployed in an environment other than OCCNE.

Network access

The Kubernetes cluster hosts must have network access to:

  • Local helm repository, where the CNPCRF helm charts are available.
    To check if the Kubernetes cluster hosts have network access to the local helm repository, execute the following command:
    helm repo update

    Note:

    Some systems may need to run the helm command with the admin.conf file as follows:
    helm --kubeconfig admin.conf
  • Local docker image repository, where the CNPCRF images are available.
    To check if the Kubernetes cluster hosts have network access to the local docker image repository, pull any image with its tag name by executing the following command:
    docker pull docker-repo/image-name:image-tag
    where:

    docker-repo is the IP address or host name of the repository.

    image-name is the docker image name.

    image-tag is the tag of the image used for the CNPCRF pod.

Note:

All the kubectl and helm commands used in this guide must be executed on a system chosen according to your infrastructure and deployment. It could be a client machine, such as a VM, a server, or a local desktop.

Client Machine Requirements

Following are the requirements for the client machine where the deployment commands are executed:
  • It should have network access to the helm repository and docker image repository.
  • Helm repository must be configured on the client.
  • It should have network access to the Kubernetes cluster.
  • It should have necessary environment settings to run the kubectl commands. The environment should have privileges to create namespace in the Kubernetes cluster.
  • It should have the helm client installed with the push plugin. The environment should be configured so that the helm install command deploys the software in the Kubernetes cluster.

Server or Space Requirements

For information on the server or space requirements, see the Oracle Communications Cloud Native Environment (OCCNE) Installation Guide.

Installation Tasks

Downloading CNPCRF package

To download the CNPCRF package from MOS:
  1. Log in to My Oracle Support with your credentials.
  2. Select the Patches and Updates tab to locate the patch.
  3. In the Patch Search window, click Product or Family (Advanced).
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field and select Oracle Communications Cloud Native Core Policy and Charging Rules Function 1.6.0.0.0 from the Release drop-down list.
  5. Click Search. The Patch Advanced Search Results page displays a list of releases.
  6. Click the required patch from the search results. In the window that opens, click Download.
  7. Click the zip file to download the package. The package is named as follows:

    ReleaseName-pkg-Releasenumber.tgz

    where:

    ReleaseName is a name which is used to track this installation instance.

    Releasenumber is the release number.

    For example, occnpcrf-pkg-1.6.0.tgz

Pushing the Images to Customer Docker Registry

To push the images to the customer docker registry:
  1. Untar the CNPCRF package file to get the CNPCRF docker image tar file.

    tar -xvzf ReleaseName-pkg-Releasenumber.tgz

    The directory consists of the following:
    • CNPCRF Docker Images File:

      occnpcrf-images-1.6.0.tar

    • Helm File:

      occnpcrf-1.6.0.tgz

    • Readme txt File:

      Readme.txt (Contains cksum and md5sum of tarballs)
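
    Before loading the images, the extracted tarballs can be checked against the values recorded in Readme.txt. The sketch below is a minimal helper (the function name is illustrative; the file names come from the package layout above):

    ```shell
    # Hedged helper: print the cksum and md5sum of a tarball so they can be
    # compared against the corresponding lines in Readme.txt.
    verify_tarball() {
      f="$1"
      cksum "$f"    # CRC and byte count, as recorded in Readme.txt
      md5sum "$f"   # MD5 digest, as recorded in Readme.txt
    }

    # Example (uncomment after extracting the package):
    # verify_tarball occnpcrf-images-1.6.0.tar
    # verify_tarball occnpcrf-1.6.0.tgz
    ```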

  2. Load the occnpcrf-images-1.6.0.tar file into the Docker system:
    docker load --input /IMAGE_PATH/occnpcrf-images-1.6.0.tar
  3. Verify that the image is loaded correctly by entering this command:
    docker images
    Refer to Docker Images for more information on the docker images available in CNPCRF.
  4. Create a new tag for each imported image and push it to the customer docker registry by entering these commands:
    docker tag occnpcrf/diam-gateway:1.6.0 CUSTOMER_REPO/diam-gateway:1.6.0
    docker push CUSTOMER_REPO/diam-gateway:1.6.0 
     
    docker tag occnpcrf/pcrf_core:1.6.0 CUSTOMER_REPO/pcrf_core:1.6.0
    docker push CUSTOMER_REPO/pcrf_core:1.6.0
     
    docker tag occnpcrf/ocpm_cm_service:1.6.0 CUSTOMER_REPO/ocpm_cm_service:1.6.0
    docker push CUSTOMER_REPO/ocpm_cm_service:1.6.0 
     
    docker tag occnpcrf/ocpm_config_server:1.6.0 CUSTOMER_REPO/ocpm_config_server:1.6.0
    docker push CUSTOMER_REPO/ocpm_config_server:1.6.0 
     
    docker tag occnpcrf/ocpm_pre:1.6.0 CUSTOMER_REPO/ocpm_pre:1.6.0
    docker push CUSTOMER_REPO/ocpm_pre:1.6.0  
     
    docker tag occnpcrf/readiness-detector:1.6.0 CUSTOMER_REPO/readiness-detector:latest
    docker push CUSTOMER_REPO/readiness-detector:latest
     
    docker tag occnpcrf/soapconnector:1.6.0 CUSTOMER_REPO/soapconnector:1.6.0
    docker push CUSTOMER_REPO/soapconnector:1.6.0 
     
    docker tag occnpcrf/policyds:1.6.0 CUSTOMER_REPO/policyds:1.6.0
    docker push CUSTOMER_REPO/policyds:1.6.0
     
    docker tag occnpcrf/ldap-gateway:1.6.0 CUSTOMER_REPO/ldap-gateway:1.6.0
    docker push CUSTOMER_REPO/ldap-gateway:1.6.0
     
    docker tag occnpcrf/ocingress_gateway:1.5.1 CUSTOMER_REPO/ocingress_gateway:1.5.1
    docker push CUSTOMER_REPO/ocingress_gateway:1.5.1
     
    docker tag occnpcrf/ocpm_queryservice:1.6.0 CUSTOMER_REPO/ocpm_queryservice:1.6.0
    docker push CUSTOMER_REPO/ocpm_queryservice:1.6.0
     
    where:

    CUSTOMER_REPO is the docker registry address, including the port number if the registry has a port attached.

    Note:

    For OCCNE, copy the package to the bastion server and use localhost:5000 as CUSTOMER_REPO to tag the images and push them to the bastion docker registry.

    Note:

    You may need to configure the Docker certificate before running the push command to access the customer registry over HTTPS; otherwise, the docker push command may fail.
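
    The per-image tag-and-push pairs above can be wrapped in a loop. The following is a minimal sketch for the images that are tagged 1.6.0 on both sides (readiness-detector and ocingress_gateway use different target tags or versions above, so handle them separately); the function name is illustrative:

    ```shell
    # Hedged sketch: tag and push each 1.6.0 CNPCRF image to the customer registry.
    # push_images takes CUSTOMER_REPO as its argument (for OCCNE, localhost:5000).
    push_images() {
      repo="$1"
      for img in diam-gateway pcrf_core ocpm_cm_service ocpm_config_server \
                 ocpm_pre soapconnector policyds ldap-gateway ocpm_queryservice; do
        docker tag "occnpcrf/${img}:1.6.0" "${repo}/${img}:1.6.0"
        docker push "${repo}/${img}:1.6.0"
      done
    }

    # Example (uncomment to run against your registry):
    # push_images localhost:5000
    ```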

Installing the CNPCRF Package

To install the CNPCRF package:

  1. Log in to the server where the ssh keys are stored and the SQL nodes are accessible.
  2. Connect to the SQL nodes.
  3. Login to the database as a root user.
  4. Create a privileged user and grant all permissions to that user by executing the following commands:
    CREATE USER 'username'@'%' IDENTIFIED BY 'password';
    GRANT ALL PRIVILEGES ON *.* to 'username'@'%';
    FLUSH PRIVILEGES;
    where:

    username is the username and password is the password for the MySQL privileged user.

    For example: The following commands use "pcrfprivilegedusr" as the username and "pcrfprivilegedpasswd" as the password, and grant all permissions to "pcrfprivilegedusr".
    CREATE USER 'pcrfprivilegedusr'@'%' IDENTIFIED BY 'pcrfprivilegedpasswd';
    GRANT ALL PRIVILEGES ON *.* to 'pcrfprivilegedusr'@'%';
    FLUSH PRIVILEGES;
  5. Create a user and grant the necessary permissions to access the tables on all SQL nodes by executing the following commands:
    CREATE USER 'username'@'%' IDENTIFIED BY 'password';
    GRANT SELECT, INSERT, UPDATE, DELETE ON *.* TO 'username'@'%';

    where:

    username is the username and password is the password for the MySQL database user.

    For example: The following commands use "pcrfusr" as the username and "pcrfpasswd" as the password, and grant the necessary permissions to "pcrfusr".
    CREATE USER 'pcrfusr'@'%' IDENTIFIED BY 'pcrfpasswd';
    GRANT SELECT, INSERT, UPDATE, DELETE ON *.* TO 'pcrfusr'@'%';

    Note:

    For multiple CNPCRF deployments pointing to the same MySQL database, ensure that the database name is unique for each CNPCRF deployment. The database name can be specified in the envMysqlDatabase parameter in the custom-value.yaml file.
  6. Execute the command SHOW GRANTS FOR 'pcrfprivilegedusr'@'%'; to confirm that the privileged user has all the permissions.
  7. Exit from the database and log out from the MySQL node.
  8. Create the namespace if it does not already exist by entering the command:
    kubectl create namespace release_namespace
    where:

    release_namespace is the CNPCRF deployment namespace used by the helm command.

  9. Create a kubernetes secret for a privileged user and an application user. In CNPCRF, there are two types of users: an application user, who can perform INSERT, UPDATE, DELETE, and SELECT operations on the created database, and a privileged deployment user, who can use helm hooks to perform DDL operations during install, upgrade, rollback, or delete operations.
    To create a kubernetes secret for storing the database username and password for these users:
    1. Create a yaml file with the application user's username and password. Below is a sample yaml file, where "pcrfusr" is the username and "pcrfpasswd" is the password.
      apiVersion: v1
      kind: Secret
      metadata:
        name: pcrf-db-pass
      type: Opaque
      data:
        mysql-username: cGNyZnVzcg==
        mysql-password: cGNyZnBhc3N3ZA==

      Note:

      The values for mysql-username and mysql-password should be base64 encoded.
    2. Create a yaml file with the privileged user's username and password. Below is a sample yaml file, where "pcrfprivilegedusr" is the username and "pcrfprivilegedpasswd" is the password.
      apiVersion: v1
      kind: Secret
      metadata:
        name: pcrf-privileged-db-pass
      type: Opaque
      data:
        mysql-username: cGNyZnByaXZpbGVnZWR1c3I=
        mysql-password: cGNyZnByaXZpbGVnZWRwYXNzd2Q=

      Note:

      The values for mysql-username and mysql-password should be base64 encoded.
    3. Execute the following commands to add the kubernetes secrets in a namespace:
      kubectl create -f yaml_file_name1 -n release_namespace
      kubectl create -f yaml_file_name2 -n release_namespace
      where:

      release_namespace is the deployment namespace used by the helm command.

      yaml_file_name1 is the name of the yaml file created in step 9a.

      yaml_file_name2 is the name of the yaml file created in step 9b.
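
    The base64-encoded values used in the sample secret files above can be generated (and verified) with the base64 utility. The -n flag on echo matters: a trailing newline would change the encoding.

    ```shell
    # Generate the base64-encoded values for the kubernetes secret yaml files.
    # echo -n omits the trailing newline so only the credential itself is encoded.
    echo -n 'pcrfusr' | base64                # cGNyZnVzcg==
    echo -n 'pcrfpasswd' | base64             # cGNyZnBhc3N3ZA==
    echo -n 'pcrfprivilegedusr' | base64      # cGNyZnByaXZpbGVnZWR1c3I=
    echo -n 'pcrfprivilegedpasswd' | base64   # cGNyZnByaXZpbGVnZWRwYXNzd2Q=

    # Decode to double-check a value before applying the secret:
    echo 'cGNyZnVzcg==' | base64 -d           # pcrfusr
    ```

    Alternatively, kubectl create secret generic pcrf-db-pass --from-literal=mysql-username=pcrfusr --from-literal=mysql-password=pcrfpasswd -n release_namespace creates an equivalent secret and performs the encoding for you.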

  10. Create the customized custom-values-cnpcrf.yaml file with the required input parameters. To customize the file, see Customizing Cloud Native Policy and Charging Rules Function.

    Note:

    The values of the parameters in the custom values yaml file override the default values specified in the values yaml file of the helm chart. If the envMysqlDatabase parameter is modified, then you should modify the envMysqlDatabaseConfigServer parameter with the same value in the values.yaml file of policycds.
  11. Install CNPCRF by entering this command:
    helm install HELM_CHART --name release_name --namespace release_namespace -f CUSTOM_VALUES_YAML_FILE --atomic
    where:

    HELM_CHART is the location of the helm chart extracted from the occnpcrf-pkg-1.6.0.tgz file.

    release_name is the release name used by helm command.

    release_namespace is the deployment namespace used by helm command.

    CUSTOM_VALUES_YAML_FILE is the name of the custom values yaml file (including its location).

    For example:
    helm install /home/cloud-user/occnpcrf/pcrf --name occnpcrf --namespace occnpcrf -f ocpcrf-custom-values-1.6.0.yaml --atomic

    Note:

    Do not exit from helm install manually. After running the helm install command, it takes some time to install all the services. In the meantime, you must not press "Ctrl+C" to exit the helm install command, as this can lead to anomalous behavior.
    Parameters in the helm install command:
    • atomic: If this parameter is set, the installation process purges the chart on failure. The --wait flag is set automatically.
    • wait: If this parameter is set, the installation process waits until all pods, PVCs, and Services, and the minimum number of pods of a Deployment, StatefulSet, or ReplicaSet, are in a ready state before marking the release as successful. It waits for as long as --timeout.
    • timeout duration: Specifies the time to wait for any individual kubernetes operation (such as Jobs for hooks). The default value is 5m0s. If the helm install command fails at any point to create a kubernetes object, it internally calls the purge to delete it after the timeout value (default: 300s). The timeout value applies to each automatic purge on installation failure, not to the overall installation.
  12. Check the installation status by entering this command:
    helm status release_name
    where:

    release_name is the release name used by helm command.

    You will see the status as DEPLOYED if the installation is successful.
    Execute the following command to get status of jobs and pods:
    kubectl get jobs,pods -n release_namespace
    You will see the status as Running for all the pods if the installation is successful.

    If any pod has an error status, execute this command to describe the pod events:

    kubectl describe pod pod-name -n release_namespace
    where, pod-name is the name of a pod with an error status.
    If any pod has an error status, execute this command for log details:
    kubectl logs -f pod-name -n release_namespace
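
    The pod checks in this step can be combined into a small helper. The sketch below (the function name and namespace are illustrative) describes and fetches recent logs for every pod that is not in the Running or Completed state:

    ```shell
    # Hedged helper: for each pod in the namespace that is not Running/Completed,
    # print its events (describe) and the last lines of its logs.
    troubleshoot() {
      ns="$1"
      kubectl get pods -n "$ns" --no-headers |
        awk '$3 != "Running" && $3 != "Completed" { print $1 }' |
        while read -r pod; do
          echo "== $pod"
          kubectl describe pod "$pod" -n "$ns"
          kubectl logs --tail=50 "$pod" -n "$ns"
        done
    }

    # Example (uncomment to run against your deployment namespace):
    # troubleshoot release_namespace
    ```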