2 Installing OCNADD

This chapter describes how to install Oracle Communications Network Analytics Data Director (OCNADD) on the supported platforms. The OCNADD installation is supported over the following platforms:
  • Oracle Communications Cloud Native Environment (OCCNE)

    This document describes the OCNADD installation on OCCNE. To perform the installation on OCCNE, see Prerequisites.

  • VMware Tanzu Application Platform (TANZU)

    The procedure for OCNADD installation on TANZU is similar to the OCNADD installation on OCCNE. However, any steps specific to the TANZU platform are mentioned explicitly in the document.

Prerequisites

Before you begin with the procedure for installing Oracle Communications Network Analytics Data Director (OCNADD), make sure that the following requirements are met:

Caution:

Settings for the user, computer, applications, and character encoding may cause issues when copying and pasting commands or content from the PDF. The PDF reader version also affects the copy-paste functionality. It is recommended to verify the pasted content, especially when hyphens or other special characters are part of the copied content.

Software Requirements

The following software must be installed before installing Oracle Communications Network Analytics Data Director (OCNADD):

Table 2-1 Mandatory Software

Software Version
Kubernetes 1.22.x and 1.21.x
Helm 3.8.x
Docker/Podman 19.03.x/4.1.x

Note:

OCNADD 22.0.0 supports OCCNE 22.3.x.
To check the Oracle Communications Cloud Native Environment (OCCNE) version, run the following command:
echo $OCCNE_VERSION
To check the current Helm and Kubernetes versions installed in OCCNE, run the following commands:
kubectl version
helm version

Note:

Starting with OCCNE 1.8.0, podman is the preferred container platform instead of docker. For more information on installing and configuring podman, see the Oracle Communications Cloud Native Environment Installation Guide.

If you are installing OCNADD on TANZU, the following software must be installed:

Table 2-2 Mandatory Software

Software Version
Tanzu 1.4.1

To check the current TANZU version, run the following command:
tanzu version

Depending on the requirement, you may have to install additional software while deploying OCNADD. The list of additional software items, along with the supported versions and usage, is given in the following table:

Table 2-3 Additional Software

Software Version Required For
Prometheus-Operator 2.36.1 Metrics
Metallb 0.12.1 LoadBalancer
CNDBTier 22.3.x MySQL Database

Note:

This software is available by default if OCNADD is deployed in Oracle Communications Cloud Native Environment (OCCNE). If you are deploying OCNADD in any other environment, for instance TANZU, the above-mentioned software must be installed before installing OCNADD.
To check the installed software items, run the following command:
helm ls -A

Environment Setup Requirements

This section provides information on environment setup requirements for installing Oracle Communications Network Analytics Data Director (OCNADD).

Network Access

The Kubernetes cluster hosts must have network access to the following repositories:

  • Local docker image repository – It contains the OCNADD docker images.
    To check if the Kubernetes cluster hosts can access the local docker image repository, pull any image with an image-tag, using the following command:
    docker pull docker-repo/image-name:image-tag

    where,

    docker-repo is the IP address or hostname of the docker image repository.

    image-name is the docker image name.

    image-tag is the tag assigned to the docker image used for the OCNADD pod.

  • Local helm repository – It contains the OCNADD helm charts.
    To check if the Kubernetes cluster hosts can access the local helm repository, run the following command:
    helm repo update
  • Service FQDNs or IP addresses of the required OCNADD services, for instance Kafka brokers, must be discoverable and publicly exposed outside the cluster so that ingress messages to OCNADD can arrive from outside Kubernetes.

Client Machine Requirements

Note:

Run all the kubectl and helm commands in this guide from a client machine suited to the infrastructure and deployment, such as a virtual machine, server, or local desktop.

This section describes the requirements for the client machine, that is, the machine used to run the deployment commands.

The client machine must meet the following requirements:

  • Network access to the helm repository and docker image repository.
  • A configured helm repository.
  • Network access to the Kubernetes cluster.
  • The environment settings required to run the kubectl, podman, and docker commands, with privileges to create a namespace in the Kubernetes cluster.
  • The helm client installed with the push plugin. Configure the environment so that the helm install command deploys the software in the Kubernetes cluster.

Server or Space Requirements

For information on the server or space requirements for installing OCNADD on OCCNE, see Oracle Communications Cloud Native Environment Installation Guide.

Secret File Requirements

Caution:

Users should provide their own CAcert.pem and CAkey.pem for generating certificates for the OCNADD SSL or TLS support.

For HTTPS, the certificates must be created before creating the secret files for keys and MySQL database credentials.

For more information about creating certificates, see Configuring SSL or TLS Certificates.

ServiceAccount Requirement

ServiceAccount is mandatory and it can be specified using the ocnaddServiceAccountName parameter. If it is not specified in the ocnadd/values.yaml file, OCNADD creates a default service account at the time of installation. You can also create a service account as described in Creating Service Account, Role, and RoleBinding, and then update the ocnaddServiceAccountName accordingly in the ocnadd/values.yaml file.

cnDBTier Requirement

OCNADD supports cnDBTier 22.3.x in a CNE environment. cnDBTier must be up and running in a containerized Cloud Native Environment. For more information about the installation procedure, see Oracle Communications Cloud Native Core cnDBTier Installation Guide.

OCNADD Images

The following table lists Data Director microservices and their corresponding images:

Table 2-4 OCNADD images

Microservices Image Tag
OCNADD-Configuration ocnaddconfiguration 22.0.0
OCNADD-ConsumerAdapter ocnaddconsumeradapter 22.0.0
OCNADD-EgressGW ocnaddegressgateway 22.0.0
OCNADD-AGG ocnaddnrfaggregation, ocnaddscpaggregation 22.0.0
OCNADD-Alarm ocnaddalarm 22.0.0
OCNADD-HealthMonitoring ocnaddhealthmonitoring 22.0.0
OCNADD-Kafka kafka-broker-x 22.0.0
OCNADD-Admin ocnaddadminservice 22.0.0
OCNADD-Backendrouter ocnaddbackendrouter 22.0.0
OCNADD-GUI ocnaddgui 22.0.0

Note:

The service images are prefixed with the OCNADD release name.

Resource Requirements

This section describes the resource requirements to install and run Oracle Communications Network Analytics Data Director (OCNADD).

Table 2-5 OCNADD Resource Requirements

Service vCPU Req vCPU Limit Memory Req(Gi) Memory Limit (Gi) Min Replica Max Replica Partitions Topic Name
ocnaddconfiguration 1 2 1 2 1 1    
ocnaddalarm 1 2 1 2 1 1    
ocnaddadmin 1 1 1 1 1 1    
ocnaddhealthmonitoring 1 2 1 2 1 1    
ocnaddbackendrouter 1 2 1 2 1 1    
ocnaddscpaggregation 3 3 4 4 1 2 3 SCP
ocnaddnrfaggregation 3 3 4 4 1 2 3 NRF
ocnaddadapter 2.5 2.5 8 8 3 5 6 MAIN
ocnaddegressgateway 4 4 8 8 2 3    
ocnaddkafka 4 4 24 24 3 3    
zookeeper 1 2 2 2 3 3    

Note:

To deploy beyond 50000 Messages Per Second (MPS), Standard Profile (KAFKA) is recommended.

Ephemeral Storage Requirements

Table 2-6 Ephemeral Storage

Service Name Ephemeral Storage (min) in Mi Ephemeral Storage (max) in Mi
<app-name>-adapter 200 800
<app-name>-gw 400 800
ocnaddadminservice 100 200
ocnaddalarm 100 500
ocnaddhealthmonitoring 100 500
ocnaddscpaggregation 100 500
ocnaddnrfaggregation 100 500
ocnaddconfiguration 100 500

Installation Sequence

This section provides information on how to install Oracle Communications Network Analytics Data Director (OCNADD). The steps are divided into two categories: Pre-Installation Tasks and Installation Tasks.

It is recommended to follow the steps in the given sequence for preparing and installing OCNADD.

Pre-Installation Tasks

To install OCNADD, perform the preinstallation steps described in this section.

Note:

The kubectl commands may vary based on the platform used for deploying OCNADD. Users are recommended to replace kubectl with the environment-specific command-line tool to configure Kubernetes resources through the kube-api server. The instructions provided in this document are as per the OCCNE version of the kube-api server.
Creating OCNADD Namespace

This section explains how to verify or create a new namespace in the system.

To verify if the required namespace already exists in the system, run the following command:

kubectl get namespaces

If the namespace exists, you may continue with the next steps of installation.

If the required namespace is not available, create a namespace using the following command:

kubectl create namespace <required namespace>

Example

kubectl create namespace ocnadd-namespace

Naming Convention for Namespaces

While choosing the name of the namespace where you wish to deploy OCNADD, make sure the following requirements are met:

  • starts and ends with an alphanumeric character
  • contains 63 characters or less
  • contains only alphanumeric characters or '-'

Note:

It is recommended to avoid using the prefix kube- when creating a namespace, as this prefix is reserved for Kubernetes system namespaces.
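The naming rules above can be checked locally before running kubectl create namespace. The following is a minimal sketch using a hypothetical shell helper (not part of OCNADD); note that Kubernetes additionally requires lowercase characters:

```shell
# Hypothetical helper: validate a proposed namespace name against the
# rules above (lowercase alphanumeric or '-', 63 characters or less,
# starts and ends with an alphanumeric, no reserved kube- prefix).
valid_ns() {
  case "$1" in
    kube-*) return 1 ;;   # reserved for Kubernetes system namespaces
  esac
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$'
}

valid_ns "ocnadd-deploy"    && echo "ocnadd-deploy: ok"
valid_ns "ocnadd_namespace" || echo "ocnadd_namespace: invalid"
```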
Creating Service Account, Role, and RoleBinding

This section describes the procedure to create service account, role, and rolebinding.

Important:

The steps described in this section are optional, and you can skip them in any of the following scenarios:
  • If service accounts are created automatically at the time of OCNADD deployment.
  • If the global service account with the associated role and role-bindings is already configured or if you are using any internal procedure to create service accounts.

    If a service account with necessary rolebindings is already available, then update the ocnadd/values.yaml with the account details before initiating the installation procedure. In case of incorrect service account details, the installation fails.

Create Service Account

To create the global service account:

  1. Create an OCNADD resource file:
    vi <ocnadd resource file>

    Example:

    vi ocnadd-sampleserviceaccount-template.yaml
  2. Update the ocnadd-sampleserviceaccount-template.yaml with the release specific information:

    Note:

    Update <helm-release> and <namespace> with its respective OCNADD namespace and OCNADD helm release name.
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: <helm-release>-serviceaccount
      namespace: <namespace>
    
where, <helm-release> is the helm deployment name.

<namespace> is the name of the Kubernetes namespace of OCNADD. All the microservices are deployed in this Kubernetes namespace.

Define Permissions using Role

To define permissions using roles:
  1. Create an OCNADD resource file:
    vi <ocnadd sample role file>

    Example:

    vi ocnadd-samplerole-template.yaml
  2. Update the ocnadd-samplerole-template.yaml with the role specific information:
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: <helm-release>-role
    rules:
    - apiGroups: [""]
      resources:
      - pods
      - services
      - configmaps
      verbs: ["get", "list", "watch"]

Create RoleBindings

To bind the roles with the service account:
  1. Create an OCNADD rolebinding resource file:
    vi <ocnadd sample rolebinding file>

    Example:

    vi ocnadd-sample-rolebinding-template.yaml
  2. Update the ocnadd-sample-rolebinding-template.yaml with the role binding specific information:
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: <helm-release>-rolebinding
      namespace: <namespace>
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: <helm-release>-role
    subjects:
    - kind: ServiceAccount
      name: <helm-release>-serviceaccount
      namespace: <namespace>

Create resources

Run the following commands to create the resources:
kubectl -n <namespace> create -f ocnadd-sampleserviceaccount-template.yaml
kubectl -n <namespace> create -f ocnadd-samplerole-template.yaml
kubectl -n <namespace> create -f ocnadd-sample-rolebinding-template.yaml

Note:

Once the global service account is added, users must set global.ServiceAccountName in the ocnadd/values.yaml file; otherwise, the installation may fail when custom resource definitions (CRDs) are created or deleted.
Configuring OCNADD Database

OCNADD microservices use the MySQL database to store configuration and runtime data.

The database is managed by the helm pre-install hook. However, OCNADD requires the database administrator to create a privileged user in the MySQL database and provide the necessary permissions to access the databases. The MySQL user and databases must be created before installing OCNADD.

Note:

  • If the privileged user is already available, then update the credentials, such as username and password (base64 encoded) in ocnadd/values.yaml.
  • If the privileged user is not available, then create it using the following procedure. Once the user is created, update the credentials for the user in ocnadd/values.yaml.

Creating Database

To create database:
  1. Run the following command to log in to the MySQL pod.

    Note:

    Use the namespace in which the DBTier is deployed. For example, the occne-cndbtier namespace is used here. The default container name is mysqlndbcluster.
    $ kubectl -n occne-cndbtier exec -it ndbmysqld-0 -- bash
    To verify all the available containers in the pod, run:
    kubectl describe pod/ndbmysqld-0 -n occne-cndbtier
  2. Run the following command to log in to the MySQL server using the MySQL client:
    $ mysql -h 127.0.0.1 -uroot -p
    $ Enter password:
  3. To create a privileged user, run the following command:
    CREATE USER IF NOT EXISTS '<ocnadd privileged username>'@'%' IDENTIFIED BY '<ocnadd privileged user password>';

    Example:

    CREATE USER IF NOT EXISTS 'ocdd'@'%' IDENTIFIED WITH mysql_native_password BY 'ocdd';

    where:

    ocdd is the privileged username and ocdd is the password for the MySQL privileged user in this example.

  4. Run the following command to grant the necessary permissions to the privileged user:
    GRANT ALL PRIVILEGES ON *.* TO 'ocdd'@'%' WITH GRANT OPTION;
    FLUSH PRIVILEGES;
  5. Access the ocnadd-secret-hook.yaml file from the OCNADD helm files using the following path:
    helm-chart/templates/ocnadd-secret-hook.yaml
  6. Update the following parameters in the ocnadd-secret-hook.yaml file to change the user credentials:
    data:   
    MYSQL_USER: b2NkZA==    
    MYSQL_PASSWORD: b2NkZA==
    To generate the base64 encoded user and password from the terminal, run the following command:
    echo -n <string> | base64 -w 0
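For reference, the default value b2NkZA== shown above is the base64 encoding of the example user ocdd. A quick round trip to verify (GNU coreutils base64 assumed, matching the command above):

```shell
# Encode the example credential "ocdd"; -n avoids a trailing newline
# and -w 0 disables line wrapping, as in the command above.
enc=$(echo -n 'ocdd' | base64 -w 0)
echo "$enc"                      # prints b2NkZA==
# Decode to confirm the round trip
echo -n "$enc" | base64 -d       # prints ocdd
```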

Update Database Name

To update the database names in the Configuration Service, Alarm Service, and Health Monitoring services:

  1. Access the ocdd-db-resource.sql file from the helm chart using the following path:
    helm-charts/ocdd-db-resource.sql
  2. Update all occurrences of the database name in ocdd-db-resource.sql.

    Note:

    By default, the database names are configuration_schema, alarm_schema, and healthdb_schema for the respective services.
  3. Update the database IP and database name in ocnadd/values.yaml.
    database:
      db_ip: 10.20.30.40                       # change to the DB IP
      db_port: 3306                            # change if the DB uses a different port; the default is 3306
      configuration_db: configuration_schema   # change to the new DB name
      alarm_db: alarm_schema                   # change to the new DB name
      health_db: healthdb_schema               # change to the new DB name

Note:

During the OCNADD re-installation, all three application databases must be removed manually by running the drop database <dbname>; command.
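The DROP statements mentioned in the note above can be generated in one pass. The following sketch assumes the default schema names from this guide; review the output and then pipe it into the MySQL client to execute:

```shell
# Emit one DROP statement per default OCNADD application database.
# Replace the schema names if you renamed them in ocdd-db-resource.sql.
for db in configuration_schema alarm_schema healthdb_schema; do
  echo "DROP DATABASE IF EXISTS $db;"
done
```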
Configuring Secrets for Accessing OCNADD Database

The secret configuration for the OCNADD database is automatically managed during database creation by the helm pre-install procedure.

Configuring SSL or TLS Certificates
Generate Certificates using CACert and CAKey

OCNADD allows the users to provide the CACert and CAKey and generate certificates for all the services by running a predefined script.

To generate certificates using CACert and CAKey:
  1. Navigate to the ssl_certs/default_values/values file.
  2. In the values file, edit the global parameters, CN, and SAN for each service based on the requirement as follows:

    Note:

    Edit only the values for the global parameters and the RootCA common name, and add service blocks for all the services for which certificates need to be generated. The values file is available once the OCNADD helm files are extracted.
    Global Params:
    [global]
    countryName=<country>
    stateOrProvinceName=<state>
    localityName=<city>
    organizationName=<org_name>
    organizationalUnitName=<org_bu_name>
    defaultDays=<days to expiry>
     
    
    Root CA common name (e.g., *.namespace.svc.domainName)
    ##root_ca
    commonName=<rootca_common_name>
     
    
    Service common name for client and server and SAN. (Make sure to follow exact same format and provide an empty line at the end of each service block)
     
    [service-name-1]
    client.commonName=client.cn.name.svc1
    server.commonName=server.cn.name.svc1
    IP=127.0.0.1
    DNS.1=localhost
     
    [service-name-2]
    client.commonName=client.cn.name.svc2
    server.commonName=server.cn.name.svc2
    IP = 10.20.30.40
    DNS.1 = *.svc2.namespace.svc.domainName
     
    [service-name-3]
    client.commonName=client.cn.name.svc3
     
    [service-name-4]
    server.commonName=server.cn.name.svc4
    IP.1 = 10.20.30.41
    IP.2 = 127.0.0.1
    DNS.1 = *.svc4.namespace.svc.domainName
    DNS.2 = *.svc44.namespace.svc.domainName
     
    ##end
  3. Run the generate_certs.sh script with the following command:
    ./generate_certs.sh -cacert <path to>/CAcert.pem -cakey <path to>/CAkey.pem
  4. Select “n” when prompted to create a Certificate Authority (CA).
    Do you want to create Certificate Authority (CA)? n
  5. Copy the CA certificate pem file (as cacert.pem) to the “demoCA” folder and the CA certificate key file (as cakey.pem) to “demoCA/private” if the paths to cacert and cakey were not provided through flags.

    (The demoCA folder is created by the script in the same path where the script exists.)

    cp /path/to/CAcert.pem /path/to/generate_certs_script/demoCA/cacert.pem
    cp /path/to/CAkey.pem /path/to/generate_certs_script/demoCA/private/cakey.pem

    Note:

    Perform this step only if you have not provided the paths to cacert and cakey.
  6. Select “y” when prompted to use the existing CA to sign CSR for each service.
    Would you like to use existing CA to sign CSR for services? Y
  7. Enter the password for your CA key.
    password: <enter your ca key password>
  8. Select “y” when prompted to create CSR for each service.
    Create Certificate Signing Request (CSR) for each service? Y
  9. Select “y” when prompted for signing CSR for each service with CA Key.
    Would you like to sign CSR for each service with CA key? Y
  10. Select “y” to create secrets for each service in an existing namespace, or “n” to create secrets in a new namespace.
    If “n”
    a.  Would you like to choose any above namespace for creating secrets (y/n) n
    b.  Enter new Kubernetes Namespace to create: <name of new ns to create>
    If “y”
    c.  Would you like to choose any above namespace for creating secrets (y/n) y
    d.  Enter new Kubernetes Namespace to create: <name of existing ns>

    The certificates are generated for each service and are available in the demoCA/services folder. The secret is created in the namespace, which is specified during the secret creation process.

  11. Run the following command to check if the secrets are created in the specified namespace.
    kubectl get secret -n <namespace>
  12. Run the following command to describe any secret created by the script:
    kubectl describe secret <secret-name> -n <namespace>
Generate Certificate Signing Request (CSR)

Users can generate the certificate signing request for each of the services using the OCNADD script, and then use the generated CSRs to generate the certificates using their own certificate signing mechanism (an external CA server, HashiCorp Vault, or Venafi).

Perform the following procedure to generate the CSR:

  1. Navigate to the ssl_certs/default_values/values file.
  2. Edit global parameters, CN, and SAN for each service based on the requirement.

    Note:

    Edit only the values for the global parameters and the RootCA common name, and add service blocks for all the services for which certificates need to be generated.
    a.  Global Params:
    [global]
    countryName=<country>
    stateOrProvinceName=<state>
    localityName=<city>
    organizationName=<org_name>
    organizationalUnitName=<org_bu_name>
    defaultDays=<days to expiry>
     
    b.  Root CA common name (e.g., *.namespace.svc.domainName)
    ##root_ca
    commonName=<rootca_common_name>
     
    c.  Service common name for client and server and SAN. (Make sure to follow exact same format and provide an empty line at the end of each service block)
     
    [service-name-1]
    client.commonName=client.cn.name.svc1
    server.commonName=server.cn.name.svc1
    IP=127.0.0.1
    DNS.1=localhost
     
    [service-name-2]
    client.commonName=client.cn.name.svc2
    server.commonName=server.cn.name.svc2
    IP = 10.20.30.40
    DNS.1 = *.svc2.namespace.svc.domainName
     
    [service-name-3]
    client.commonName=client.cn.name.svc3
     
    [service-name-4]
    server.commonName=server.cn.name.svc4
    IP.1 = 10.20.30.41
    IP.2 = 127.0.0.1
    DNS.1 = *.svc4.namespace.svc.domainName
    DNS.2 = *.svc44.namespace.svc.domainName
     
    ##end
  3. Run the generate_certs.sh script with the --gencsr or -gc flag.
    ./generate_certs.sh --gencsr 
  4. Navigate to the CSRs and keys in demoCA/services (separate for client and server). The CSRs can be signed using your own certificate signing mechanism to generate the certificates.
  5. Make sure that the certificate and key names follow the format below, depending on whether the service acts as a client, a server, or both.
    For Client
    servicename-clientcert.pem and servicename-clientprivatekey.pem
    For Server
    servicename-servercert.pem and servicename-serverprivatekey.pem
  6. Copy the certificates into the respective demoCA/services folder after they are generated for each service by signing the CSRs with your own CA key.

    The certificates must be separate for client and server, as their CSRs are generated separately.

  7. Run generate_certs.sh with the cacert path and --gensecret or -gs to generate secrets.
    ./generate_certs.sh -cacert /path/to/cacert.pem --gensecret
  8. Enter "y" to continue generating secrets.
    Would you like to continue to generate secrets? (y/n) y
  9. Select “y” to create secrets for each service in an existing namespace, or “n” to create secrets in a new namespace.
    If “n”
    >    Would you like to choose any above namespace for creating secrets (y/n) n
    >    Enter new Kubernetes Namespace to create: <name of new ns to create>
    If “y”
    >    Would you like to choose any above namespace for creating secrets (y/n) y
    >    Enter new Kubernetes Namespace to create: <name of existing ns>

    The secret is created in the namespace, which is specified during the secret creation process.

  10. Run the following command to check if the secrets are created in the specified namespace:
    kubectl get secret -n <namespace> 
  11. Run the following command to describe any secret created by the script:
    kubectl describe secret <secret-name> -n <namespace> 
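The file naming convention from step 5 can be sanity-checked before copying the certificates back. The helper below and the ocnaddalarm example name are illustrative assumptions, not part of the OCNADD scripts:

```shell
# Check that a certificate or key file name follows the
# servicename-{client|server}{cert|privatekey}.pem convention.
matches_convention() {
  case "$1" in
    *-clientcert.pem|*-clientprivatekey.pem) return 0 ;;
    *-servercert.pem|*-serverprivatekey.pem) return 0 ;;
    *) return 1 ;;
  esac
}

matches_convention "ocnaddalarm-servercert.pem" && echo "ok"
matches_convention "ocnaddalarm-cert.pem" || echo "does not match convention"
```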
Generate Certificates and Private Keys

Users can generate the certificates and private keys for all the required services, and then create Kubernetes secrets without using the OCNADD script.

Perform the following procedure to generate the certificates and private keys:

  1. Run the openssl command to generate CSR for each service (separate for client and server if required).
    1. Run the following command to generate a private key (this also creates a self-signed certificate):
      openssl req -x509 -nodes -sha256 -days 365 -newkey rsa:2048 -keyout rsa_private_key -out rsa_certificate.crt
      
    2. Run the following command to convert the private key to PEM format:
      openssl rsa -in rsa_private_key -outform PEM -out rsa_private_key_pkcs1.pem
    3. Update CN, SAN, and global parameters for each service in the openssl.cnf file.
    4. Run the following command to generate CSR for each service using private key:
      openssl req -new -key rsa_private_key -out service_name.csr -config ssl.conf
  2. Sign each service CSR with Root CA private key to generate certificates.
  3. Generate secrets using each service's certificates and keys.
    1. Run the following command to create truststore and keystore password files:
      echo "<password>" >> trust.txt
      echo "<password>" >> key.txt
    2. Run the following command to create secrets using client and server certificates and cacert:
      kubectl create secret generic <service_name>-secret --from-file=path/to/cert/<service_name>-clientprivatekey.pem --from-file=path/to/cert/<service_name>-clientcert.pem --from-file=path/to/cacert/cacert.pem  --from-file=path/to/cert/<service_name>-serverprivatekey.pem --from-file=path/to/cert/<service_name>-servercert.pem  --from-file=trust.txt --from-file=key.txt --from-literal=javakeystorepass=changeit -n <namespace>

    Note:

    Repeat Steps 1 and 2 for all services (separately for client and server).
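As an end-to-end illustration of steps 1 and 2 above, the following sketch creates a throwaway root CA, generates a service key and CSR, and signs the CSR with the CA key. The subject names and file names are assumptions; in production, sign with your own CAcert.pem and CAkey.pem instead of a throwaway CA:

```shell
# Throwaway root CA (illustration only; use your own CA in production)
openssl req -x509 -nodes -sha256 -days 365 -newkey rsa:2048 \
  -subj "/CN=rootca.example" -keyout cakey.pem -out cacert.pem
# Service private key and CSR (the CN is an example value)
openssl req -new -nodes -newkey rsa:2048 \
  -subj "/CN=server.cn.name.svc1" \
  -keyout service_name-serverprivatekey.pem -out service_name.csr
# Sign the CSR with the root CA key to produce the server certificate
openssl x509 -req -in service_name.csr -CA cacert.pem -CAkey cakey.pem \
  -CAcreateserial -days 365 -sha256 -out service_name-servercert.pem
# Verify the certificate chains back to the CA
openssl verify -CAfile cacert.pem service_name-servercert.pem
```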

Installation Tasks

This section describes the tasks that the user must perform to install OCNADD.

Note:

Before starting the installation tasks, ensure that the Prerequisites and Pre-Installation Tasks are completed.
Downloading OCNADD Package

To download the Oracle Communications Network Analytics Data Director (OCNADD) package from MOS, perform the following steps:

  1. Log in to My Oracle Support with your credentials.
  2. Select the Patches and Updates tab to locate the patch.
  3. In the Patch Search window, click Product or Family (Advanced).
  4. Enter "Oracle Communications Cloud Native Core - 5G" in the Product field, and select "Oracle Communications Cloud Native Core Network Analytics Data Director 22.0.0.0.0" from the Release drop-down list.
  5. Click Search. The Patch Advanced Search Results displays a list of releases.
  6. Select the required patch from the search results. The Patch Details window opens.
  7. Click Download. The File Download window appears.
  8. Click the <p********_<release_number>_Tekelec>.zip file to download the OCNADD package file.
  9. Extract the zip file to obtain the network function patch on the system where the network function must be installed.
Pushing the Images to Customer Docker Registry

Docker Images

Important:

kubectl commands might vary based on the platform deployment. Replace kubectl with Kubernetes environment-specific command line tool to configure Kubernetes resources through kube-api server. The instructions provided in this document are as per the Oracle Communications Cloud Native Environment (OCCNE) version of kube-api server.

The Oracle Communications Network Analytics Data Director (OCNADD) deployment package includes ready-to-use docker images and helm charts to help orchestrate containers in Kubernetes. Communication between the pods of OCNADD services is preconfigured in the helm charts.

Table 2-7 Docker Images for OCNADD

Service Name Docker Image Name Image Tag
OCNADD-Configuration ocnaddconfiguration 22.0.0
OCNADD-ConsumerAdapter <app-name>-adapter 22.0.0
OCNADD-EgressGW <app-name>-gw 22.0.0
OCNADD-AGG ocnaddnrfaggregation, ocnaddscpaggregation 22.0.0
OCNADD-Alarm ocnaddalarm 22.0.0
OCNADD-HealthMonitoring ocnaddhealthmonitoring 22.0.0
OCNADD-Kafka kafka-broker-x 22.0.0
OCNADD-Admin ocnaddadminservice 22.0.0
OCNADD-UIRouter ocnaddbackendrouter 22.0.0
OCNADD-GUI ocnaddgui 22.0.0

Note:

The service image names are prefixed with the OCNADD release name.

Pushing Docker Images

To push the images to the customer docker registry, perform the following steps:

  1. Untar the OCNADD package file to retrieve the OCNADD docker image tar file:
    tar -xvzf ocnadd-pkg-22.0.0.0.0.tgz
    The directory consists of the following:
    • OCNADD Docker Images File:
      ocnadd-images-22.0.0.tar
    • Helm File:
      ocnadd-22.0.0.tgz
    • Readme txt File:
      Readme.txt
  2. Load the ocnadd-images-22.0.0.tar file into the docker system:
    docker load --input /IMAGE_PATH/ocnadd-images-22.0.0.tar
    For CNE 1.8.0 and later, use the following command:
    podman load --input /IMAGE_PATH/ocnadd-images-22.0.0.tar
  3. To verify if the image is loaded correctly, run the following command:
    docker images
  4. Create a new tag for each imported image and push the image to the customer docker registry by entering the following command:
    docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    docker push <docker-repo>/<image-name>:<image-tag>

    Note:

    It is recommended to configure the docker certificate before running the push command to access the customer registry over HTTPS; otherwise, the docker push command may fail.
  5. Push the helm charts to the helm repository by running the following command:
    helm push <chart_name>.tgz <helm_repo>
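Steps 4 and 5 can be scripted across all images. The following is a dry-run sketch: the registry host and the image list are assumptions to be replaced with your registry and the images from Table 2-7; remove the echo prefix to execute the commands:

```shell
# Placeholder registry and a subset of the OCNADD images; substitute
# your own values. The echo prefix makes this a dry run that only
# prints the tag and push commands.
DOCKER_REPO="customer-registry.example.com:5000"
IMAGES="ocnaddconfiguration:22.0.0 ocnaddalarm:22.0.0 ocnaddhealthmonitoring:22.0.0"

for img in $IMAGES; do
  echo docker tag "$img" "$DOCKER_REPO/$img"
  echo docker push "$DOCKER_REPO/$img"
done
```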
Installing OCNADD Package

This section describes how to install the Oracle Communications Network Analytics Data Director (OCNADD) package.

To install the OCNADD package, perform the following steps:

Create OCNADD Namespace

Create the OCNADD namespace, if not already created, using the following command:
kubectl create ns <dd-namespace-name>
For more information, see Creating OCNADD Namespace.

Generate Certificates

  1. Run the following commands to generate certificates:
     
    Change directory to <chart_path>/ssl_certs, and update the file permissions as shown below:
     
    $ chmod 775 generate_certs.sh
     
    $ chmod 775 generate_secrets.sh     
     
    (Optional) Clean up the EOL encoding if the files were copied from Windows.
      
    sed -i -e 's/\r$//' default_values/values
    sed -i -e 's/\r$//' template/ca_openssl.cnf
    sed -i -e 's/\r$//' template/services_server_openssl.cnf
    sed -i -e 's/\r$//' template/services_client_openssl.cnf
    sed -i -e 's/\r$//' generate_certs.sh
    sed -i -e 's/\r$//' generate_secrets.sh

    Note:

    Make sure that the changes made in default_values reflect the namespace and cluster. For more information on the certificate generation process, see Configuring SSL or TLS Certificates.
  2. Perform the steps defined in Configuring SSL or TLS Certificates section to complete the certificate generation.

Update Database Parameters

To update the database parameters, see Configuring OCNADD Database.

Update values.yaml

Update the values.yaml (depending on the type of deployment model) with the required parameters. For more information on how to access and update the values.yaml files, see Customizing OCNADD.

Perform Kafka Pre-Install Configuration

  1. Clean the script file.

    Script files might get encoded in the Windows format if they are pushed from the Windows Git client or opened with editors that use Windows end-of-line encoding. Run the following commands:

    chmod 755 <chartpath>/charts/ocnaddkafka/scripts/start-service.sh
    dos2unix <chartpath>/charts/ocnaddkafka/scripts/start-service.sh

    If dos2unix is not available in your system, run the following command:

    sed -i 's/\r$//' <chartpath>/charts/ocnaddkafka/scripts/start-service.sh
  2. Select the required brokers for deployment.
    By default, the deployment comes with three brokers. If the addition or removal of brokers is required, perform the following steps:
    1. To add a broker, run the following command:
      cp <chartpath>/charts/ocnaddkafka/default/ocnaddkafkaBrokerX.yaml <chartpath>/charts/ocnaddkafka/templates

      Example:

      cp <chartpath>/charts/ocnaddkafka/default/ocnaddkafkaBroker4.yaml <chartpath>/charts/ocnaddkafka/templates
    2. To remove a broker, run the following command:
      rm <chartpath>/charts/ocnaddkafka/templates/ocnaddkafkaBrokerX.yaml

      Example:

      rm <chartpath>/charts/ocnaddkafka/templates/ocnaddkafkaBroker3.yaml
  3. To change the profiles of the brokers, edit the respective values (CPU, memory, storage) in values.yaml.
    Location: <chartpath>/charts/ocnaddkafka/values.yaml
    If any formatting or indentation issues occur while editing, refer to the files in <chartpath>/charts/ocnaddkafka/default.
  4. To create a secret for the Brokers-Zookeeper connection, run the following command:
    kubectl create secret generic jaas-secret --from-literal=jaas_password=ocnadd -n $nameSpace

    Example:

    kubectl create secret generic jaas-secret --from-literal=jaas_password=ocnadd -n ocnadd-deploy
  5. To create the configmap, run the following command:
    kubectl create configmap allfiles-configmap  --from-file=<filepath> -n <namespace>

    The name used for the configmap is allfiles-configmap. The script files are placed under <chartpath>/charts/ocnaddkafka/scripts/.

    Example:

    kubectl create configmap allfiles-configmap --from-file=<chartpath>/charts/ocnaddkafka/scripts/ -n <namespace>
  6. To configure storageClass, update the storageClass in the following files with the respective storage class name of the TANZU platform, for example, zfs-storage-policy.

    Note:

    This step is specific to the TANZU platform. Skip this step if you are installing OCNADD on OCCNE. For OCCNE, the default storageClass is standard.
     <chartpath>/charts/ocnaddkafka/templates/ocnadd-zookeeper.yaml
     <chartpath>/charts/ocnaddkafka/templates/ocnaddkafkaBrokerX.yaml    # where X stands for Broker1, Broker2, Broker3, and so on
     <chartpath>/charts/ocnaddkafka/default/ocnaddkafkaBrokerX.yaml      # where X stands for Broker1, Broker2, Broker3, and so on

Configure OCNADD Backup Cronjob

  1. Configure the mysqlNameSpace and storageClass details in <chartpath>/values.yaml.
    cluster:
      secret:
        name: db-secret
      mysqlNameSpace:
        name: occne-cndbtierone    #---> the namespace in which the cnDBTier is deployed
      mysqlPod: ndbmysqld-0        #---> the pod can be ndbmysqld-0 or ndbmysqld-1, based on the cnDBTier deployment
      storageClass: standard       #---> update with the respective storage class name when deploying on the TANZU platform, for example, "zfs-storage-policy"
  2. Configure the following parameters in <chartpath>/charts/ocnaddbackuprestore/values.yaml.

    The value of BACKUP_DATABASES can be set to ALL, which includes healthdb_schema, configuration_schema, and alarm_schema, or to individual database names. By default, the value is ALL. PURGE_DAYS sets the backup retention period; the default value is 7 days.

    Example:

      env:
        BACKUP_STORAGE: 20Gi
        BACKUP_CRONEXPRESSION: "0 8 * * *"
        BACKUP_DATABASES: ALL
        PURGE_DAYS: 7
        STORAGE_CLASS: standard       #---> must be the same as cluster.storageClass

    Once the deployment is successful, the cronjob is spawned based on the BACKUP_CRONEXPRESSION value set in <chartpath>/charts/ocnaddbackuprestore/values.yaml.

    For more information on backup and restore, refer to Oracle Communications Network Analytics Data Director Backup and Disaster Recovery Guide.
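
    The BACKUP_CRONEXPRESSION value uses standard cron syntax. As a reading aid only (not part of the product), the five fields of the default schedule can be split out in the shell:

    ```shell
    CRON="0 8 * * *"

    set -f            # disable globbing so the '*' fields stay literal
    set -- $CRON      # split the expression into its five fields
    set +f

    MINUTE=$1; HOUR=$2; DAY_OF_MONTH=$3; MONTH=$4; DAY_OF_WEEK=$5
    echo "minute=$MINUTE hour=$HOUR day_of_month=$DAY_OF_MONTH month=$MONTH day_of_week=$DAY_OF_WEEK"
    # The default schedule therefore runs the backup daily at 08:00
    ```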

Install Helm Chart

Run one of the following helm install commands:

  • In the case of Helm 2:
    helm install <helm-repo> --name <deployment_name> --namespace <namespace_name> --version <helm_version>
  • In the case of Helm 3, when a Helm repository is used:
    helm3 install <release_name> --namespace <namespace> <helm-repo>/<chart_name> --version <helm_version>
  • In the case of Helm 3, when the charts are extracted locally:
    helm install <release_name> --namespace <namespace> <chartpath>

where:

chartpath is the location of the Helm chart extracted from the ocnadd-22.0.0.tgz file.

release_name is the release name used by the helm command.

Note:

The release_name must not exceed the 63-character limit.

namespace is the deployment namespace used by the helm command.

Example:
helm install ocnadd-22.0.0 --namespace ocnadd-deploy ocnadd
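
A quick check of the 63-character release name limit can be scripted; a sketch using the release name from the example above:

```shell
# Release name to validate; replace with your own
RELEASE_NAME="ocnadd-22.0.0"

# Kubernetes limits object names to 63 characters
if [ "${#RELEASE_NAME}" -le 63 ]; then
  echo "release name OK (${#RELEASE_NAME} characters)"
else
  echo "release name too long (${#RELEASE_NAME} characters)"
fi
```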

Caution:

Do not exit from the helm install command manually. After you run the helm install command, it takes some time to install all the services. During this time, do not press Ctrl+C to exit the command, as doing so can lead to anomalous behavior.

Note:

You can verify the installation while running the install command by entering this command on a separate terminal:
watch kubectl get jobs,pods -n <release_namespace>

Perform Kafka Post-Install Configuration

  1. Update the EXTERNAL-IP IPs in <chartpath>/charts/ocnaddkafka/values.yaml:
    1. Run the following command to get the EXTERNAL-IP details for the Kafka brokers:
      kubectl get svc -n <namespace>

      Sample output:

      NAME            TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)
      kafka-broker1   LoadBalancer   10.20.30.40   10.xx.xx.xx   9092:30946/TCP,9093:31912/TCP,9094:30663/TCP
    2. Update the kafkaBrokerX.advertiseListeners1 parameter of each broker with the respective EXTERNAL-IP details captured in the previous step:
      advertiseListeners1: PLAINTEXT://kafka-broker1:9092,SSL://kafka-broker1:9093,SASL_SSL://kafka-broker1:9094

      Example

      For EXTERNAL-IP=10.xx.xx.xx of Kafka broker1

      advertiseListeners1: PLAINTEXT://10.xx.xx.xx:9092,SSL://10.xx.xx.xx:9093,SASL_SSL://10.xx.xx.xx:9094

      Note:

      If only the EXTERNAL-IP for the SASL_SSL port (9094) is updated, access through the other ports will be blocked.
  2. Upgrade the helm chart with the following command:
    helm upgrade <release-name> -n <namespace> <chartpath>

    Note:

    Once the charts are upgraded, the brokers are expected to restart. If the required broker images are not available locally, the Kafka brokers might take a few minutes to start while the images are pulled.
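
The per-broker listener update in step 1 can also be scripted with sed; a minimal sketch on a throwaway copy of the listener line (the IP and file path below are placeholders, not values from a real deployment):

```shell
# Placeholder EXTERNAL-IP captured from 'kubectl get svc'
EXTERNAL_IP="10.1.2.3"

# Throwaway copy of the broker1 listener line (illustrative; the real line
# lives in <chartpath>/charts/ocnaddkafka/values.yaml)
echo 'advertiseListeners1: PLAINTEXT://kafka-broker1:9092,SSL://kafka-broker1:9093,SASL_SSL://kafka-broker1:9094' > /tmp/listeners

# Replace the service hostname with the external IP in all three listeners
sed -i "s/kafka-broker1/${EXTERNAL_IP}/g" /tmp/listeners
cat /tmp/listeners
```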
Verifying OCNADD Installation

This section describes how to verify if Oracle Communications Network Analytics Data Director (OCNADD) is installed successfully.

To check the status of OCNADD deployment, perform the following task:
  1. Run one of the following Helm commands to check the release status:
    helm status <helm-release> -n <namespace>
    helm list -n <namespace>
    Example:
    helm list -n ocnadd

    The system displays the status as deployed if the deployment is successful.

  2. Run the following command to check whether all the services are deployed and active:
    kubectl -n <namespace_name> get services

    Run the following command to check whether all the pods are up and active:

    kubectl -n <namespace_name> get pods

    Example:

    kubectl -n ocnadd get pods
    kubectl -n ocnadd get services

Note:

  • All microservices status must be Running and Ready.
  • Take a backup of the following files that are required during disaster recovery:
    • Updated Helm charts
    • Secrets, certificates, and keys that are used during the installation
  • If the installation is not successful or you do not see the status as Running for all the pods, perform the troubleshooting steps. For more information, refer to Oracle Communications Network Analytics Data Director Troubleshooting Guide.
Creating OCNADD Topics

Create the topics (MAIN, SCP, and NRF) using the admin service before starting data ingestion. For more details on topics and partitions, see the "Kafka PVC Storage Requirements" section of Oracle Communications Network Analytics Data Director Benchmarking Guide.

To create a topic, connect to any worker node and send a POST request with the following payload to the API endpoint described below.

API Endpoint: <ClusterIP>:<Admin Port>/ocnadd-admin-svc/v1/topic
{
    "topicName":"<topicname>",
    "partitions":"3",
    "replicationFactor":"2",
    "retentionMs":"120000"
}

Note:

  • If worker node access is not available, the adminservice Service-Type can be changed to LoadBalancer or NodePort in the admin service values.yaml (a helm upgrade is required for any such change).
  • For the LoadBalancer service, ensure that the admin port is not blocked in the cluster.
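
As a sketch, the request above could be issued with curl. The ClusterIP, admin port, and topic name below are placeholders, and the request succeeds only against a live admin service:

```shell
# Placeholder endpoint -- replace with the actual ClusterIP and admin port
ADMIN_ENDPOINT="10.20.30.40:9181"

# Topic-creation payload, as described above
PAYLOAD='{
    "topicName": "MAIN",
    "partitions": "3",
    "replicationFactor": "2",
    "retentionMs": "120000"
}'

# POST the payload to the admin service topic endpoint; '|| true' keeps the
# sketch from aborting when the placeholder endpoint is unreachable
curl -X POST "http://${ADMIN_ENDPOINT}/ocnadd-admin-svc/v1/topic" \
     -H "Content-Type: application/json" \
     --connect-timeout 5 \
     -d "$PAYLOAD" || true
```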
Installing OCNADD GUI
This section describes how to install Oracle Communications Network Analytics Data Director (OCNADD) GUI using the following steps:

Install OCNADD GUI

Perform the following steps to install OCNADD GUI:
  1. Extract the helm charts from ocnaddgui-pkg-Releasenumber.tgz provided inside the ocnadd-pkg-22.0.0.0.0.tgz package.
  2. Update helm-charts/values.yaml with the namespace and clusterName in which OCNADD is installed.
  3. Update the ocnaddgui image repo path REPO_HOST_PORT in helm-charts/values.yaml.
  4. Run the following command to install the OCNADD GUI:
    helm install <chart-name> helm-charts/ -n <namespace>
    Example:
    helm install ocnadd-gui helm-charts/ -n ocnadd-deploy
  5. Run the following command to verify the OCNADD GUI installation:
    kubectl get all -n <namespace>
    Example:
    kubectl get all -n ocnadd

Note:

At present, the OCNADD GUI service supports single cluster deployments only.

Configure OCNADD GUI in CNCC

Prerequisite: To configure OCNADD GUI in CNC Console, you must have the CNCC installed. For information on how to install CNCC, refer to Oracle Communications Cloud Native Core Console Installation and Upgrade Guide.

Before installing CNCC, ensure that the instances parameters are updated with the following details in the occncc_custom_values.yaml file:


  instances:
    - id: Cluster1-dd-instance1
      type: DD-UI
      owner: Cluster1
      ip: 10.xx.xx.xx    #--> give the cluster/node IP
      port: 31456        #--> give the node port of ocnaddgui
      apiPrefix: /occne-12ipcluster/ocnadd
    - id: Cluster1-dd-instance1
      type: DD-API
      owner: Cluster1 
      ip: 10.xx.xx.xx   #--> give the cluster/node IP
      port: 32406       #--> give the node port of ocnaddbackendrouter
      apiPrefix: /occne-12ipcluster/ocnaddapi

# Applicable only for Manager and Agent core. Used for Multi-Instance-Multi-Cluster Configuration Validation
  validationHook:
    enabled: false   #--> add this enabled: false to validationHook

#--> do these changes under section: cncc-iam attributes
# If https is disabled, this Port would be HTTPS/1.0 Port (secured SSL)
    publicHttpSignalingPort: 30085  #--> CNC console nodeport

#--> add these lines under cncc-iam attributes
# If Static node port needs to be set, then set staticNodePortEnabled flag to true and provide value for staticNodePort
    # Else random node port will be assigned by K8
    staticNodePortEnabled: true
    staticHttpNodePort: 30085  #--> CNC console nodeport
    staticHttpsNodePort: 30053

#--> do these changes under section : manager cncc core attributes
#--> add these lines under mcncc-core attributes

# If Static node port needs to be set, then set staticNodePortEnabled flag to true and provide value for staticNodePort
    # Else random node port will be assigned by K8
    staticNodePortEnabled: true
    staticHttpNodePort: 30075
    staticHttpsNodePort: 30043

#--> do these changes under section : agent cncc core attributes
#--> add these lines under acncc-core attributes
# If Static node port needs to be set, then set staticNodePortEnabled flag to true and provide value for staticNodePort
    # Else random node port will be assigned by K8
    staticNodePortEnabled: true
    staticHttpNodePort: 30076
    staticHttpsNodePort: 30044
If CNCC is already installed, upgrade it with the following parameters updated in the occncc_custom_values.yaml file:
instances:
  - id: Cluster1-dd-instance1
    type: DD-UI
    owner: Cluster1
    ip: 10.xx.xx.xx    #--> update the cluster/node IP
    port: 31456        #--> ocnaddgui port
    apiPrefix: /<clustername>/<namespace>/ocnadd   # the clustername and namespace where the OCNADD GUI is deployed
  - id: Cluster1-dd-instance1
    type: DD-API
    owner: Cluster1 
    ip: 10.xx.xx.xx   #--> update the cluster/node IP
    port: 32406       #--> ocnaddbackendrouter port
    apiPrefix: /<clustername>/<namespace>/ocnadd   # the clustername and namespace where the OCNADD GUI is deployed

Example:

If OCNADD GUI is deployed in the occne-ocdd cluster and the ocnadd-deploy namespace, then the prefix in CNCC occncc_custom_values.yaml will be as follows:
DD-UI apiPrefix: 
/occne-ocdd/ocnadd-deploy/ocnadd 
DD-API apiPrefix: 
/occne-ocdd/ocnadd-deploy/ocnaddapi

Access OCNADD GUI

To access OCNADD GUI, follow the procedure mentioned in the "Accessing CNC Console" section of Oracle Communications Cloud Native Core Console Installation and Upgrade Guide.