2 Installing OCNADD

This chapter provides information about installing Oracle Communications Network Analytics Data Director (OCNADD) on the supported platforms.

The OCNADD installation is supported over the following platforms:
  • Oracle Communications Cloud Native Core, Cloud Native Environment (CNE)
  • VMware Tanzu Application Platform (TANZU)
  • Oracle Cloud Infrastructure (OCI)

Note:

This document describes the OCNADD installation on CNE. However, the procedure for installation on OCI and TANZU is similar to the installation on CNE. Any steps unique to OCI or TANZU platform are mentioned explicitly in the document.

2.1 Prerequisites

Before installing and configuring OCNADD, make sure that the following requirements are met:

2.1.1 Software Requirements

This section lists the software that must be installed before installing OCNADD:

Table 2-1 Mandatory Software

Software Version
Kubernetes 1.31.x, 1.30.x, 1.29.x
Helm 3.15.2
Docker/Podman 4.6.1
OKE (on OCI) 1.27.x

Note:

  • OCNADD 25.1.100 supports CNE 25.1.1xx, 24.3.x and 24.2.x.
To check the Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) version, run the following command:
echo $OCCNE_VERSION
To check the current Helm and Kubernetes versions installed in CNE, run the following commands:
kubectl version
helm version
To check the current kubectl-hns version installed in CNE, run the following command:
kubectl hns version

For more information about HNS installation, see Kubectl HNS Installation section.

Note:

Starting with CNE 1.8.0, Podman is the preferred container platform instead of Docker. For more information on installing and configuring Podman, see the Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

If you are installing OCNADD on TANZU, the following software must be installed:

Table 2-2 Mandatory Software

Software Version
Tanzu 1.4.1
To check the current TANZU version, run the following command:
tanzu version

Note:

Tanzu was supported in release 22.4.0. Release 25.1.100 has not been tested on Tanzu.

Depending on the requirement, you may have to install additional software while deploying OCNADD. The list of additional software items, along with the supported versions and usage, is given in the following table:

Table 2-3 Additional Software

Software Version Required For
Prometheus-Operator 2.52.0 Metrics
Metallb 0.14.4 LoadBalancer
cnDBTier 25.1.1xx, 24.3.x, 24.2.x MySQL Database
hnc-controller-manager 1.1.0 To manage HNS; required only for a centralized deployment with more than one worker group.
kubectl-hns 1.1.0 To manage HNS; required only for a centralized deployment with more than one worker group.

Note:

  • Some of the software items are available by default when OCNADD is deployed in Oracle Communications Cloud Native Core, Cloud Native Environment (CNE).
  • Install any additional software items that are not available by default with CNE.
  • If you are deploying OCNADD in any other environment, for instance, TANZU, then all the above-mentioned software must be installed before installing OCNADD.
  • On OCI, the Prometheus-Operator is not required. The metrics and alerts will be managed using OCI monitoring and Alarm services.
  • If a centralized deployment with only the default worker group is planned, then no HNS is required, and the corresponding software installation can be skipped.
To check the installed software items, run the following command:
helm ls -A

2.1.2 Environment Setup Requirements

This section provides information on environment setup requirements for installing Oracle Communications Network Analytics Data Director (OCNADD).

Network Requirements

The Data Director services, such as Kafka and Redundancy Agent, require external access. These services are created as load balancer services, and the service FQDNs should be used for communication with them. Additionally, the service FQDNs must be configured in the DNS server.
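After deployment, you can confirm that external IPs have been assigned to these load balancer services and that the corresponding FQDNs resolve correctly. The commands below are a minimal sketch; the namespace and FQDN are placeholders, not values delivered with the package:

    # List the LoadBalancer services and their external IPs
    kubectl get svc -n <worker-group-namespace> | grep -i loadbalancer
    # Verify that the service FQDN configured in the DNS server resolves to the external IP
    nslookup <service-fqdn>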

CNLB Network and NADs for Data Director

Egress NADs
  1. Customers must know or create Egress NADs for their third-party feed endpoints before the CNLB CNE cluster installation. The Egress NADs must be defined in the cnlb.ini file of OCCNE for CNLB support.
  2. The Egress NADs are required for the following traffic segregation scenarios:
    1. Separate Egress NAD per third-party destination endpoint per third-party feed: Each destination endpoint of the consumer adapter will have its own egress network via a separate Egress NAD managed by CNLB.
    2. Separate Egress NAD per third-party feed: Each consumer adapter feed will have its own egress network via a separate Egress NAD managed by CNLB.
    3. Separate Egress NAD per Data Director: All consumer adapter feeds will have a single separate network via a separate Egress NAD managed by CNLB.
Ingress NADs
  1. Customers must know or create the required CNLB IPs (external IPs) and Ingress NADs for the Data Director Ingress Adapter service.
  2. Based on the ingress traffic segregation requirements for non-Oracle NFs, the necessary CNLB IP (external IPs) and Ingress NADs need to be configured for the Ingress Adapter in advance. The Ingress NADs must be defined in the cnlb.ini file of OCCNE for CNLB support.
  3. Each Ingress Adapter service instance should have an external IP and a corresponding Ingress NAD created and managed by the CNLB.
  4. Customers must know or create an Ingress NAD for the redundancy agent's external access and IP.
  5. The required CNLB external IP and corresponding Ingress NAD must be configured in the cnlb.ini file of OCCNE for CNLB support.

Note:

  • For more information about the feature, see "Enabling or Disabling Traffic Segregation Through CNLB in OCNADD" in the Oracle Communications Network Analytics Data Director User Guide.
  • For more information on CNLB and NADs, see the Oracle Communications Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

Environment Setup on OCCNE

Network Access

The Kubernetes cluster hosts must have network access to the following repositories:

  1. Local container image repository: It contains the OCNADD container images. To check if the Kubernetes cluster hosts can access the local container image repository, pull any image with an image-tag using the following command:
    podman pull docker-repo/image-name:image-tag

    where,

    • docker-repo is the IP address or hostname of the container image repository.
    • image-name is the container image name.
    • image-tag is the tag assigned to the container image used for the OCNADD pod.
  2. Local Helm repository: It contains the OCNADD Helm charts. To check if the Kubernetes cluster hosts can access the local Helm repository, run the following command:
    helm repo update
  3. Service FQDNs or IP addresses of the required OCNADD services, for instance, Kafka Brokers, must be discoverable from outside the cluster. This information should be publicly exposed so that ingress messages to OCNADD can come from outside of Kubernetes.

Environment Setup on OCI

OCNADD can be deployed in OCI. While deploying OCNADD on OCI, the user must use the Operator instance/VM instead of Bastion Host.

For OCI infrastructure, see Oracle Communications Cloud Native Core, OCI Deployment Guide and Oracle Communications Cloud Native Core, OCI Adaptor User Guide documents.

After completing the OCI infrastructure setup requirements, proceed to the next section.

Client Machine Requirements

Note:

Run all the kubectl and helm commands in this guide on a system appropriate to the infrastructure and deployment. This system could be a client machine, such as a virtual machine, server, or local desktop.

This section describes the requirements for the client machine, that is, the machine used to run the deployment commands.

The client machine must meet the following requirements:

  • Network access to the Helm repository and Docker image repository
  • A configured Helm repository
  • Network access to the Kubernetes cluster
  • The environment settings required to run the kubectl, podman, and docker commands, including privileges to create namespaces in the Kubernetes cluster
  • The Helm client installed with the push plugin. Configure the environment so that the helm install command deploys the software in the Kubernetes cluster.
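The following commands offer a quick way to confirm these prerequisites from the client machine. This is a minimal sketch using standard Helm, kubectl, and Podman commands; adjust it to the container runtime in use:

    # Confirm the Helm and kubectl clients are installed and can reach the cluster
    helm version
    kubectl cluster-info
    # Confirm the Helm repository is configured
    helm repo list
    # Confirm the container runtime is available
    podman --version
    # Confirm the account has privileges to create namespaces
    kubectl auth can-i create namespaces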

Server or Space Requirements

For information on the server or space requirements for installing OCNADD, see the following documents:
  • Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide
  • Oracle Communications Network Analytics Data Director Benchmarking Guide
  • Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide

cnDBTier Requirement

Note:

Obtain the values of the cnDBTier parameters listed in the section "cnDBTier Customization Parameters" from the delivered ocnadd_dbtier_custom_values.yaml file and use these values in the new ocnadd_dbtier_custom_values.yaml file if the parameter values in the new file differ from those in the delivered file.

If you already have an older version of cnDBTier, upgrade cnDBTier with resources recommended for OCNADD by customizing the ocnadd_dbtier_custom_values.yaml file in the custom-templates folder of the OCNADD package with the required deployment parameters. Use the same PVC size as in the previous release. For more information, see the section "cnDBTier Customization Parameters."

OCNADD supports cnDBTier 25.1.1xx, 24.3.x, and 24.2.x in a CNE environment. cnDBTier must be up and running before installing the Data Director. To install cnDBTier 25.1.1xx with resources recommended for OCNADD, customize the ocnadd_dbtier_custom_values.yaml file in the custom-templates folder in the OCNADD package with the required deployment parameters.
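Before starting the OCNADD installation, you can confirm that cnDBTier is deployed and its pods are running. This is a minimal check, assuming cnDBTier is deployed in the occne-cndbtier namespace used elsewhere in this document:

    # Check the cnDBTier Helm release and pod status
    helm ls -n occne-cndbtier
    kubectl get pods -n occne-cndbtier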

Note:

The ocnadd_dbtier_custom_values.yaml file in the DD custom-templates.zip should normally correspond to the same version as the Data Director; however, the cnDBTier custom values may belong to a different version than the Data Director. In this case, check the global.version parameter in the ocnadd_dbtier_custom_values.yaml file and use the corresponding GA package of cnDBTier for the installation or upgrade of cnDBTier before installing or upgrading the Data Director package.

cnDBTier parameters for the Data Director may vary. For more information, see section cnDBTier Customization Parameters.

For more information about the cnDBTier installation procedure, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

Note:

For the OCI environment, use the oci-bv StorageClass in the cnDBTier charts. To find the storage class name, run the following command:
kubectl get sc -n <namespace>

2.1.3 Resource Requirements

This section describes the resource requirements to install and run Oracle Communications Network Analytics Data Director (OCNADD).

OCNADD supports centralized deployment, where each data director site has been logically replaced by a worker group. The deployment consists of a management group and multiple worker groups. Traffic processing services are managed within the worker group, while configuration and administration services are managed within the management group.

In the case of centralized deployment, resource planning should consider the following points:

  • There will be only one management group consisting of the following services:
    • ocnaddconfiguration
    • ocnaddalarm
    • ocnaddadmin
    • ocnaddhealthmonitoring
    • ocnaddgui
    • ocnadduirouter
    • ocnaddredundancyagent
    • ocnaddexport
  • There can be one or more worker groups managed by the single management group, and each worker group logically represents a standalone Data Director site with respect to the traffic processing function. This includes the following services:
    • ocnaddkafka
    • zookeeper
    • ocnaddnrfaggregation
    • ocnaddseppaggregation
    • ocnaddscpaggregation
    • ocnaddcorrelation
    • ocnaddfilter
    • ocnaddconsumeradapter
    • ocnaddstorageadapter
    • ocnaddpcfaggregation
    • ocnaddbsfaggregation
  • The customer needs to plan for the resources corresponding to the management group and the number of worker groups required.

OCNADD supports various other deployment models. Before finalizing the resource requirements, see the OCNADD Deployment Models section. The resource usage and available features vary based on the deployment model selected. The centralized deployment model is the default model for fresh installations from 23.4.0 onward, with one management group and at least one worker group.

OCNADD Resource Requirements

Table 2-4 OCNADD Resource Requirements (Based on HTTP2 Data Feed)

OCNADD Services vCPU Req vCPU Limit Memory Req (Gi) Memory Limit (Gi) Min Replica Max Replica Partitions Topic Name
ocnaddconfiguration 1 1 1 1 1 1 - -
ocnaddalarm 1 1 1 1 1 1 - -
ocnaddadmin 1 1 1 1 1 1 - -
ocnaddhealthmonitoring 1 1 1 1 1 1 - -
ocnaddscpaggregation 2 2 2 2 1 3 18 SCP
ocnaddnrfaggregation 2 2 2 2 1 1 6 NRF
ocnaddseppaggregation 2 2 2 2 1 2 12 SEPP
ocnaddadapter 3 3 4 4 2 14 126 MAIN
ocnaddkafka 6 6 64 64 4 4 - -
zookeeper 1 1 2 2 3 3 - -
ocnaddgui 1 2 1 1 1 2 - -
ocnadduirouter 1 2 1 1 1 2 - -
ocnaddcorrelation 3 3 24 64 1 4 - -
ocnaddfilter 2 2 3 3 1 4 - -
ocnaddredundancyagent 1 1 3 3 1 1 - -
ocnaddstorageadapter 3 3 24 64 1 4 - -
ocnaddexport 2 4 4 64 1 2 - -
ocnaddingressadapter 3 3 8 8 1 7 - -
ocnaddnonoracleaggregation 2 2 2 2 1 1 - -
ocnaddpcfaggregation 2 2 2 2 1 2 12 PCF
ocnaddbsfaggregation 2 2 2 2 1 2 6 BSF

Note:

For detailed information on the OCNADD profiles, see the "Profile Resource Requirements" section in the Oracle Communications Network Analytics Data Director Benchmarking Guide.

Ephemeral Storage Requirements

Table 2-5 Ephemeral Storage

Service Name Ephemeral Storage (min) in Mi Ephemeral Storage (max) in Mi
<app-name>-adapter 200 800
ocnaddadminservice 100 200
ocnaddalarm 100 500
ocnaddhealthmonitoring 100 500
ocnaddscpaggregation 100 500
ocnaddseppaggregation 100 500
ocnaddnrfaggregation 100 500
ocnaddconfiguration 100 500
ocnaddcorrelation 100 500
ocnaddfilter 100 500
ocnaddredundancyagent 100 500
ocnaddstorageadapter 400 800
ocnaddexport 100 2Gi
ocnaddingressadapter 400 800
ocnaddnonoracleaggregation 100 500
ocnaddpcfaggregation 100 500
ocnaddbsfaggregation 100 500

2.2 Installation Sequence

This section provides information on how to install Oracle Communications Network Analytics Data Director (OCNADD).

Note:

  • It is recommended to follow the steps in the given sequence for preparing and installing OCNADD.
  • Make sure you have the required software installed before proceeding with the installation.
  • This is the installation procedure for a standard OCNADD deployment. To install a more secure deployment (such as adding users, changing passwords, enabling mTLS, and so on), see Oracle Communications Network Analytics Suite Security Guide.

2.2.1 Pre-Installation Tasks

To install OCNADD, perform the preinstallation steps described in this section.

Note:

The kubectl commands may vary based on the platform used for deploying OCNADD. Replace kubectl with the environment-specific command-line tool to configure Kubernetes resources through the kube-api server. The instructions in this document are based on the OCCNE version of the kube-api server.
2.2.1.1 Downloading OCNADD Package

To download the Oracle Communications Network Analytics Data Director (OCNADD) package from MOS, perform the following steps:

  1. Log in to My Oracle Support with your credentials.
  2. Select the Patches and Updates tab to locate the patch.
  3. In the Patch Search window, click Product or Family (Advanced).
  4. Enter "Oracle Communications Network Analytics Data Director" in the Product field, select "Oracle Communications Network Analytics Data Director 25.1.100.0.0 from Release drop-down list.
  5. Click Search. The Patch Advanced Search Results displays a list of releases.
  6. Select the required patch from the search results. The Patch Details window opens.
  7. Click Download. The File Download window appears.
  8. Click the <p********_<release_number>_Tekelec>.zip file to download the OCNADD package file.
  9. Extract the zip file to download the network function patch to the system where the network function must be installed.

To download the Oracle Communications Network Analytics Data Director package from the edelivery portal, perform the following steps:

  1. Log in to the edelivery portal with your credentials. The following screen appears:

    Figure 2-1 edelivery portal



  2. Select the Download Package option from the All Categories drop-down list.
  3. Enter Oracle Communications Network Analytics Data Director in the search bar.

    Figure 2-2 Search



  4. The list of release packages available for download is displayed on the screen. Select the release package you want to download; the package is downloaded automatically.
2.2.1.2 Pushing the Images to Customer and OCI Registry

Container Images

Caution:

kubectl commands might vary based on the platform deployment. Replace kubectl with Kubernetes environment-specific command line tool to configure Kubernetes resources through kube-api server. The instructions provided in this document are as per the Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) version of kube-api server.

The Oracle Communications Network Analytics Data Director (OCNADD) deployment package includes ready-to-use container images and Helm charts to help orchestrate containers in Kubernetes. The communication between the pods of OCNADD services is preconfigured in the Helm charts.

The following table lists the container images of OCNADD:

Table 2-6 Container Images for OCNADD

Service Name Container Image Name Image Tag
OCNADD-Configuration ocnaddconfiguration 25.1.100
OCNADD-ConsumerAdapter <app-name>-adapter 25.1.100
OCNADD-Aggregation ocnaddnrfaggregation, ocnaddscpaggregation, ocnaddseppaggregation, ocnaddnonoracleaggregation, ocnaddpcfaggregation, ocnaddbsfaggregation 25.1.100
OCNADD-Alarm ocnaddalarm 25.1.100
OCNADD-HealthMonitoring ocnaddhealthmonitoring 25.1.100
OCNADD-Kafka kafka-broker-x 3.9.0:25.1.11
OCNADD-Admin ocnaddadminservice 25.1.100
OCNADD-UIRouter ocnadduirouter 25.1.100
OCNADD-GUI ocnaddgui 25.1.100
OCNADD-Backup-Restore ocnaddbackuprestore 2.0.11
OCNADD-Filter ocnaddfilter 25.1.100
OCNADD-Correlation ocnaddcorrelation 25.1.100
OCNADD-Redundancyagent ocnaddredundancyagent 25.1.100
OCNADD-StorageAdapter ocnaddstorageadapter 25.1.100
OCNADD-Export ocnaddexport 25.1.100
OCNADD-IngressAdapter ocnaddingressadapter 25.1.100

Note:

  • The service image names are prefixed with the OCNADD release name.
  • The above table lists the default OCNADD microservices and their respective images. However, a few more necessary images are delivered as part of the OCNADD package; make sure to push all the images delivered with the package.

Pushing OCNADD Images to Customer Registry

To push the images to the registry:

  1. Untar the OCNADD package zip file to retrieve the OCNADD docker image tar file:
    
    tar -xvzf ocnadd_pkg_25_1_100.tar.gz
     
    cd ocnadd_pkg_25_1_100
     
    tar -xvzf ocnadd-25.1.100.tar.gz
    The directory consists of the following:
    • OCNADD Docker Images File:
      ocnadd-images-25.1.100.tar
    • Helm File:
      ocnadd-25.1.100.tgz
    • Readme txt File:
      Readme.txt
    • Custom Templates:
      custom-templates.zip
    • ssl_certs folder:
      ssl_certs
  2. Run one of the following commands to first change the directory and then load the ocnadd-images-25.1.100.tar file:
    cd ocnadd-package-25.1.100
    docker load --input /IMAGE_PATH/ocnadd-images-25.1.100.tar
    podman load --input /IMAGE_PATH/ocnadd-images-25.1.100.tar
  3. Run one of the following commands to verify if the images are loaded:
    docker images
    podman images

    Verify that the list of images in the output matches the list of images in Table 2-6. If the list does not match, reload the image tar file.

  4. Run one of the following commands to tag each imported image to the registry:
    docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
  5. Run one of the following commands to push the image to the registry:
    docker push <docker-repo>/<image-name>:<image-tag>
    podman push <podman-repo>/<image-name>:<image-tag>

    Note:

    It is recommended to configure the docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the docker push command may fail.
  6. Run the following command to push the helm charts to the helm repository:
    helm push <image_name>.tgz <helm_repo>
  7. Run the following command to extract the helm charts:
    tar -xvzf ocnadd-25.1.100.tgz
  8. Run the following command to unzip the custom-templates.zip file.
    unzip custom-templates.zip
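When many images need to be tagged and pushed, the per-image commands in steps 4 and 5 can be scripted. The following loop is a sketch only, assuming Podman and a registry reachable as <docker-repo>; the image list shown is illustrative and must be extended to cover all images delivered with the package:

    REGISTRY=<docker-repo>
    # Illustrative subset of OCNADD images; extend with all images from Table 2-6
    for img in ocnaddconfiguration:25.1.100 ocnaddalarm:25.1.100 ocnaddadminservice:25.1.100; do
      podman tag "$img" "$REGISTRY/$img"
      podman push "$REGISTRY/$img"
    done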

Pushing OCNADD Images to OCI Registry

To push the images to the registry:

  1. Untar the OCNADD package zip file to retrieve the OCNADD docker image tar file:
    
    tar -xvzf ocnadd_pkg_25_1_100.tar.gz
     
    cd ocnadd_pkg_25_1_100
     
    tar -xvzf ocnadd-25.1.100.tar.gz
    The directory consists of the following:
    • OCNADD Docker Images File:
      ocnadd-images-25.1.100.tar
    • Helm File:
      ocnadd-25.1.100.tgz
    • Readme txt File:
      Readme.txt
    • Custom Templates:
      custom-templates.zip
    • ssl_certs folder:
      ssl_certs
  2. Run one of the following commands to first change the directory and then load the ocnadd-images-25.1.100.tar file:
    cd ocnadd-package-25.1.100
    docker load --input /IMAGE_PATH/ocnadd-images-25.1.100.tar
    podman load --input /IMAGE_PATH/ocnadd-images-25.1.100.tar
  3. Run one of the following commands to verify if the images are loaded:
    docker images
    podman images

    Verify that the list of images in the output matches the list of images in Table 2-6. If the list does not match, reload the image tar file.

  4. Run the following commands to log in to the OCI registry:
    docker login -u <REGISTRY_USERNAME> -p <REGISTRY_PASSWORD> <REGISTRY_NAME>
    podman login -u <REGISTRY_USERNAME> -p <REGISTRY_PASSWORD> <REGISTRY_NAME>

    # If the -p option is omitted, the command prompts for the password.

    # Enter the auth token generated by the user as the password.

    Where,

    • REGISTRY_NAME is <Region_Key>.ocir.io
    • REGISTRY_USERNAME is <Object Storage Namespace>/<identity_domain>/email_id
    • REGISTRY_PASSWORD is the auth token generated by the user.

    For the details about the Region Key, refer to Regions and Availability Domains.

    Identity Domain is the domain to which the user belongs.

    Object Storage Namespace is available at OCI Console > Governance & Administration > Account Management > Tenancy Details > Object Storage Namespace.

  5. Run one of the following commands to tag each imported image to the registry:
    docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
  6. Run one of the following commands to push the image to the registry:
    docker push <region>/<tenancy_namespace>/<repo-name>/<image-name>:<image-tag>
    podman push <region>/<tenancy_namespace>/<repo-name>/<image-name>:<image-tag>

    Note:

    It is recommended to configure the docker certificate before running the push command to access the OCI registry through HTTPS; otherwise, the docker push command may fail.
  7. Run the following command to push the helm charts to the helm repository:
    helm push <image_name>.tgz <helm_repo>
  8. Run the following command to extract the helm charts:
    tar -xvzf ocnadd-25.1.100.tgz
  9. Run the following command to unzip the custom-templates.zip file.
    unzip custom-templates.zip

Note:

All the image repositories must be public. Run the following steps to make all image repositories public:
  1. Go to OCI Console > Developer Services > Containers & Artifacts > Container Registry.
  2. Select the root Compartment.
  3. In the Repositories and Images Search option, the images will be listed. Select each image and click Change to Public. This step must be performed for all the images sequentially.
2.2.1.3 Creating OCNADD Namespace

This section explains how to verify or create new namespaces in the system. In this section, the namespaces for the management group and worker group should be created.

To verify if the required namespace already exists in the system, run the following command:

kubectl get namespaces

If the namespace exists, you may continue with the next steps of installation.

If the required namespace is not available, create a namespace using the following command:

Note:

This step requires the creation of hierarchical namespaces for the management group and worker group(s) in a centralized deployment with more than one worker group. If the deployment mode is centralized with only the default worker group, hierarchical namespaces are not required. In this case, all Data Director services can be deployed within the same namespace.
Run the following command to create the parent namespace where all the management group services will be deployed:
kubectl create namespace <required parent-namespace>
Run the following command to create a child namespace where all the worker group services will be deployed:
kubectl hns create <required child-namespace> --namespace <parent-namespace>
For example:
kubectl create namespace dd-mgmt-group
kubectl hns create dd-worker-group1 --namespace dd-mgmt-group
Run the following command to verify the namespaces are created:
kubectl hns tree <parent-namespace>
For example:
# kubectl hns tree dd-mgmt-group
  dd-mgmt-group
  └── [s] dd-worker-group1
  
  [s] indicates subnamespaces

Naming Convention for Namespaces

While choosing the name of the namespace where you wish to deploy OCNADD, make sure the following requirements are met:

  • starts and ends with an alphanumeric character
  • contains 63 characters or less
  • contains only alphanumeric characters or '-'

Note:

It is recommended to avoid using the prefix kube- when creating a namespace, as this prefix is reserved for Kubernetes system namespaces.
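To check that a chosen namespace name is valid before creating it, a client-side dry run can be used. This is a quick sketch; ocnadd-deploy is an example name only:

    # Validate the namespace name without creating the resource
    kubectl create namespace ocnadd-deploy --dry-run=client -o yaml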
2.2.1.4 Creating Service Account, Role, and Role Binding

This section is optional. It describes how to manually create a service account, role, and rolebinding. It is required only when the customer needs to create a role, rolebinding, and service account manually before installing OCNADD. Skip this section if you choose to create them by default from the Helm charts.

In the case of a centralized deployment, repeat this procedure for the management group and for each worker group.

Note:

The secret(s) should exist in the same namespace where OCNADD is being deployed. This helps to bind the Kubernetes role with the given service account.

Creating Service Account, Role, and RoleBinding for Management Group

To create the service account, role, and rolebinding:

  1. Prepare OCNADD Management Group Resource File:
    • Run the following command to create an OCNADD resource file specifically for the management group:
      vi <ocnadd-mgmt-resource-file>.yaml

      Replace <ocnadd-mgmt-resource-file> with the required name for the management group resource file.

    • For example:
      vi ocnadd-mgmt-resource-template.yaml
  2. Update OCNADD Management Group Resource Template:
    • Update the ocnadd-mgmt-resource-template.yaml file with release-specific information.

      Note:

      Replace <custom-name> with a custom name and <namespace> with the OCNADD management group namespace. Use a custom name that is preferably similar to the management namespace name to avoid upgrade issues.
    • A sample template to update the ocnadd-mgmt-resource-template.yaml file with is given below:
      ## Sample template start#
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: <custom-name>-sa-ocnadd
        namespace: <namespace>
      automountServiceAccountToken: false
       
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: <custom-name>-cr
      rules:
      - apiGroups: [""]
        resources: ["pods","configmaps","services", "secrets","resourcequotas","events","persistentvolumes","persistentvolumeclaims"]
        verbs: ["*"]
      - apiGroups: ["extensions"]
        resources: ["ingresses"]
        verbs: ["create", "get", "delete"]
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get"]
      - apiGroups: ["scheduling.volcano.sh"]
        resources: ["podgroups", "queues", "queues/status"]
        verbs: ["get", "list", "watch", "create", "delete", "update"]
       
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: <custom-name>-crb
      roleRef:
        apiGroup: ""
        kind: Role
        name: <custom-name>-cr
      subjects:
      - kind: ServiceAccount
        name: <custom-name>-sa-ocnadd
        namespace: <namespace>
       
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: <custom-name>-crb-policy
      roleRef:
        apiGroup: ""
        kind: ClusterRole
        name: psp:privileged
      subjects:
      - kind: ServiceAccount
        name: <custom-name>-sa-ocnadd
        namespace: <namespace>
      ---
      ## Sample template end#
  3. Create Service Account, Role, and RoleBinding:
    • Run the following command to create the service account, role, and rolebinding for the management group:
      kubectl -n <dd-mgmt-group-namespace> create -f ocnadd-mgmt-resource-template.yaml

      Replace <dd-mgmt-group-namespace> with the namespace where the OCNADD management group will be deployed.

    • For example:
      $ kubectl -n dd-mgmt-group create -f ocnadd-mgmt-resource-template.yaml

Note:

  • Update the custom values file ocnadd-custom-values-25.1.100-mgmt-group.yaml created/copied from ocnadd-custom-values-25.1.100.yaml in the "Custom Templates" folder.
  • Change the following parameters to false in ocnadd-custom-values-25.1.100-mgmt-group.yaml after adding the global service account to the management group. Failing to do so might result in installation failure due to CRD creation and deletion:
    serviceAccount:
        create: false
        name: <custom-name>            ## --> Change this to <custom-name> provided in ocnadd-mgmt-resource-template.yaml above ##
        upgrade: false
    clusterRole:
        create: false
        name: <custom-name>            ## --> Change this to <custom-name> provided in ocnadd-mgmt-resource-template.yaml above ##
        clusterRoleBinding:
        create: false
        name: <custom-name>            ## --> Change this to <custom-name> provided in ocnadd-mgmt-resource-template.yaml above ##
  • Ensure the namespace used in ocnadd-mgmt-resource-template.yaml matches the below parameters in ocnadd-custom-values-25.1.100-mgmt-group.yaml:
    
         global.deployment.management_namespace
         global.cluster.nameSpace.name
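  A quick way to confirm that the namespaces match is to inspect both parameters in the custom values file. This is a sketch only, using grep against the parameter names listed above:

    grep -nE 'management_namespace|nameSpace' ocnadd-custom-values-25.1.100-mgmt-group.yaml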

Creating Service Account, Role, and RoleBinding for Worker Group

Perform the following procedure to create the service account, role, and rolebinding:

Note:

Repeat the following procedure for each worker group that needs to be added to the centralized deployment.
  1. Prepare OCNADD Worker Group Resource File:
    • Run the following command to create an OCNADD resource file specifically for the worker group:
      vi <ocnadd-wg1-resource-file>.yaml

      Replace <ocnadd-wg1-resource-file> with the required name for the worker group resource file.

    • For example:
      vi ocnadd-wg1-resource-template.yaml
  2. Update OCNADD Worker Group Resource Template:
    • Update the ocnadd-wg1-resource-template.yaml file with release-specific information.

      Note:

      Replace <custom-name> with a custom name and <namespace> with the OCNADD worker group namespace. Use a custom name that is preferably similar to the worker group namespace name to avoid upgrade issues.
    • A sample template to update the ocnadd-wg1-resource-template.yaml file with is given below:
      ## Sample template start#                   
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: <custom-name>-sa-ocnadd
        namespace: <namespace>
      automountServiceAccountToken: false
       
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: <custom-name>-cr
      rules:
      - apiGroups: [""]
        resources: ["pods","configmaps","services", "secrets","resourcequotas","events","persistentvolumes","persistentvolumeclaims"]
        verbs: ["*"]
      - apiGroups: ["extensions"]
        resources: ["ingresses"]
        verbs: ["create", "get", "delete"]
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get"]
      - apiGroups: ["scheduling.volcano.sh"]
        resources: ["podgroups", "queues", "queues/status"]
        verbs: ["get", "list", "watch", "create", "delete", "update"]
       
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: <custom-name>-crb
                                
      roleRef:
        apiGroup: ""
        kind: Role
        name: <custom-name>-cr
      subjects:
      - kind: ServiceAccount
        name: <custom-name>-sa-ocnadd
        namespace: <namespace>
       
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: <custom-name>-crb-policy
      roleRef:
        apiGroup: ""
        kind: ClusterRole
        name: psp:privileged
      subjects:
      - kind: ServiceAccount
        name: <custom-name>-sa-ocnadd
        namespace: <namespace>
      ---
      ## Sample template end#
  3. Create Service Account, Role, and RoleBinding:
    • Run the following command to create the service account, role, and rolebinding for the worker group:
      kubectl -n <dd-worker-group-namespace> create -f ocnadd-wg1-resource-template.yaml

      Replace <dd-worker-group-namespace> with the namespace where the OCNADD worker group will be deployed.

    • For example:
      $ kubectl -n dd-worker-group1 create -f ocnadd-wg1-resource-template.yaml

Note:

  • Update the custom values file ocnadd-custom-values-25.1.100-worker-group1.yaml created/copied from ocnadd-custom-values-25.1.100.yaml in the "Custom Templates" folder.
  • Change the following parameters to false in ocnadd-custom-values-25.1.100-worker-group1.yaml after adding the global service account to the worker group. Failing to do so might result in installation failure due to CRD creation and deletion:
    serviceAccount:
        create: false
        name: <custom-name> ## --> Change this to <custom-name> provided in ocnadd-wg1-resource-template.yaml above ##
        upgrade: false
    clusterRole:
        create: false
        name: <custom-name> ## --> Change this to <custom-name> provided in ocnadd-wg1-resource-template.yaml above ##
        clusterRoleBinding:
        create: false
        name: <custom-name> ## --> Change this to <custom-name> provided in ocnadd-wg1-resource-template.yaml above ##
  • Ensure the namespace used in ocnadd-wg1-resource-template.yaml matches the below parameters in ocnadd-custom-values-25.1.100-worker-group1.yaml:
    global.cluster.nameSpace.name

    Set the management_namespace parameter to the namespace used for the management group:

    global.deployment.management_namespace
2.2.1.5 Configuring OCNADD Database

OCNADD microservices use the MySQL database to store configuration and runtime data.

The database is managed by the Helm pre-install hook. However, OCNADD requires the database administrator to create an admin user in the MySQL database and provide the necessary permissions to access the databases. Before installing OCNADD, create the MySQL user and databases.

Note:

  • If the admin user is already available, then update the credentials, such as username and password (base64 encoded) in ocnadd/templates/ocnadd-secret-hook.yaml.
  • If the admin user is not available, then create it using the following procedure. Once the user is created, update the credentials for the user in ocnadd/templates/ocnadd-secret-hook.yaml.

Creating an Admin User in the Database

To create an admin user in the database:
  1. Run the following command to access the MySQL pod:

    Note:

    Use the namespace in which cnDBTier is deployed. In this example, the occne-cndbtier namespace is used. The default container name is ndbmysqld-0.
    kubectl -n occne-cndbtier exec -it ndbmysqld-0 -- bash
  2. Run the following command to log in to the MySQL server using the MySQL client:
    mysql -h 127.0.0.1 -uroot -p
    Enter password:
  3. To create an admin user, run the following command:
    CREATE USER IF NOT EXISTS '<ocnadd admin username>'@'%' IDENTIFIED BY '<ocnadd admin user password>';
    

    Example:

    CREATE USER IF NOT EXISTS 'ocdd'@'%' IDENTIFIED BY 'ocdd';

    Where:

    ocdd is the admin username and ocdd is the password for the MySQL admin user.

  4. Run the following command to grant the necessary permissions to the admin user and run the FLUSH command to reload the grant table:
    GRANT ALL PRIVILEGES ON *.* TO 'ocdd'@'%' WITH GRANT OPTION;
    FLUSH PRIVILEGES;
  5. Access the ocnadd-secret-hook.yaml from the OCNADD helm files using the following path:
    ocnadd/templates/ocnadd-secret-hook.yaml
  6. Update the following parameters in the ocnadd-secret-hook.yaml with the admin user credentials:
    data:   
    MYSQL_USER: b2NkZA==    
    MYSQL_PASSWORD: b2NkZA==
    To generate the base64 encoded user and password from the terminal, run the following command:
    echo -n <string> | base64 -w 0

    Where <string> is the admin username or password created in step 3.

    For example:

    echo -n ocdd | base64 -w 0
    b2NkZA==
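To confirm that the admin user exists and has the expected privileges, the following statements can be run from the same MySQL client session. This is a quick check using the ocdd user from the example above:

    -- Verify the admin user and its grants
    SELECT user, host FROM mysql.user WHERE user = 'ocdd';
    SHOW GRANTS FOR 'ocdd'@'%';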

Update Database Name

Note:

  • By default, the database names are configuration_schema, alarm_schema, and healthdb_schema for the respective services.
  • Skip this step if you plan to use the default database names during database creation. If not, change the database names as required.

To update the database names in the Configuration Service, Alarm Service, and Health Monitoring services:

  1. Access the ocdd-db-resource.sql file from the helm chart using the following path:
    ocnadd/ocdd-db-resource.sql
  2. Update all occurrences of the database name in ocdd-db-resource.sql.
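    For example, the occurrences can be updated with a substitution such as the following. This is a sketch only; <new_configuration_db_name> is a placeholder, and the same substitution applies to the alarm and health monitoring schemas:
    sed -i 's/configuration_schema/<new_configuration_db_name>/g' ocnadd/ocdd-db-resource.sql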

Note:

During the OCNADD reinstallation, all three application databases must be removed manually by running the drop database <dbname>; command.
2.2.1.6 Configuring Secrets for Accessing OCNADD Database

The secret configuration for the OCNADD database is automatically managed during database creation by the Helm pre-install procedure.

2.2.1.7 Configuring IP Network

This section describes the OCNADD IP configuration for single-stack (either IPv4-only or IPv6-only) or dual-stack supported infrastructure.

  • For IPv4 network, update the following parameters in ocnadd-custom-values-25.1.100.yaml:
    
    global:
        ipConfigurations:
            ipFamilyPolicy: SingleStack
            ipFamilies: ["IPv4"]
  • For IPv6 network, update the following parameters in ocnadd-custom-values-25.1.100.yaml:
    
    global:
        ipConfigurations:
            ipFamilyPolicy: SingleStack
            ipFamilies: ["IPv6"]

Note:

  • The primary IP family remains fixed once OCNADD is deployed. To change the primary IP family, OCNADD needs to be redeployed.
  • IPv6 support on OCI is not available in release 25.1.100.
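The examples above cover single-stack networks. On dual-stack infrastructure, the same parameters would typically carry the standard Kubernetes dual-stack settings; the snippet below is an unverified sketch based on the standard Kubernetes ipFamilyPolicy values and must be checked against the OCNADD values file before use:

    global:
        ipConfigurations:
            ipFamilyPolicy: PreferDualStack    # assumption: standard Kubernetes value
            ipFamilies: ["IPv4","IPv6"]        # the first entry is the primary IP family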
2.2.1.8 Configuring SSL or TLS Certificates

Extract the Package

OCNADD supports both TLS and non-TLS communication between its microservices for internal communication. If intra-TLS is enabled, all Data Director microservices must use their own TLS certificates. However, if intra-TLS is disabled, certain microservices or jobs still require the use of a TLS certificate. Detailed information about TLS communication in the Data Director is provided in the 'TLS Configuration' section of the Oracle Communications Network Analytics Suite Security Guide.

Before proceeding with the configuration of SSL/TLS certificates for OCNADD, see the 'Certificate and Secret Generation' section in the Oracle Communications Network Analytics Suite Security Guide.

If not already done, extract the package occnadd-package-25.1.100.tgz.

Note:

  • Before configuring the SSL/TLS certificates, see "Customizing CSR and Certificate Extensions" section in the Oracle Communications Network Analytics Suite Security Guide.
  • This procedure is mandatory, perform it before proceeding with the installation.

Before generating certificates using CAcert and CAkey, finalize the Kafka access mode. In step 7 of the section Generate Certificates using CACert and CAKey, provide the script response "y" while running the generate_certs script to create certificates.

The following access modes are available and applicable for worker groups only:

  1. When the NF producers and OCNADD are in the same cluster with external access disabled.
  2. When the NF producers and OCNADD are in different clusters with LoadBalancer.

Note:

  • If the NF Producers and OCNADD are deployed in the same cluster, all three ports can be used: 9092 for PLAIN_TEXT, 9093 for SSL, and 9094 for SASL_SSL. However, note that the 9092 port is non-secure and is not recommended for use.
  • If the NF Producers and OCNADD are deployed in different clusters, only the 9094 (SASL_SSL) port is exposed.
  • It is recommended to use individual server IPs in the Kafka bootstrap server list instead of a single service IP like "kafka-broker:9094".
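For example, with the three broker external IPs used elsewhere in this document, the bootstrap server list would look like the following (illustrative values only):

    # Recommended: list individual broker endpoints instead of a single service endpoint
    10.20.30.40:9094,10.20.30.41:9094,10.20.30.42:9094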

NF producers and OCNADD are in the same cluster with external access disabled

In this mode, the Kafka cluster is not exposed externally. By default, the parameters externalAccess.enabled and externalAccess.autoDiscovery are set to false, therefore no change is needed. The parameters externalAccess.enabled and externalAccess.autoDiscovery are present in the ocnadd-custom-values-25.1.100.yaml file.

The default values of bootstrap-server are given below:


kafka-broker-0.kafka-broker-headless:9093
kafka-broker-1.kafka-broker-headless:9093
kafka-broker-2.kafka-broker-headless:9093

Note:

If required, use the following FQDNs as bootstrap servers when the NFs are deployed in the same cluster:

kafka-broker-0.kafka-broker-headless.<namespace>.svc.<domain>:9093/9092

kafka-broker-1.kafka-broker-headless.<namespace>.svc.<domain>:9093/9092

kafka-broker-2.kafka-broker-headless.<namespace>.svc.<domain>:9093/9092

The NF producers and OCNADD are in different clusters with LoadBalancer

If the NF producers and OCNADD are in different clusters, then either the LoadBalancer or NodePort service type can be used. In both cases, the IP addresses must be updated manually in the kafka-broker section of ssl_certs/default_values/values using the following steps:

With LoadBalancer

  1. Update the following parameters in Kafka section of the ocnadd-custom-values-25.1.100.yaml file:
    1. externalAccess.type to LoadBalancer
    2. externalAccess.enabled to true
    3. externalAccess.autoDiscovery to true
    4. If the deployment is on the OCI platform, make sure to update the following parameters:
      1. Set global.env.oci to true.
      2. Update global.env.subnetOcid to the specific <subnet ocid value>.
  2. Update based on the LoadBalancer IP type as follows:
    1. When Static LoadBalancer IPs are used
      1. Update the following parameters in the Kafka section of the ocnadd-custom-values-25.1.100.yaml file:
        • Set externalAccess.setstaticLoadBalancerIps to 'true'. The default is false.
        • Set the static IP list in "externalAccess.LoadBalancerIPList", separated by commas.

        For example:

        externalAccess:
                    setstaticLoadBalancerIps: true
                    LoadBalancerIPList: [10.20.30.40,10.20.30.41,10.20.30.42]
      2. During the script execution, include all static IPs under the kafka-broker section. To achieve this, respond with "y" in step 7 of the section Generate Certificates using CACert and CAKey while running the generate_certs script for certificate creation. Then, add the IPs by selecting the service and entering the required values when prompted.
        For the following services:
        1. kafka-broker
        2. zookeeper
        3. ocnaddscpaggregation
        4. ocnaddnrfaggregation
        5. ocnaddseppaggregation
        6. adapter
        7. ocnaddfilter
        8. ocnaddcorrelation
        9. ocnaddstorageadapter
        10. ocnaddingressadapter
        11. kraft-controller
        12. ocnaddnonoracleaggregation
        13. ocnaddpcfaggregation
        14. ocnaddbsfaggregation
         
        Enter the number corresponding to the service for which you want to add IP: 1
        Please enter IP for the service kafka-broker or enter "n" to exit : 10.20.30.40
        Please enter IP for the service kafka-broker or enter "n" to exit : 10.20.30.41
        Please enter IP for the service kafka-broker or enter "n" to exit : 10.20.30.42
        Please enter IP for the service kafka-broker or enter "n" to exit : n
        Do you want to add IP to any other service (y/n) : n
    2. When LoadBalancer IP CIDR block is used
      1. The LoadBalancer IP CIDR block should already be available from site planning. If it is not available, contact the CNE infrastructure administrator to get the IP CIDR block for LoadBalancer IPs.
      2. Add all the available IPs under kafka-broker section while running the script. To do so, select "y" in step 7 of Generate Certificates using CACert and CAKey section while running the generate_certs script for creating certificates. Then add the IPs by selecting the service and entering the required IPs.

        For example, for worker-group1, if the available IP CIDR block is "10.x.x.0/26" with the IP range [1-62]:

        For the following services:
        1. kafka-broker
        2. zookeeper
        3. ocnaddscpaggregation
        4. ocnaddnrfaggregation
        5. ocnaddseppaggregation
        6. adapter
        7. ocnaddfilter
        8. ocnaddcorrelation
        9. ocnaddstorageadapter
        10. ocnaddingressadapter
        11. kraft-controller
        12. ocnaddnonoracleaggregation
        13. ocnaddpcfaggregation
        14. ocnaddbsfaggregation
         
        Enter the number corresponding to the service for which you want to add IP: 1
        Please enter IP for the service kafka-broker or enter "n" to exit : 10.x.x.1
        Please enter IP for the service kafka-broker or enter "n" to exit : 10.x.x.2
        .
        .
        Please enter IP for the service kafka-broker or enter "n" to exit : 10.x.x.62
        Please enter IP for the service kafka-broker or enter "n" to exit : n
        Do you want to add IP to any other service (y/n) : n 

    Note:

    The Kafka broker individual service FQDNs should be added in the DNS entry and also be used in the bootstrap server configuration for communication with Kafka.
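After the worker group is deployed with external access enabled, the per-broker LoadBalancer services and their assigned external IPs can be listed to confirm that the DNS entries and the bootstrap server configuration point to the correct addresses. This is a minimal sketch with a placeholder namespace:

    kubectl get svc -n <worker-group-namespace> | grep -i kafka-broker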
2.2.1.8.1 Generate Certificates using CACert and CAKey

OCNADD allows the users to provide the CACert and CAKey and generate certificates for all the services by running a predefined script.

Use the ssl_certs folder to generate the certificates for the management group or worker group namespaces accordingly.

To generate certificates using CACert and CAKey:
  1. Navigate to the <ssl_certs>/default_values folder.

    Note:

    <Optional> Users have the flexibility to modify the service_values_template file to add or remove specific service blocks for which certificates need to be created or removed.

    For example, to generate certificates for the management group, users can edit the "management_service_values_template" file.

    Similarly, depending on the deployment group type, users can edit the respective template file for that group.

    Global Params:
     
    [global]
    countryName=<country>
    stateOrProvinceName=<state>
    localityName=<city>
    organizationName=<org_name>
    organizationalUnitName=<org_bu_name>
    defaultDays=<days to expiry>
     
    Root CA common name (e.g. rootca common_name=*.svc.domainName)
    ##root_ca
     
    commonName=*.svc.domainName
     
    Service common name for client and server and SAN(DNS/IP entries). (Make sure to follow exact same format and provide an empty line at the end of each service block)
     
    [service-name-1]
    client.commonName=client.cn.name.svc1
    server.commonName=server.cn.name.svc1
    IP.1=127.0.0.1
    DNS.1=localhost
     
    [service-name-2]
    client.commonName=client.cn.name.svc2
    server.commonName=server.cn.name.svc2
    IP.1= 10.20.30.40
    DNS.1 = *.svc2.namespace.svc.domainName
    .
    .
    .
    ##end
  2. Run the generate_certs.sh script with the following command:
    ./generate_certs.sh -cacert <path to>/CAcert.pem -cakey <path to>/CAkey.pem

    Where, <path to> is the folder path where the CACert and CAKey are present.

    Note:

    In case the certificates are being generated for the worker group separately, then make sure the same CA certificate and private keys are used for generating the certificates as used for generating the management group certificates. The similar command as mentioned below can be used for the worker group certificate generation after the management group certificates have been generated:
    ./generate_certs.sh -cacert <path to>/cacert.pem -cakey <path to>/private/cakey.pem
    
  3. Select the mode of deployment:
    
    "1" for non-centralized
    "2" for upgrade from non-centralized to centralized
    "3" for centralised
    "4" for simulator
    Select the mode of deployement (1/2/3) : 3
  4. Select the namespace where you want to generate the certificates:
    Enter kubernetes namespace: <your_working_namespace>
  5. Select the service_values file you would like to apply. Below example is for Management Group:
    
    Choose the group of services:
    1. management_group_services
    2. worker_group_services
     
    Choose a file by entering its corresponding number: (1 or 2) 1
  6. Enter the domain name that replaces the default domain name (occne-ocdd) in the chosen service_values file; this domain name is used to create the certificate:
    Please enter the domain name: <domain_name>
  7. Enter SAN (DNS/IP entries) for any service if required.
    Do you want to add any IP for adding SAN entries to existing dd services (y/n): y

    If the user selects "y," a list of services will be displayed, and the user can add Subject Alternative Name (SAN) entries for any of the listed services by choosing the corresponding service number.

    In the following example, a list of management services is presented to the user for adding SAN entries. Enter the number corresponding to the service for which the user wants to input IP addresses. After selecting the service, provide the IP addresses as input. Enter "n" to exit if no further entries are needed.

    For the following services:
    1. ocnadduirouter
    2. ocnaddadminservice
    3. ocnaddalarm
    4. ocnaddconfiguration
    5. ocnaddhealthmonitoring
    6. ocnaddbackuprestore
    7. ocnaddredundancyagent
    8. ocnaddexportservice
     
    Enter the number corresponding to the service for which you want to add IP: 3
    Please enter IP for the service ocnaddalarm or enter "n" to exit : 10.20.30.40
    Please enter IP for the service ocnaddalarm or enter "n" to exit : 10.20.30.41
    Please enter IP for the service ocnaddalarm or enter "n" to exit : n
    Do you want to add IP to any other service (y/n) : n
  8. Select "y" when prompted to create CA.
    Do you want to create Certificate Authority (CA)? (y/n) y
  9. Enter the passphrase for CAkey when prompted:
    Enter passphrase for CA Key file: <passphrase>
  10. Select “y” when prompted to create CSR for each service:
    Create Certificate Signing Request (CSR) for each service? Y
  11. Select “y” when prompted to sign CSR for each service with CA Key:
    Would you like to sign CSR for each service with CA key? Y
  12. If the centralized mode of deployment is selected during the creation of management group certificates, once the management group certificate generation is completed, the user will be prompted to continue the certificate generation process for worker groups.
    Would you like to continue certificate creation for worker group? (y/n) y

    If "y" is selected, the script will execute to recreate the certificates for the worker group. The script will repeat its execution from step 4 onwards. During the worker group creation flow, choose "worker_group_service_values" in step 5 and proceed. If "n" is selected, the script completes its execution.

    Note:

    The script can be used to create both management certificates and the desired number of supported worker group certificates in a single execution.
  13. Run the following command to check if the secrets are created in the specified namespace:
    kubectl get secret -n <namespace>
  14. Run the following command to describe any secret created by script:
    kubectl describe secret <secret-name> -n <namespace>
2.2.1.8.2 Generating Certificate Signing Request (CSR)

Users can generate the certificate signing request (CSR) for each of the services using the OCNADD script, and then use the generated CSRs to generate the certificates using their own certificate signing mechanism (external CA server, HashiCorp Vault, or Venafi).

Perform the following procedure to generate the CSR:

  1. Navigate to the <ssl_certs>/default_values folder.
  2. Copy the required service_values_template (e.g., management_service_values_template or worker_service_values_template or simulator_values_template or values_template depending on the deployment mode) to another file named "backup_service_values_template."
  3. Edit the corresponding service_values_template file and update global parameters, CN, and SAN (DNS/IP entries) for each service based on the provided requirements.
  4. Replace the default domain (occne-ocdd) and default namespace (ocnadd-deploy) in the corresponding service_values_template file with your cluster domain and namespace.

    Example:

    • For management group: namespace = dd-mgmt-group, clusterDomain =cluster.local.com
      
      sed -i "s/ocnadd-deploy/dd-mgmt-group/g" management_service_values_template
      sed -i "s/occne-ocdd/cluster.local.com/g" management_service_values_template
      
    • For worker group: namespace = dd-worker-group, clusterDomain = cluster.local.com
      
      sed -i "s/ocnadd-deploy/dd-worker-group/g" worker_service_values_template
      sed -i "s/occne-ocdd/cluster.local.com/g" worker_service_values_template
      

    Note:

    Edit corresponding service_values file for global parameters and RootCA common name. Add service blocks of all services for which the certificate needs to be generated.
    
    Global Params:
       
    [global]
    countryName=<country>
    stateOrProvinceName=<state>
    localityName=<city>
    organizationName=<org_name>
    organizationalUnitName=<org_bu_name>
    defaultDays=<days to expiry>
        
       
    Root CA common name (e.g. rootca common_name=*.svc.domainName)
    ##root_ca
       
    commonName=*.svc.domainName
       
       
    Service common name for client and server and SAN(DNS/IP entries). (Make sure to follow exact same format and provide an empty line at the end of each service block)
        
    [service-name-1]
    client.commonName=client.cn.name.svc1
    server.commonName=server.cn.name.svc1
    IP.1=127.0.0.1
    DNS.1=localhost
        
    [service-name-2]
    client.commonName=client.cn.name.svc2
    server.commonName=server.cn.name.svc2
    IP.1= 10.20.30.40
    DNS.1 = *.svc2.namespace.svc.domainName
    .
    .
    .
    ##end
  5. Run the generate_certs.sh script with the --gencsr or -gc flag.
    
    ./generate_certs.sh --gencsr
    
  6. Select the deployment mode.
    
    (1) non-centralized
    (2) upgrade from non-centralized to centralized
    (3) centralised
    (4) simulator
    Select the mode of deployement (1/2/3/4) : 3
  7. Select the namespace where you would like to generate the certificates:
    Enter kubernetes namespace: <your_working_namespace>
  8. Select the group of services you would like to apply.
    
    Choose the group of services:
    1. management_group_services
    2. worker_group_services
  9. Once the service CSRs are generated, the demoCA folder is created. The CSRs and keys are available in the demoCA/dd_mgmt_worker_services/<your_namespace>/services folder (in separate client and server folders). The CSRs can be signed using your own certificate signing mechanism to generate the certificates.
  10. Make sure that the certificates and key names are created in the following format, based on whether the service is acting as a client or a server (see the example after this procedure):

    For client: servicename-clientcert.pem and servicename-clientprivatekey.pem

    For server: servicename-servercert.pem and servicename-serverprivatekey.pem

  11. Once the above certificates are generated by signing the CSRs with the Certificate Authority, copy them into the respective demoCA/dd_mgmt_worker_services/<your_namespace>/services folder of each service.

    Note:

    • Make sure to use the same CA key for both management group and worker group(s)
    • Make sure the certificates are copied in the respective folders for the client and the server based on their generated CSRs
  12. Run generate_certs.sh with the cacert path and --gensecret or -gs to generate secrets:
    ./generate_certs.sh -cacert <path to>/cacert.pem --gensecret
  13. Select the namespace where you would like to generate the certificates:
    Enter kubernetes namespace: <your_working_namespace>
  14. Select “y” when prompted to generate secrets for the services:
    Would you like to continue to generate secrets? (y/n) y
  15. Run the following command to check if the secrets are created in the specified namespace:
    kubectl get secret -n <namespace>
  16. Run the following command to describe any secret created by the script:
    kubectl describe secret <secret-name> -n <namespace>
  17. After the new secrets are created, remove the corresponding service_values_template file and rename "backup_service_values_template" back to the corresponding service_values_template file.
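
For reference, the following is a minimal sketch of signing a generated CSR with an external CA using OpenSSL and placing the output with the expected naming convention. The service name ocnaddadminservice, the CSR file names, and the CA file names (cacert.pem, cakey.pem) are illustrative assumptions; use your actual CA tooling, and ensure that your CA copies the SAN extensions from the CSR into the issued certificates:

    cd demoCA/dd_mgmt_worker_services/<your_namespace>/services/ocnaddadminservice
    # Sign the server CSR (illustrative file names)
    openssl x509 -req -in ocnaddadminservice-server.csr -CA <path-to>/cacert.pem -CAkey <path-to>/cakey.pem -CAcreateserial -out ocnaddadminservice-servercert.pem -days 365
    # Sign the client CSR (illustrative file names)
    openssl x509 -req -in ocnaddadminservice-client.csr -CA <path-to>/cacert.pem -CAkey <path-to>/cakey.pem -CAcreateserial -out ocnaddadminservice-clientcert.pem -days 365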
2.2.1.9 Kubectl HNS Installation

This procedure is required only if a centralized deployment with more than one worker group is needed or if a new worker group needs to be added to a centralized deployment with the default worker group.

Follow the steps below for HNS installation:
  1. Extract the OCNADD package if not already extracted and untar the hns_package.tar.gz:
    
    cd ocnadd-package-25.1.100
    tar -xvzf hns_package.tar.gz
    
  2. Go to the hns_package folder:
    cd hns_package
  3. Load, tag, and push the HNS image to the image repository:
    
    podman load -i hnc-manager.tar
    podman tag localhost/k8s-staging-multitenancy/hnc-manager:v1.1.0 <image-repo>/k8s-staging-multitenancy/hnc-manager:v1.1.0
    podman push <image-repo>/k8s-staging-multitenancy/hnc-manager:v1.1.0
  4. Update the image repository in the ha.yaml file:
    
    - /manager
    image: occne-repo-host:5000/k8s-staging-multitenancy/hnc-manager:v1.1.0  ## ---> Update image to <image-repo>/k8s-staging-multitenancy/hnc-manager:v1.1.0
    
  5. Repeat the image update for the second occurrence in the ha.yaml file:
    
    - /manager
    image: occne-repo-host:5000/k8s-staging-multitenancy/hnc-manager:v1.1.0  ## ---> Update image to <image-repo>/k8s-staging-multitenancy/hnc-manager:v1.1.0
  6. Run the HNS file to create the resources for kubectl hns (an optional controller readiness check is shown after this procedure):
    kubectl apply -f ha.yaml
  7. Copy the binary kubectl-hns to /usr/bin or any location in the user's $PATH:
    
    sudo cp --remove-destination kubectl-hns /usr/bin/
    sudo chmod +x /usr/bin/kubectl-hns
    sudo chmod g+x /usr/bin/kubectl-hns
    sudo chmod o+x /usr/bin/kubectl-hns
    
  8. Verify by creating a child namespace and deleting it:
    1. Run the following commands to create namespace:
      
      kubectl create ns test-parent
      kubectl hns create test-child -n test-parent
      
    2. Run the following commands to list the child namespaces:
      kubectl hns tree test-parent
      
      Sample output:
      
      test-parent
      └── [s] test-child
      
    3. Run the following commands to delete namespace:
      
      kubectl delete subns test-child -n test-parent
      kubectl delete ns test-parent
      
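Optionally, confirm that the HNC controller itself is running before or after this verification. The following is a minimal check, assuming the controller is deployed into its default hnc-system namespace with the deployment name hnc-controller-manager:

    kubectl get pods -n hnc-system
    kubectl rollout status deployment/hnc-controller-manager -n hnc-system
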
2.2.1.10 OCCM Prerequisites for Installing OCNADD

Before starting the installation of OCNADD, ensure the following conditions are met regarding the Oracle Communication Certificate Manager (OCCM) installation:

  • OCCM should be installed and must have all the necessary permissions to create secrets in OCNADD management and worker group namespaces. If OCCM is in a separate namespace, it should have at least sufficient privileges to create secrets in the OCNADD management namespace; the privilege will be automatically inherited in worker group namespaces. See Oracle Communication Certificate Manager Installation and Upgrade Guide.
  • Issuer (CA) should be configured in OCCM. If multiple OCCMs are used, each should have at least one common issuer (CA) configuration.
  • Ensure that OCCM has sufficient capacity to create the required number of certificates. If all features in OCNADD are enabled, 18 certificates are required for management services and 24 certificates are required for each worker group.
  • The cncc-api-access client should be enabled in the Oracle Communications Cloud Native Configuration Console (CNC Console). For more information, see "Generate Access Tokens" section in Oracle Communications Cloud Native Configuration Console User Guide.

OCCM Secrets

Three secrets need to be created in every OCNADD namespace to use OCCM for creating certificates.

  • occm_secret: This secret should contain the username and password of a CNC Console user with OCCM_READ and OCCM_WRITE roles. This user will be used by OCNADD to communicate with OCCM through CNC Console.
    $ kubectl create secret generic -n <ocnadd-namespace> --from-literal=username=<cncc-user>  --from-literal=password=<cncc-password> occm-secret

    Where:

    • <ocnadd-namespace>: OCNADD management or worker group namespace
    • <cncc-user>: Username of the CNC Console user
    • <cncc-password>: Password of the CNC Console user
    • occm-secret: Name of the secret storing the credentials of the CNC Console user
    For example (for management and one worker group):
    
    kubectl create secret generic -n ocnadd-deploy-mgmt --from-literal=username=occm-cncc  --from-literal=password=occm-cncc-secret occm-secret
    kubectl create secret generic -n ocnadd-deploy-wg1 --from-literal=username=occm-cncc  --from-literal=password=occm-cncc-secret occm-secret
  • truststore_keystore_secret: This secret should contain the key used for encrypting the Keystore and Truststore created by OCNADD to store the x509 certificates of each service and the CA.
    $ kubectl create secret generic -n <ocnadd-namespace> --from-literal=keystorekey=<keystore-key> --from-literal=truststorekey=<truststore-key> occm-truststore-keystore-secret

    Where:

    • <ocnadd-namespace>: OCNADD management or worker group namespace
    • <keystore-key>: Encryption key used for securing the Keystore storing a Service's certificates and private key
    • <truststore-key>: Encryption key used for securing the Truststore storing the CA certificate or certificate chain
    • occm-truststore-keystore-secret: Name of the secret containing the Keystore and Truststore key
    For example (for management and one worker group):
    
    kubectl create secret generic -n ocnadd-deploy-mgmt --from-literal=keystorekey=keystorepassword --from-literal=truststorekey=truststorepassword occm-truststore-keystore-secret
    kubectl create secret generic -n ocnadd-deploy-wg1 --from-literal=keystorekey=keystorepassword --from-literal=truststorekey=truststorepassword occm-truststore-keystore-secret
  • occm_cacert: This secret stores the CA certificate or CA certificate-chain of the Issuer configured in OCCM.
    $ kubectl create secret generic -n <ocnadd-namespace> --from-file=cacert.pem=<ca-cert-file>.pem occm-ca-secret

    Where:

    • <ocnadd-namespace>: OCNADD management or worker group namespace
    • <ca-cert-file>: Name of the PEM file containing the CA certificate or certificate chain
    • occm-ca-secret: Name of the secret storing the CA certificate or certificate chain
    For example (for management and one worker group):
    
    kubectl create secret generic -n ocnadd-deploy-mgmt --from-file=cacert.pem=<ca-cert-file>.pem occm-ca-secret
    kubectl create secret generic -n ocnadd-deploy-wg1 --from-file=cacert.pem=<ca-cert-file>.pem occm-ca-secret
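
After creating the three secrets, verify that they exist in each OCNADD namespace. For example, using the secret names and namespaces from the commands above:

    kubectl get secret occm-secret occm-truststore-keystore-secret occm-ca-secret -n ocnadd-deploy-mgmt
    kubectl get secret occm-secret occm-truststore-keystore-secret occm-ca-secret -n ocnadd-deploy-wg1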

2.2.2 Installation Tasks

This section describes the tasks that the user must follow for installing OCNADD.

Note:

Before starting the installation tasks, ensure that the Prerequisites and Pre-Installation Tasks are completed.
2.2.2.1 Installing OCNADD Package

This section describes how to install the Oracle Communications Network Analytics Data Director (OCNADD) package.

To install the OCNADD package, perform the following steps:

Create OCNADD Namespace

Create the OCNADD namespace, if not already created. For more information, see Creating OCNADD Namespace.

Generate Certificates

If OCCM is used to create the certificates, follow the steps defined in OCCM Prerequisites for Installing OCNADD.

Otherwise, perform the steps defined in the Configuring SSL or TLS Certificates section to complete the certificate generation.

Update Database Parameters

To update the database parameters, see Configuring OCNADD Database.

Update ocnadd-custom-values-25.1.100.yaml file

Update the ocnadd-custom-values-25.1.100.yaml file (depending on the deployment model) with the required parameters.

For more information on how to access and update the ocnadd-custom-values-25.1.100.yaml files, see Customizing OCNADD.

If OCCM is used to create the certificates, update the Mandatory Parameters specified in Helm Parameter Configuration for OCCM.

Install Helm Chart

OCNADD release 25.1.100 and later releases support fresh deployment in centralized mode only.

OCNADD Installation with Default Worker Group

In 25.1.100, Data Director can be installed with the default worker group in centralized mode.

Note:

No HNS package is required for this installation mode, and only one worker group (the default worker group) is possible.

Deploy Centralized Site with Default Group

To set up the centralized site, create copies of the charts and custom values for both the management group and each worker group from the "ocnadd-package-25.1.100" folder. You can create the copies of the Helm charts folder and custom values file in the following suggested way:

  1. For Management Group: Create a copy of the following files from extracted folder:
    
    # cd ocnadd-package-25.1.100
    # cp -rf ocnadd ocnadd_mgmt
    # cp custom_templates/ocnadd-custom-values-25.1.100.yaml ocnadd-custom-values-mgmt-group.yaml
  2. For Default Worker Group: Create a copy of the following files from extracted folder:
    
    # cp -rf ocnadd ocnadd_default_wg
    # cp custom_templates/ocnadd-custom-values-25.1.100.yaml ocnadd-custom-values-default-wg-group.yaml

Installing Management Group:

  1. Create a namespace for the Management Group if it doesn't exist already. See Creating OCNADD Namespace section.
    For example:
    # kubectl create namespace ocnadd-deploy
  2. Create the certificates for the management group if they were not already created. Use the "non-centralized" option for the certificate generation in "generate_certs.sh". For more information about certificate generation, see Configuring SSL or TLS Certificates.
  3. Modify the ocnadd-custom-values-mgmt-group.yaml created above and update it as below:
    
        global.deployment.centralized: true
        global.deployment.management: true
        global.deployment.management_namespace:ocnadd-deploy        ##---> update it with namespace created in Step 1
        global.cluster.namespace.name:ocnadd-deploy                 ##---> update it with namespace created in Step 1
       
        global.cluster.serviceAccount.name:ocnadd                  ## --> update the ocnadd with namespace created in Step 1
        global.cluster.clusterRole.name:ocnadd                        ## --> update the ocnadd with namespace created in Step 1  
        global.cluster.clusterRoleBinding.name:ocnadd                 ## --> update the ocnadd with namespace created in Step 1 
  4. Install using the "ocnadd_mgmt" Helm charts folder created for the management group:
    helm install <management-release-name> -f ocnadd-custom-values-<mgmt-group>.yaml --namespace <default-deploy-namespace> <helm_chart>

    Where,

    <management-release-name> release name of management group deployment

    <mgmt-group> management custom values file

    <default-deploy-namespace> namespace where management group is deployed

    <helm-chart> helm chart folder of OCNADD

    For example:
    helm install ocnadd-mgmt -f ocnadd-custom-values-mgmt-group.yaml --namespace ocnadd-deploy ocnadd_mgmt

Installing Default Worker Group:

  1. Create the certificates for the worker group if they were not already created during the management group setup. For more information about certificate generation, see Configuring SSL or TLS Certificates and Oracle Communications Network Analytics Suite Security Guide.

    Note:

    For worker group certificate creation, select the same namespace as the one selected for the management group (for example, ocnadd-deploy).
  2. Modify the ocnadd-custom-values-default-wg-group.yaml file created above as follows (see the YAML sketch after this procedure for how these dotted parameters map into the file):
    
        global.deployment.centralized: true
        global.deployment.management: false                             ##---> default is true
        global.deployment.management_namespace:ocnadd-deploy            ##---> update it with namespace created in Step 1
        global.cluster.namespace.name:ocnadd-deploy                     ##---> update it with namespace created in Step 1
     
        global.cluster.serviceAccount.create: true                     ## --> update the parameter to false
        global.cluster.clusterRole.create: true                        ## --> update the parameter to false
        global.cluster.clusterRoleBinding.create: true                     ## --> update the parameter to false
  3. Install using the "ocnadd_default_wg" Helm charts folder created for the default worker group:
    helm install <default-worker-group-release-name> -f ocnadd-custom-values-<default-wg-group>.yaml --namespace <default-deploy-namespace> <helm_chart>

    Where,

    <default-worker-group-release-name> release name of default worker group deployment

    <default-wg-group> default worker group custom values file

    <default-deploy-namespace> namespace where default worker group is deployed

    <helm-chart> helm chart folder of OCNADD

    For example:
    helm install ocnadd-default-wg -f ocnadd-custom-values-default-wg-group.yaml --namespace ocnadd-deploy ocnadd_default_wg
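
For reference, the dotted parameter paths shown in the procedures above correspond to nested keys in the custom values file. The following is a minimal sketch of how the default worker group overrides might appear in ocnadd-custom-values-default-wg-group.yaml; the exact surrounding structure in your file may differ, so treat this as an illustration rather than a fragment to paste verbatim:

    global:
      deployment:
        centralized: true
        management: false
        management_namespace: ocnadd-deploy
      cluster:
        namespace:
          name: ocnadd-deploy
        serviceAccount:
          create: false
        clusterRole:
          create: false
        clusterRoleBinding:
          create: false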

OCNADD Installation with Multiple Worker Groups

This procedure requires support for HNS. See Kubectl HNS Installation section for instructions on installing HNS before proceeding.

To set up the centralized site, create copies of the charts and custom values for both the management group and each worker group from the "ocnadd-package-25.1.100" folder. Follow the below steps:

  1. For Management Group: Create a copy of the following files from extracted folder:
    
    # cd ocnadd-package-25.1.100
    # cp -rf ocnadd ocnadd_mgmt
    # cp custom_templates/ocnadd-custom-values-25.1.100.yaml ocnadd-custom-values-mgmt-group.yaml
  2. For Worker Group: Create a copy of the following files from extracted folder :
    
    # cp -rf ocnadd ocnadd_wg1
    # cp custom_templates/ocnadd-custom-values-25.1.100.yaml ocnadd-custom-values-wg1-group.yaml

Note:

For additional worker groups, repeat this process (for example, for Worker Group 2, create "ocnadd_wg2" and "ocnadd-custom-values-wg2-group.yaml").

Installing Management Group:

  1. Create a namespace for the Management Group if it doesn't exist already. See Creating OCNADD Namespace section.
    For example:
    # kubectl create namespace dd-mgmt-group
  2. Create the certificates for the management group if they were not already created. For more information about certificate generation, see Configuring SSL or TLS Certificates and Oracle Communications Network Analytics Suite Security Guide.
  3. Modify the ocnadd-custom-values-mgmt-group.yaml file as follows:
    
    global.deployment.centralized: true
    global.deployment.management: true
    global.deployment.management_namespace:ocnadd-deploy      ##---> update it with management-group namespace for example dd-mgmt-group
     
    global.cluster.namespace.name:ocnadd-deploy               ##---> update it with management-group namespace for example dd-mgmt-group
    global.cluster.serviceAccount.name:ocnadd                 ## --> update the ocnadd with the management-group namespace for example dd-mgmt-group  
    global.cluster.clusterRole.name:ocnadd                    ## --> update the ocnadd with the management-group namespace for example dd-mgmt-group
    global.cluster.clusterRoleBinding.name:ocnadd             ## --> update the ocnadd with the management-group namespace for example dd-mgmt-group
  4. Install using the "ocnadd_mgmt" Helm charts folder created for the management group:
    helm install <management-release-name> -f ocnadd-custom-values-<mgmt-group>.yaml --namespace <management-group-namespace> <helm_chart> 
    For example:
    helm install ocnadd-mgmt -f ocnadd-custom-values-mgmt-group.yaml --namespace dd-mgmt-group ocnadd_mgmt

Installing Worker Group:

  1. Create a namespace for Worker Group 1 if it doesn't exist already. See Creating OCNADD Namespace section.

    For example:

    # kubectl hns create dd-worker-group1 -n dd-mgmt-group
  2. Create the certificates for the worker group if they were not already created. For more information about certificate generation, see Configuring SSL or TLS Certificates and Oracle Communications Network Analytics Suite Security Guide.
  3. Modify the ocnadd-custom-values-wg1-group.yaml file as follows:
    
        global.deployment.centralized: true
        global.deployment.management: false                             ##---> default is true
        global.deployment.management_namespace:ocnadd-deploy            ##---> update it with management-group namespace for example dd-mgmt-group
        
        global.cluster.namespace.name:ocnadd-deploy                     ##---> update it with worker-group namespace for example dd-worker-group1 
        global.cluster.serviceAccount.name:ocnadd                       ## --> update the ocnadd with the worker-group namespace for example dd-worker-group1
        global.cluster.clusterRole.name:ocnadd                          ## --> update the ocnadd with the worker-group namespace for example dd-worker-group1
        global.cluster.clusterRoleBinding.name:ocnadd                   ## --> update the ocnadd with the worker-group namespace for example dd-worker-group1
     
  4. Install using the "ocnadd_wg1" Helm charts folder created for Worker Group 1:
    helm install <worker-group1-release-name> -f ocnadd-custom-values-<wg1-group>.yaml --namespace <worker-group1-namespace> <helm_chart>
    For example:
    helm install ocnadd-wg1 -f ocnadd-custom-values-wg1-group.yaml --namespace dd-worker-group1 ocnadd_wg1
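
To quickly confirm that the management group and worker group releases are deployed, list the Helm releases in each namespace, for example:

    helm ls -n dd-mgmt-group
    helm ls -n dd-worker-group1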

Note:

For additional worker groups, repeat the "Installing Worker Group" procedure. For instance, for Worker Group 2, replicate the steps accordingly.

Caution:

Do not exit from the helm install command manually. After running the helm install command, it takes some time to install all the services. Do not press Ctrl+C to exit the command while it is in progress; doing so can lead to anomalous behavior.
2.2.2.2 Verifying OCNADD Installation

This section describes how to verify if Oracle Communications Network Analytics Data Director (OCNADD) is installed successfully.

To check the status of OCNADD deployment, perform the following task:
  1. In the case of Helm, run the following command:
    
    helm status <helm-release> -n <namespace>
     
    Example:
    To check dd-management group
      # helm status ocnadd-mgmt -n dd-mgmt-group
     
    To check dd-worker-group
      # helm status ocnadd-wg1 -n dd-worker-group1

    The system displays the status as deployed if the deployment is successful.

  2. Run the following commands to check whether all the services are deployed and active:
    To check management-group:
    watch kubectl get pod,svc -n dd-mgmt-group
    To check worker-group1:
    watch kubectl get pod,svc  -n dd-worker-group1  
    kubectl -n <namespace_name> get services

Note:

  • All microservices status must be Running and Ready.
  • Take a backup of the following files that are required during fault recovery:
    • Updated Helm charts for both management and worker group(s)
    • Updated custom-values for both management and worker group(s)
    • Secrets, certificates, and keys that are used during the installation for both management and worker group(s)
  • If the installation is not successful or you do not see the status as Running for all the pods, perform the troubleshooting steps. For more information, refer to Oracle Communications Network Analytics Data Director Troubleshooting Guide.
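
In addition to the commands above, the following check (a minimal sketch using a standard kubectl field selector) lists only the pods that are not in the Running phase; an empty result for each namespace indicates that all pods have started:

    kubectl get pods -n dd-mgmt-group --field-selector=status.phase!=Running
    kubectl get pods -n dd-worker-group1 --field-selector=status.phase!=Running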
2.2.2.3 Creating OCNADD Kafka Topics

To create OCNADD Kafka topics, see the "Creating Kafka Topic for OCNADD" section of Oracle Communications Network Analytics Data Director User Guide.

2.2.2.4 Installing OCNADD GUI
This section describes how to install Oracle Communications Network Analytics Data Director (OCNADD) GUI using the following steps:

Install OCNADD GUI

The OCNADD GUI is installed along with the OCNADD services.

Configure OCNADD GUI in CNCC

Prerequisite: To configure OCNADD GUI in CNC Console, you must have the CNC Console installed. For information on how to install CNC Console and configure the OCNADD instance, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.

Before installing CNC Console, ensure that the instances parameters are updated with the following details in the occncc_custom_values.yaml file:


  instances:
    - id: Cluster1-dd-instance1
      type: DD-UI
      owner: Cluster1
      ip: 10.xx.xx.xx    #--> give the cluster/node IP
      port: 31456        #--> give the node port of ocnaddgui
      apiPrefix: /<clustername>/<namespace>/ocnadd
    - id: Cluster1-dd-instance1
      type: DD-API
      owner: Cluster1 
      ip: 10.xx.xx.xx   #--> give the cluster/node IP
      port: 32406       #--> give the node port of ocnaddbackendrouter
      apiPrefix: /<clustername>/<namespace>/ocnaddapi

# Applicable only for Manager and Agent core. Used for Multi-Instance-Multi-Cluster Configuration Validation
  validationHook:
    enabled: false   #--> add this enabled: false to validationHook

#--> do these changes under section: cncc-iam attributes
# If https is disabled, this Port would be HTTPS/1.0 Port (secured SSL)
    publicHttpSignalingPort: 30085  #--> CNC console nodeport

#--> add these lines under cncc-iam attributes
# If Static node port needs to be set, then set staticNodePortEnabled flag to true and provide value for staticNodePort
    # Else random node port will be assigned by K8
    staticNodePortEnabled: true
    staticHttpNodePort: 30085  #--> CNC console nodeport
    staticHttpsNodePort: 30053

#--> do these changes under section : manager cncc core attributes
#--> add these lines under mcncc-core attributes

# If Static node port needs to be set, then set staticNodePortEnabled flag to true and provide value for staticNodePort
    # Else random node port will be assigned by K8
    staticNodePortEnabled: true
    staticHttpNodePort: 30075
    staticHttpsNodePort: 30043

#--> do these changes under section : agent cncc core attributes
#--> add these lines under acncc-core attributes
# If Static node port needs to be set, then set staticNodePortEnabled flag to true and provide value for staticNodePort
    # Else random node port will be assigned by K8
    staticNodePortEnabled: true
    staticHttpNodePort: 30076
    staticHttpsNodePort: 30044
If CNC Console is already installed, ensure that it is upgraded with the following parameters updated in the occncc_custom_values.yaml file:
instances:
  - id: Cluster1-dd-instance1
    type: DD-UI
    owner: Cluster1
    fqdn: ocnaddgui.<dd_mgmt_namespace>.svc.<cluster_domain>     #--> update the namespace and cluster domain.
    port: 31456        #--> ocnaddgui port
    apiPrefix: /<clustername>/<namespace>/ocnadd
  - id: Cluster1-dd-instance1
    type: DD-API
    owner: Cluster1 
    fqdn: ocnadduirouter.<dd_mgmt_namespace>.svc.<cluster_domain> #--> Update the namespace and cluster domain
    port: 32406       #--> ocnadduirouter port
    apiPrefix: /<clustername>/<namespace>/ocnaddapi

Example:

If OCNADD GUI is deployed in the occne-ocdd cluster and the ocnadd-deploy namespace, then the prefix in CNC Console occncc_custom_values.yaml will be as follows:
DD-UI apiPrefix: 
/occne-ocdd/ocnadd-deploy/ocnadd 
DD-API apiPrefix: 
/occne-ocdd/ocnadd-deploy/ocnaddapi

Access OCNADD GUI

To access OCNADD GUI, follow the procedure mentioned in the "Accessing CNC Console" section of Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.

2.2.2.5 Adding a Worker Group

Note:

HNS must already be installed before adding a new worker group. If it is not installed, see Kubectl HNS Installation.

Assumptions:

  1. Centralized Site is already deployed with at least one worker group.
  2. Management Group deployment is up and running, example namespace "dd-mgmt-group".
  3. Worker Group namespace which is being added is created, example namespace "dd-worker-group2".
  1. Create the namespace for worker-group2 if not already created. For more information, see Creating OCNADD Namespace.

    For example:

    kubectl hns create dd-worker-group2 -n dd-mgmt-group
  2. Create a copy of the following files from extracted folder:
    
    cp -rf ocnadd ocnadd_wg2
    cp custom_templates/ocnadd-custom-values-25.1.100.yaml ocnadd-custom-values-wg2-group.yaml
  3. Generate certificates for the new worker group according to the "Configuring SSL or TLS Certificates" section and the Oracle Communications Network Analytics Suite Security Guide.
  4. Modify the ocnadd-custom-values-wg2-group.yaml file as follows:
    
        global.deployment.centralized: true     
        global.deployment.management: true                              ##---> Update it to 'false'
        global.deployment.management_namespace:ocnadd-deploy            ##---> update it with management-group namespace for example dd-mgmt-group
       
        global.cluster.namespace.name:ocnadd-deploy                     ##---> update it with worker-group namespace for example dd-worker-group2
        global.cluster.serviceAccount.name:ocnadd                       ## --> update the ocnadd with the worker-group namespace for example dd-worker-group2
        global.cluster.clusterRole.name:ocnadd                          ## --> update the ocnadd with the worker-group namespace for example dd-worker-group2
        global.cluster.clusterRoleBinding.name:ocnadd                   ## --> update the ocnadd with the worker-group namespace for example dd-worker-group2
      
    1. Ensure that only the required NF aggregation is enabled on the Data Director. For example, if the customer intends to use only SCP as the source NF with Data Director, it is recommended to turn off all other NF-specific aggregation instances. The following modifications should be made in <chartpath>/ocnadd/charts/ocnaddaggregation/values.yaml (in this example, the copied ocnadd_wg2 charts folder).
      # The values file should be modified for the particular NF aggregation instance(s). The replica count should be set to 0.
      # For example, if the customer is only using SCP, the aggregation instances for NRF, SEPP, and so on should be updated as suggested below:
       
      ocnaddnrfaggregation:
          autoScaling:
              enabled: true
          name: ocnaddnrfaggregation
          defineEnvVariables: true
          replicas: 1                       ==========> this should be updated to 0
          label:
              name: ocnaddnrfaggregation
       
      # Make the similar modification for the other non-required NF aggregation instances and save the file.
  5. Install using the ocnadd_wg2 Helm charts folder created for the worker group:
    helm install <worker-group2-release-name> -f ocnadd-custom-values-<wg2-group>.yaml --namespace <worker-group2-namespace> <helm_chart>

    For example:

    helm install ocnadd-wg2 -f ocnadd-custom-values-wg2-group.yaml --namespace dd-worker-group2 ocnadd_wg2
  6. To verify the installation of the new worker group:
    # watch kubectl get pod,svc -n dd-worker-group2
  7. Follow the section "Creating OCNADD Kafka Topics" to create topics on the newly added worker group.
  8. Ensure that only the required NF aggregation is enabled on the Data Director. For example, if the customer intends to use only SCP as the source NF with Data Director, it is recommended to turn off all other NF-specific aggregation instances. To disable an unnecessary aggregation instance, either set its minimum and maximum replicas to 0 in the custom values file and perform a Helm upgrade of the worker group deployment, or scale the corresponding deployment down directly (a verification example is shown after this procedure). For example:
    kubectl scale deployment -n dd-worker-group2 ocnaddbsfaggregation --replicas 0
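
To confirm that the non-required aggregation instances are scaled down in the new worker group, list the aggregation deployments and check their replica counts, for example:

    kubectl get deployments -n dd-worker-group2 | grep aggregation

The disabled instances should show 0/0 in the READY column.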
2.2.2.6 Deleting a Worker Group

Assumptions:

  1. Centralized Site is already deployed with at least one worker group.
  2. Management Group deployment is up and running, example namespace "dd-mgmt-group".
  3. Worker groups "worker-group1" and "worker-group2" deployment are up and running, example namespace 'dd-worker-group1' and 'dd-worker-group2'.
  4. Worker group "worker-group2" needs to be deleted.
  1. Clean up the configurations corresponding to the worker group that is being deleted. For example, if it is 'worker-group2':
    1. Delete all the adapter feeds corresponding to worker-group2 from the UI.
    2. Delete all the filters applied to worker-group2 from the UI.
    3. Delete all the correlation configurations applied to worker-group2 from the UI.
    4. Delete all the Kafka feeds corresponding to worker-group2 from the UI.
  2. Run the following command to uninstall the worker group:
    helm uninstall <worker-group2-release-name> -n <worker-group2-namespace>
    For example:
    helm uninstall ocnadd-wg2 -n dd-worker-group2
  3. Delete the worker group namespace (a verification example is shown after this procedure):
    kubectl delete subns <worker-group2-namespace> -n <management-group-namespace>
    For example:
    kubectl delete subns dd-worker-group2 -n dd-mgmt-group
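
To confirm that the worker group has been removed, verify that the subnamespace no longer appears under the management group and that the namespace itself is deleted, for example:

    kubectl hns tree dd-mgmt-group
    kubectl get namespace dd-worker-group2

The deleted worker group should no longer be listed in the tree, and the second command should report that the namespace is not found.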
2.2.2.7 Creating Alarms and Dashboard in OCI

This step is necessary only for the Data Director deployment on the OCI platform. Follow the steps explained in the section 'Creating Alarms and Dashboards in OCI' from the Oracle Communications Network Analytics Data Director User Guide.

2.2.2.8 Adding or Updating Load Balancer IPs in SAN When OCCM is Used

The certificates created by OCCM will not contain any IP values in the SAN field, except the values provided in the global.certificates.occm.san.*.ips parameters in ocnadd-custom-values-25.1.100.yaml for the Kafka broker, ingress adapter, and redundancy agent certificates.

For descriptions of the different Helm parameters, see Helm Parameter Configuration for OCCM.

To add or update the Loadbalancer IPs of these services in SAN, see the steps mentioned in the following sections:

Adding Load Balancer IPs for Kafka

  1. Update the global.certificates.occm.san.kafka.ips in ocnadd-custom-values-25.1.100.yaml of the required worker group.
    global:
      certificates:
        occm:
          san:
            kafka:
              ips: ["10.10.10.10", "10.10.10.11", "10.10.10.12", "10.10.10.13"]              # Add the loadbalancer ip of each Kafka broker services
  2. Run Helm upgrade for the worker group namespace.
    $ helm upgrade <worker-group-release-name> -f <worker-group-custom-values> -n <worker-group-ns> <ocnadd-helm-chart-location>
  3. Update the global.certificates.occm.san.kafka.update_required, global.certificates.occm.san.kafka.uuid.client, and global.certificates.occm.san.kafka.uuid.server in ocnadd-custom-values-25.1.100.yaml of the required worker group.
    global:
      certificates:
        occm:
          san:
            kafka:
              update_required: true                                           # Set to true, default is false
              uuid:
                client: 9138b974-2c89-4c9d-bc5c-0ca82752d50b                  # Provide the UUID value of the certificate KAFKABROKER-SECRET-CLIENT-<namespace> from OCCM, where <namespace> is the Worker group namespace
                server: 5e765aeb-ae1b-426b-8481-f8f3dcdd645e                  # Provide the UUID value of the certificate KAFKABROKER-SECRET-SERVER-<namespace> from OCCM, where <namespace> is the Worker group namespace
  4. Run Helm upgrade for the worker group namespace.
    $ helm upgrade <worker-group-release-name> -f <worker-group-custom-values> -n <worker-group-ns> <ocnadd-helm-chart-location>

    New certificates will be created. Verify them through the OCCM UI. Kafka brokers will also restart after the Helm upgrade is completed and will start using the newly created certificates.
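
To verify that the updated SAN entries are present in the regenerated certificates, the server certificate stored in the Kafka broker secret can be inspected. The following is a minimal sketch; the secret name and the tls.crt data key are assumptions, so adjust them to match the secrets created in your deployment:

    kubectl get secret <kafka-broker-server-secret> -n <worker-group-ns> -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"

The output should list the load balancer IPs added in the global.certificates.occm.san.kafka.ips parameter.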

Adding Load Balancer IP for Redundancy Agent

  1. Update the global.certificates.occm.san.redundancy_agent.ips in ocnadd-custom-values-25.1.100.yaml of the required management group.
    global:
      certificates:
        occm:
          san:
            redundancy_agent:
              ips: ["10.10.10.10"]  # Add the load balancer IP of the redundancy agent service
    
  2. Run Helm upgrade for the management group namespace.
    $ helm upgrade <management-group-release-name> -f <management-group-custom-values> -n <management-group-ns> <ocnadd-helm-chart-location>
  3. Update the global.certificates.occm.san.redundancy_agent.update_required, global.certificates.occm.san.redundancy_agent.uuid.client, and global.certificates.occm.san.redundancy_agent.uuid.server in ocnadd-custom-values-25.1.100.yaml of the required management group.
    global:
      certificates:
        occm:
          san:
            redundancy_agent:
              update_required: true
              uuid:
                client: 9138b974-2c89-4c9d-bc5c-0ca82752d50b  # Provide the UUID value of the certificate REDUNDANCYAGENT-SECRET-CLIENT-ocnadd-mgmt, if ocnadd-mgmt is the management group namespace
                server: 5e765aeb-ae1b-426b-8481-f8f3dcdd645e  # Provide the UUID value of the certificate REDUNDANCYAGENT-SECRET-SERVER-ocnadd-mgmt, if ocnadd-mgmt is the management group namespace
    
  4. Run Helm upgrade for the management group namespace.
    $ helm upgrade <management-group-release-name> -f <management-group-custom-values> -n <management-group-ns> <ocnadd-helm-chart-location>

    New certificates will be created. Verify them through the OCCM UI. The Redundancy Agent will also restart after the Helm upgrade is completed and will start using the newly created certificates.

Adding Load Balancer IPs for Ingress Adapter

  1. Update the global.certificates.occm.san.ingress_adapter.ips in ocnadd-custom-values-25.1.100.yaml of the required worker group.
    global:
      certificates:
        occm:
          san:
            ingress_adapter:
              ips: ["10.10.10.10", "10.10.10.11", "10.10.10.12", "10.10.10.13"]  # Add the load balancer IP of each ingress adapter service
    
  2. Run Helm upgrade for the worker group namespace.
    $ helm upgrade <worker-group-release-name> -f <worker-group-custom-values> -n <worker-group-ns> <ocnadd-helm-chart-location>
  3. Update the global.certificates.occm.san.ingress_adapter.update_required, global.certificates.occm.san.ingress_adapter.uuid.client, and global.certificates.occm.san.ingress_adapter.uuid.server in ocnadd-custom-values-25.1.100.yaml of the required worker group.
    global:
      certificates:
        occm:
          san:
            ingress_adapter:
              update_required: true                                   # Set to true, default is false
              uuid:
                client: 9138b974-2c89-4c9d-bc5c-0ca82752d50b          # Provide the UUID value of the certificate INGRESSADAPTER-SECRET-CLIENT-<namespace> from OCCM, where <namespace> is the Worker group namespace 
                server: 5e765aeb-ae1b-426b-8481-f8f3dcdd645e          # Provide the UUID value of the certificate INGRESSADAPTER-SECRET-SERVER-<namespace> from OCCM, where <namespace> is the Worker group namespace
  4. Run Helm upgrade for the worker group namespace.
    $ helm upgrade <worker-group-release-name> -f <worker-group-custom-values> -n <worker-group-ns> <ocnadd-helm-chart-location>

    New certificates will be created with the new/updated SAN entries. Verify them through the OCCM UI.

  5. Update the global.env.admin.OCNADD_UPGRADE_WG_NS in ocnadd-custom-values-25.1.100.yaml of the required management group with the worker group namespace.
    global:
      env:
        admin:
          OCNADD_UPGRADE_WG_NS: ocnadd-wg-1  # Where ocnadd-wg-1 is the namespace of the ingress adapter service
    
  6. Run Helm upgrade for the management group namespace.
    $ helm upgrade <management-group-release-name> -f <management-group-custom-values> -n <management-group-ns> <ocnadd-helm-chart-location> --set global.env.admin.OCNADD_INGRESS_ADAPTER_UPGRADE_ENABLE=true

2.2.3 Post-Installation Tasks

2.2.3.1 Enabling Two Site Redundancy

This feature is introduced as part of Georedundancy in OCNADD. To enable it, see 'Two Site Redundancy Enable' section in the Oracle Communications Network Analytics Data Director User Guide. It is recommended to enable this feature after completing the deployment of the 24.x.0 release.

2.2.3.2 Enabling Traffic Segregation Using CNLB

This feature is introduced as part of traffic segregation support in OCNADD. To enable it, see 'Enabling or Disabling Traffic Segregation Using CNLB in OCNADD ' section in the Oracle Communications Network Analytics Data Director User Guide. It is recommended to enable this feature after completing the deployment of the 24.x.0 release.