2 Installing OCNADD
This chapter provides information about installing Oracle Communications Network Analytics Data Director (OCNADD) on the supported platforms.
- Oracle Communications Cloud Native Core, Cloud Native Environment (CNE)
- VMware Tanzu Application Platform (TANZU)
Note:
This document describes the OCNADD installation on CNE. The procedure for installation on TANZU is similar; any steps unique to the TANZU platform are called out explicitly in the document.
2.1 Prerequisites
Before installing and configuring OCNADD, make sure that the following requirements are met:
2.1.1 Software Requirements
This section lists the software that must be installed before installing OCNADD:
Table 2-1 Mandatory Software
| Software | Version |
|---|---|
| Kubernetes | 1.26.x, 1.25.x |
| Helm | 3.12.0 |
| Docker/Podman | 4.4.1 |
Note:
OCNADD 23.3.0 supports CNE 23.3.x and 23.2.x.
To check the CNE, Kubernetes, and Helm versions, run the following commands:
echo $OCCNE_VERSION
kubectl version
helm version
Note:
Starting with CNE 1.8.0, Podman is the preferred container platform instead of Docker. For more information on installing and configuring Podman, see the Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) Installation Guide.
If you are installing OCNADD on TANZU, the following software must be installed:
Table 2-2 Mandatory Software
| Software | Version |
|---|---|
| Tanzu | 1.4.1 |
To check the Tanzu version, run the following command:
tanzu version
Note:
Tanzu was supported in release 22.4.0. Release 23.3.0 has not been tested on Tanzu.
Depending on the requirement, you may have to install additional software while deploying OCNADD. The list of additional software items, along with the supported versions and usage, is given in the following table:
Table 2-3 Additional Software
| Software | Version | Required For |
|---|---|---|
| Prometheus-Operator | 2.44.0 | Metrics |
| Metallb | 0.13.7 | LoadBalancer |
| cnDBTier | 23.3.x, 23.2.x | MySQL Database |
Note:
This software is available by default if OCNADD is deployed in Oracle Communications Cloud Native Core, Cloud Native Environment (CNE). If you are deploying OCNADD in any other environment, for instance, TANZU, the above-mentioned software must be installed before installing OCNADD.
To check the software releases already installed through Helm, run the following command:
helm ls -A
2.1.2 Environment Setup Requirements
This section provides information on environment setup requirements for installing Oracle Communications Network Analytics Data Director (OCNADD).
Network Access
The Kubernetes cluster hosts must have network access to the following repositories:
- Local docker image repository – It contains the OCNADD docker images.
To check if the Kubernetes cluster hosts can access the local docker image repository, pull any image with an image-tag, using the following command:
podman pull docker-repo/image-name:image-tag
where,
docker-repo is the IP address or hostname of the docker image repository.
image-name is the docker image name.
image-tag is the tag assigned to the docker image used for the OCNADD pod.
- Local Helm repository – It contains the OCNADD Helm charts.
To check if the Kubernetes cluster hosts can access the local Helm repository, run the following command:
helm repo update
- Service FQDNs or IP addresses of the required OCNADD services, for instance, Kafka brokers, must be publicly exposed and discoverable from outside the cluster, so that ingress messages to OCNADD can arrive from outside Kubernetes.
Client Machine Requirements
Note:
Run all the kubectl and helm commands in this guide on a system determined by the infrastructure and deployment. It can be a client machine, such as a virtual machine, server, local desktop, and so on.
This section describes the requirements for the client machine, that is, the machine used to run the deployment commands.
The client machine must meet the following requirements:
- Network access to the Helm repository and Docker image repository.
- A configured Helm repository.
- Network access to the Kubernetes cluster.
- The environment settings required to run the kubectl, podman, and docker commands. The environment must have privileges to create a namespace in the Kubernetes cluster.
- The Helm client installed with the push plugin. Configure the environment in such a manner that the helm install command deploys the software in the Kubernetes cluster.
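The checks below are a minimal, illustrative sketch of how these client machine prerequisites can be confirmed; they are not part of the official procedure, and all commands are standard kubectl and Helm calls.

```bash
# Sketch only: quick checks that the client machine meets the requirements above.
kubectl cluster-info                 # network access to the Kubernetes cluster
kubectl auth can-i create namespaces # privilege to create namespaces
helm version                         # Helm client is installed
helm plugin list                     # push plugin is available
helm repo list && helm repo update   # Helm repository is configured and reachable
```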
Server or Space Requirements
For information about server or space requirements, see the following documents:
- Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) Installation, Upgrade, and Fault Recovery Guide
- Oracle Communications Network Analytics Data Director Benchmarking Guide
- Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide
OCNADD GUI Requirements
The OCNADD GUI requires access to the following URLs:
- https://static.oracle.com
- https://static-stage.oracle.com
cnDBTier Requirement
OCNADD supports cnDBTier in a CNE environment. cnDBTier must be up and running in the case of a containerized Cloud Native Environment. For more information about the installation procedure, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
Note:
- If cnDBTier 23.2.0 or a later release is installed, set the ndb_allow_copying_alter_table parameter to 'ON' in the cnDBTier custom values file (occndbtier-23.x.x-custom-values.yaml) and perform a cnDBTier upgrade before any install, upgrade, rollback, or fault recovery procedure is performed for OCNADD. Once the activity is completed, set the parameter back to its default value, 'OFF', and perform the cnDBTier upgrade again to apply the parameter change.
- To perform a cnDBTier upgrade, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
Data Director Images
The following table lists Data Director microservices and their corresponding images:
Table 2-4 OCNADD images
| Microservices | Image | Tag |
|---|---|---|
| OCNADD-Configuration | ocnaddconfiguration | 23.3.0 |
| OCNADD-ConsumerAdapter | ocnaddconsumeradapter | 23.3.0 |
| OCNADD-Aggregation | ocnaddnrfaggregation, ocnaddscpaggregation, ocnaddseppaggregation | 23.3.0 |
| OCNADD-Alarm | ocnaddalarm | 23.3.0 |
| OCNADD-HealthMonitoring | ocnaddhealthmonitoring | 23.3.0 |
| OCNADD-Kafka | ocnaddkafkahealthclient | 23.3.0 |
| OCNADD-Admin | ocnaddadminservice | 23.3.0 |
| OCNADD-UIrouter | ocnadduirouter | 23.3.0 |
| OCNADD-GUI | ocnaddgui | 23.3.0 |
| OCNADD-Cache | ocnaddcache | 23.3.0 |
Note:
The service images are prefixed with the OCNADD release name.
2.1.3 Resource Requirements
This section describes the resource requirements to install and run Oracle Communications Network Analytics Data Director (OCNADD).
OCNADD supports various deployment models. Before finalizing the resource requirements, see the OCNADD Deployment Models section. The resource usage and available features vary based on the deployment model selected.
OCNADD Resource Requirements
Table 2-5 OCNADD Resource Requirements
| OCNADD Service | vCPU Req | vCPU Limit | Memory Req (Gi) | Memory Limit (Gi) | Min Replica | Max Replica | Partitions | Topic Name |
|---|---|---|---|---|---|---|---|---|
| ocnaddconfiguration | 1 | 1 | 1 | 1 | 1 | 1 | - | - |
| ocnaddalarm | 1 | 1 | 1 | 1 | 1 | 1 | - | - |
| ocnaddadmin | 1 | 1 | 1 | 1 | 1 | 1 | - | - |
| ocnaddhealthmonitoring | 1 | 1 | 1 | 1 | 1 | 1 | - | - |
| ocnaddscpaggregation | 1 | 2 | 1 | 2 | 1 | 2 | 12 | SCP |
| ocnaddnrfaggregation | 1 | 2 | 1 | 2 | 1 | 1 | 6 | NRF |
| ocnaddseppaggregation | 1 | 2 | 1 | 2 | 1 | 2 | 12 | SEPP |
| ocnaddadapter | 2 | 3 | 3 | 4 | 2 | 13 | 117 | MAIN |
| ocnaddkafka | 2 | 5 | 4 | 48 | 4 | 4 | - | - |
| zookeeper | 1 | 1 | 1 | 2 | 3 | 3 | - | - |
| ocnaddgui | 1 | 2 | 1 | 1 | 1 | 2 | - | - |
| ocnadduirouter | 1 | 2 | 1 | 1 | 1 | 2 | - | - |
| ocnaddcache | 1 | 1 | 22 | 24 | 2 | 2 | - | - |
Note:
For detailed information on the OCNADD profiles, see the "Profile Resource Requirements" section in the Oracle Communications Network Analytics Data Director Benchmarking Guide.
Ephemeral Storage Requirements
Table 2-6 Ephemeral Storage
| Service Name | Ephemeral Storage (min) in Mi | Ephemeral Storage (max) in Mi |
|---|---|---|
| <app-name>-adapter | 200 | 800 |
| ocnaddadminservice | 100 | 200 |
| ocnaddalarm | 100 | 500 |
| ocnaddhealthmonitoring | 100 | 500 |
| ocnaddscpaggregation | 100 | 500 |
| ocnaddseppaggregation | 100 | 500 |
| ocnaddnrfaggregation | 100 | 500 |
| ocnaddconfiguration | 100 | 500 |
| ocnaddcache | 100 | 500 |
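The following command is an illustrative sketch (not part of the official procedure) of how the requests of the deployed OCNADD pods can be listed and compared against the resource tables above; <namespace> is a placeholder for the OCNADD deployment namespace.

```bash
# Illustrative only: list CPU and memory requests of the deployed pods
# so they can be compared against the resource requirement tables above.
kubectl -n <namespace> get pods \
  -o custom-columns=NAME:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu,MEM_REQ:.spec.containers[*].resources.requests.memory
```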
2.2 Installation Sequence
This section provides information on how to install Oracle Communications Network Analytics Data Director (OCNADD). The steps are divided into two categories: pre-installation tasks and installation tasks.
Note:
- It is recommended to follow the steps in the given sequence for preparing and installing OCNADD.
- This is the installation procedure for a standard OCNADD deployment. To install a more secure deployment (for example, adding users, changing passwords, enabling mTLS, and so on), see Oracle Communications Network Analytics Suite Security Guide.
2.2.1 Pre-Installation Tasks
To install OCNADD, perform the preinstallation steps described in this section.
Note:
The kubectl commands may vary based on the platform used for deploying OCNADD. Replace kubectl with the environment-specific command-line tool to configure Kubernetes resources through the kube-api server. The instructions provided in this document are as per the OCCNE version of the kube-api server.
2.2.1.1 Downloading OCNADD Package
To download the Oracle Communications Network Analytics Data Director (OCNADD) package from MOS, perform the following steps:
- Log in to My Oracle Support with your credentials.
- Select the Patches and Updates tab to locate the patch.
- In the Patch Search window, click Product or Family (Advanced).
- Enter "Oracle Communications Network Analytics Data Director" in the Product field, select "Oracle Communications Network Analytics Data Director 23.3.0.0.0" from Release drop-down list.
- Click Search. The Patch Advanced Search Results displays a list of releases.
- Select the required patch from the search results. The Patch Details window opens.
- Click Download. The File Download window appears.
- Click the <p********_<release_number>_Tekelec>.zip file to download the OCNADD package file.
- Extract the zip file to obtain the network function patch on the system where the network function must be installed.
To download the Oracle Communications Network Analytics Data Director package from the edelivery portal, perform the following steps:
- Log in to the edelivery portal with your credentials. The following screen appears:
Figure 2-1 edelivery portal

- Select the Download Package option from the All Categories drop-down list.
- Enter Oracle Communications Network Analytics Data Director in the search bar.
Figure 2-2 Search

- The list of release packages available for download is displayed on the screen. Select the release package you want to download; the package is downloaded automatically.
2.2.1.2 Pushing the Images to Customer Docker Registry
Docker Images
Important:
kubectl commands might vary based on the deployment platform. Replace kubectl with the Kubernetes environment-specific command-line tool to configure Kubernetes resources through the kube-api server. The instructions provided in this document are as per the Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) version of the kube-api server.
The Oracle Communications Network Analytics Data Director (OCNADD) deployment package includes ready-to-use Docker images and Helm charts to help orchestrate containers in Kubernetes. The communication between the pods of OCNADD services is preconfigured in the Helm charts.
The following table lists the Docker images of OCNADD:
Table 2-7 Docker Images for OCNADD
| Service Name | Docker Image Name | Image Tag |
|---|---|---|
| OCNADD-Configuration | ocnaddconfiguration | 2.3.32 |
| OCNADD-ConsumerAdapter | <app-name>-adapter | 2.5.6 |
| OCNADD-Aggregation | ocnaddnrfaggregation, ocnaddscpaggregation, ocnaddseppaggregation | 2.4.4 |
| OCNADD-Alarm | ocnaddalarm | 2.3.6 |
| OCNADD-HealthMonitoring | ocnaddhealthmonitoring | 2.3.6 |
| OCNADD-Kafka | ocnaddkafkahealthclient | 3.5.0:2.0.14 |
| OCNADD-Admin | ocnaddadminservice | 2.6.1 |
| OCNADD-UIRouter | ocnadduirouter | 23.3.0 |
| OCNADD-GUI | ocnaddgui | 23.3.0 |
| OCNADD-Cache | ocnaddcache | 1.3.7 |
| OCNADD-Backup-Restore | ocnaddbackuprestore | 2.0.0 |
Note:
- The service image names are prefixed with the OCNADD release name.
- The above table depicts the default OCNADD microservices and their respective images. However, a few more necessary images are delivered as a part of the OCNADD package; you must push these images along with the default images.
Pushing Docker Images
To push the images to the registry:
- Untar the OCNADD package zip file to retrieve the OCNADD docker image tar file:
tar -xvzf ocnadd-pkg-23.3.0.tar.gz
The directory consists of the following:
  - OCNADD Docker Images file: ocnadd-images-23.3.0.tar
  - Helm file: ocnadd-23.3.0.tgz
  - Readme txt file: Readme.txt
- Run one of the following commands to load the ocnadd-images-23.3.0.tar file:
docker load --input /IMAGE_PATH/ocnadd-images-23.3.0.tar
podman load --input /IMAGE_PATH/ocnadd-images-23.3.0.tar
docker images
podman images
Verify the list of images shown in the output against the list of images shown in Table 2-7. If the lists do not match, reload the image tar file.
- Run one of the following commands to tag each imported image to the registry:
docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
- Run one of the following commands to push the image to the registry:
docker push <docker-repo>/<image-name>:<image-tag>
podman push <docker-repo>/<image-name>:<image-tag>
Note:
It is recommended to configure the docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the docker push command may fail.
- Push the Helm charts to the Helm repository. Run the following command:
helm push <image_name>.tgz <helm_repo>
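For environments with many images, the tag-and-push steps can be scripted. The following is a minimal sketch only, assuming Podman is used; DOCKER_REPO is a placeholder for the customer registry, and the IMAGES list must be extended with the actual image names and tags from Table 2-7.

```bash
#!/usr/bin/env bash
# Illustrative sketch: tag and push the loaded OCNADD images to the customer registry.
DOCKER_REPO="<docker-repo>"                                     # placeholder: registry hostname or IP
IMAGES="ocnaddconfiguration:2.3.32 ocnaddalarm:2.3.6 ocnaddadminservice:2.6.1"  # extend with the images from Table 2-7

for image in ${IMAGES}; do
  podman tag "${image}" "${DOCKER_REPO}/${image}"
  podman push "${DOCKER_REPO}/${image}"
done
```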
2.2.1.3 Creating OCNADD Namespace
This section explains how to verify or create new namespace in the system.
To verify if the required namespace already exists in the system, run the following command:
kubectl get namespaces
If the namespace exists, you may continue with the next steps of installation.
If the required namespace is not available, create a namespace using the following command:
kubectl create namespace <required namespace>
Example:
kubectl create namespace ocnadd-namespace
Naming Convention for Namespaces
While choosing the name of the namespace where you wish to deploy OCNADD, make sure the following requirements are met:
- starts and ends with an alphanumeric character
- contains 63 characters or less
- contains only alphanumeric characters or '-'
Note:
It is recommended to avoid using the prefix kube- when creating a namespace, as this prefix is reserved for Kubernetes system namespaces.
2.2.1.4 Creating Service Account, Role, and Role Binding
This section is optional. It describes how to manually create a service account, role, and rolebinding. It is required only when the customer needs to create these resources manually before installing OCNADD. Skip this section if you choose to have them created by default from the Helm charts.
Note:
The secret(s) should exist in the same namespace where OCNADD is being deployed. This helps to bind the Kubernetes role with the given service account.
Creating Service Account, Role, and RoleBinding
To create the service account, role, and rolebinding:
- Create an OCNADD resource file:
vi <ocnadd-resource-file>
Where, <ocnadd-resource-file> is the name of the resource file.
Example:
vi ocnadd-resource-template.yaml
- Update the ocnadd-resource-template.yaml with release-specific information:
Note:
Update <custom-name> and <namespace> with the respective OCNADD namespace, where <custom-name> can be given as per user preference (preferably use the namespace name to avoid issues during upgrade).
A sample template to update the ocnadd-resource-template.yaml file with is given below:
## Sample template start#
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <custom-name>-sa-ocnadd
  namespace: <namespace>
automountServiceAccountToken: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: <custom-name>-cr
rules:
  - apiGroups: [""]
    resources: ["pods","configmaps","services", "secrets","resourcequotas","events","persistentvolumes","persistentvolumeclaims"]
    verbs: ["*"]
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
  - apiGroups: ["scheduling.volcano.sh"]
    resources: ["podgroups", "queues", "queues/status"]
    verbs: ["get", "list", "watch", "create", "delete", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <custom-name>-crb
roleRef:
  apiGroup: ""
  kind: Role
  name: <custom-name>-cr
subjects:
  - kind: ServiceAccount
    name: <custom-name>-sa-ocnadd
    namespace: <namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <custom-name>-crb-policy
roleRef:
  apiGroup: ""
  kind: ClusterRole
  name: psp:privileged
subjects:
  - kind: ServiceAccount
    name: <custom-name>-sa-ocnadd
    namespace: <namespace>
---
## Sample template end#
$ kubectl -n <namespace> create -f ocnadd-resource-template.yaml
Where, <namespace> is the namespace where OCNADD is deployed.
Example:
$ kubectl -n ocnadd create -f ocnadd-resource-template.yaml
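To confirm that the resources defined in ocnadd-resource-template.yaml were created, a quick check such as the following can be used (illustrative only; the ocnadd namespace is an example):

```bash
# Illustrative only: confirm the service account, role, and rolebindings exist in the namespace.
kubectl -n ocnadd get serviceaccount,role,rolebinding
```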
Note:
Once the global service account is added, users must update the following parameters to false in the ocnadd-custom-values-23.3.0.yaml file; otherwise, installation may fail as a result of creating and deleting custom resource definitions (CRD):
serviceAccount:
  create: false
  name: <custom-name> ## --> Change this to <custom-name> provided in ocnadd-resource-template.yaml above ##
  upgrade: false
clusterRole:
  create: false
  name: <custom-name> ## --> Change this to <custom-name> provided in ocnadd-resource-template.yaml above ##
clusterRoleBinding:
  create: false
  name: <custom-name> ## --> Change this to <custom-name> provided in ocnadd-resource-template.yaml above ##
2.2.1.5 Configuring OCNADD Database
OCNADD microservices use MySQL database to store the configuration and run time data.
The database is managed by the Helm pre-install hook. However, OCNADD requires the database administrator to create an admin user in the MySQL database and provide the necessary permissions to access the databases. The MySQL user and databases must be created before installing OCNADD.
Note:
- If the admin user is already available, update the credentials, such as the username and password (base64 encoded), in ocnadd/templates/ocnadd-secret-hook.yaml.
- If the admin user is not available, create it using the following procedure. Once the user is created, update the credentials for the user in ocnadd/templates/ocnadd-secret-hook.yaml.
Creating Database
- Run the following command to log in to the MySQL pod.
Note:
Use the namespace in which cnDBTier is deployed. For example, the occne-cndbtier namespace is used. The default container name is mysqlndbcluster.
$ kubectl -n occne-cndbtier exec -it ndbmysqld-0 -- bash
To verify all the available containers in the pod, use:
kubectl describe pod/ndbmysqld-0 -n occne-cndbtier
- Run the following command to log in to the MySQL server using the MySQL client:
$ mysql -h 127.0.0.1 -uroot -p
$ Enter password:
- To create an admin user, run the following command:
CREATE USER IF NOT EXISTS '<ocnadd admin username>'@'%' IDENTIFIED BY '<ocnadd admin user password>';
Example:
CREATE USER IF NOT EXISTS 'ocdd'@'%' IDENTIFIED BY 'ocdd';
where ocdd is the admin username and ocdd is the password for the MySQL admin user.
- Run the following command to grant the necessary permissions to the admin user and run the FLUSH command to reload the grant table:
GRANT ALL PRIVILEGES ON *.* TO 'ocdd'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
- Access the ocnadd-secret-hook.yaml from the OCNADD Helm files using the following path: ocnadd/templates/ocnadd-secret-hook.yaml
- Update the following parameters in the ocnadd-secret-hook.yaml with the admin user credentials:
data:
  MYSQL_USER: b2NkZA==
  MYSQL_PASSWORD: b2NkZA==
To generate the base64 encoded user and password from the terminal, run the following command:
echo -n <string> | base64 -w 0
Where, <string> is the admin username or password created in step 3.
For example:
echo -n ocdd | base64 -w 0
b2NkZA==
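As an optional, illustrative check (not part of the official procedure), the privileges granted to the admin user can be listed from the MySQL client; ocdd is the example username used above.

```bash
# Example check only: list the privileges granted to the OCNADD admin user.
mysql -h 127.0.0.1 -uroot -p -e "SHOW GRANTS FOR 'ocdd'@'%';"
```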
Update Database Name
Note:
- By default, the database names are configuration_schema, alarm_schema, and healthdb_schema for the respective services.
- Skip this step if you plan to use the default database names during database creation. If not, change the database names as required.
To update the database names in the Configuration Service, Alarm Service, and Health Monitoring services:
- Access the ocdd-db-resource.sql file from the Helm chart using the following path: ocnadd/ocdd-db-resource.sql
- Update all occurrences of the database name in ocdd-db-resource.sql.
- Update the ocnadd/ocdd-db-resource.sql file and add the REFERENCES grant to ocddAppUsr for the alarm and healthdb schemas. Run the commands below from the folder where the Helm charts for OCNADD are extracted:
sed -i "s/GRANT SELECT, INSERT, CREATE, LOCK TABLES, DELETE, UPDATE, EXECUTE ON healthdb_schema.* TO 'ocddAppUsr'@'%'/GRANT SELECT, INSERT, CREATE, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON healthdb_schema.* TO 'ocddAppUsr'@'%'/g" ocnadd/ocdd-db-resource.sql
sed -i "s/GRANT SELECT, INSERT, CREATE, LOCK TABLES, DELETE, UPDATE, EXECUTE ON alarm_schema.* TO 'ocddAppUsr'@'%'/GRANT SELECT, INSERT, CREATE, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON alarm_schema.* TO 'ocddAppUsr'@'%'/g" ocnadd/ocdd-db-resource.sql
- Verify that the file ocnadd/ocdd-db-resource.sql has the REFERENCES grant added to the GRANT statements as below:
GRANT SELECT, INSERT, CREATE, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON alarm_schema.* TO 'ocddAppUsr'@'%';
GRANT SELECT, INSERT, CREATE, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON healthdb_schema.* TO 'ocddAppUsr'@'%';
Note:
During OCNADD re-installation, all three application databases must be removed manually by running the drop database <dbname>; command.
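The following is a minimal sketch of that cleanup, assuming the default schema names were not changed during database creation; adjust the names if custom schemas were used.

```bash
# Illustrative only: drop the default OCNADD application databases before re-installation.
# Schema names assume the defaults (configuration_schema, alarm_schema, healthdb_schema).
mysql -h 127.0.0.1 -uroot -p -e "DROP DATABASE configuration_schema; DROP DATABASE alarm_schema; DROP DATABASE healthdb_schema;"
```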
2.2.1.6 Configuring Secrets for Accessing OCNADD Database
The secret configuration for the OCNADD database is automatically managed during database creation by the Helm pre-install hook.
2.2.1.7 Configuring IP Network
This section defines OCNADD IP configuration for single stack (either only IPv4 or IPv6) or dual stack supported infrastructure.
- For an IPv4 network, update the following parameters in ocnadd-custom-values.yaml:
global:
  ipConfigurations:
    ipFamilyPolicy: SingleStack
    ipFamilies: ["IPv4"]
- For an IPv6 network, update the following parameters in ocnadd-custom-values.yaml:
global:
  ipConfigurations:
    ipFamilyPolicy: SingleStack
    ipFamilies: ["IPv6"]
Note:
The primary IP family remains fixed once OCNADD is deployed. To change the primary IP family, OCNADD needs to be redeployed.
2.2.1.8 Configuring SSL or TLS Certificates
Note:
Before configuring the SSL/TLS certificates, see the "Customizing CSR and Certificate Extensions" section in the Oracle Communications Network Analytics Suite Security Guide.
Note:
This is a mandatory procedure; perform it before you proceed with the installation.
Caution:
Users should provide their own CAcert.pem and CAkey.pem for generating certificates for the OCNADD SSL or TLS support.
For HTTPS, the certificates must be created before creating the secret files for keys and MySQL database credentials.
Before generating certificates using CACert and CAKey, the Kafka access mode needs to be finalized, and the ssl_certs/default_values/values file must be updated accordingly.
The following access modes are available:
- When the NF producers and OCNADD are in the same cluster
- with external access disabled
- with external access enabled
- When the NF producers and OCNADD are in different clusters
- with LoadBalancer
- with NodePort
Note:
- If the NF producers and OCNADD are deployed in the same cluster, all three ports can be used, that is, 9092 for PLAIN_TEXT, 9093 for SSL, and 9094 for SASL_SSL. However, the 9092 port is non-secure and is therefore not recommended.
- If the NF producers and OCNADD are deployed in different clusters, only the 9094 (SASL_SSL) port is exposed.
- It is recommended to use the individual server IPs in the Kafka bootstrap server list instead of a single service IP like "kafka-broker:9094".
The NF producers and OCNADD are in the same cluster
- With external access disabled
In this mode, the Kafka cluster is not exposed externally. By default, the parameters externalAccess.enabled and externalAccess.autoDiscovery are set to false, therefore no change is needed. The parameters externalAccess.enabled and externalAccess.autoDiscovery are present in the ocnadd-custom-values-23.3.0.yaml file.
The default values of bootstrap-server are given below:
kafka-broker-0.kafka-broker:9093
kafka-broker-1.kafka-broker:9093
kafka-broker-2.kafka-broker:9093
- With external access enabled
Note:
Ensure the procedure mentioned in the section "Helm Install/Upgrade Failure", under "External Kafka Access Enabled", in the Oracle Communications Network Analytics Data Director Troubleshooting Guide is performed before configuring the Kafka external access. These changes are mandatory for external access to work; without them, the installation fails.
This mode can be chosen when the customer wants to have different service names for each Kafka broker. This requires an update of the following parameters in the Kafka section of the ocnadd-custom-values-23.3.0.yaml file:
- externalAccess.type to ClusterIP
- externalAccess.enabled to true
- externalAccess.autoDiscovery to true
The AdvertisedListeners in Kafka will be updated with <kafka-broker-0-external>:<Port>,<kafka-broker-1-external>:<Port> for each respective broker.
Based on the AdvertisedListeners, the client bootstrap server list should be updated. Examples are given below:
kafka-broker-0-external:9094
kafka-broker-1-external:9094
kafka-broker-2-external:9094
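As an illustration of how an NF producer in the same cluster might reference these advertised listeners, a minimal Kafka client configuration sketch is shown below. The security.protocol value is an assumption and must match the listener actually exposed (SSL on 9093 or SASL_SSL on 9094); this is not part of the official procedure.

```bash
# Illustrative sketch only: client bootstrap configuration for the external-access-enabled,
# same-cluster case. SASL_SSL on port 9094 is an assumption that must match the Kafka listener setup.
cat > client.properties <<'EOF'
bootstrap.servers=kafka-broker-0-external:9094,kafka-broker-1-external:9094,kafka-broker-2-external:9094
security.protocol=SASL_SSL
EOF
```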
The NF producers and OCNADD are in different clusters
Note:
Ensure the procedure mentioned in the section "Helm Install/Upgrade Failure", under "External Kafka Access Enabled", in the Oracle Communications Network Analytics Data Director Troubleshooting Guide is performed before configuring the Kafka external access. These changes are mandatory for external access to work; without them, the installation fails.
If the NF producers and OCNADD are in different clusters, either the LoadBalancer or NodePort service type can be used. In both cases, the IP addresses must be updated manually in the kafka-broker section of ssl_certs/default_values/values using the following steps:
With LoadBalancer
- Update the following parameters in the Kafka section of the ocnadd-custom-values-23.3.0.yaml file:
- externalAccess.type to LoadBalancer
- externalAccess.enabled to true
- externalAccess.autoDiscovery to true
- Update based on the LoadBalancer IP type as follows:
  - When static LoadBalancer IPs are used:
    - Update the following parameters in the Kafka section of the ocnadd-custom-values-23.3.0.yaml file:
      - externalAccess.setstaticLoadBalancerIps to 'true'. Default is false.
      - Static IP list in "externalAccess.LoadBalancerIPList", separated with commas.
      For example:
      externalAccess:
        setstaticLoadBalancerIps: true
        LoadBalancerIPList: [10.20.30.40,10.20.30.41,10.20.30.42]
    - Add all the static IPs in ssl_certs/default_values/values under the kafka-broker section.
      Example: for the static IP list "10.20.30.40,10.20.30.41,10.20.30.42"
      [kafka-broker]
      client.commonName=kafka-broker-zk
      server.commonName=kafka-broker
      DNS.1=*.kafka-broker.<nameSpace>.svc.<Cluster-Domain>
      DNS.2=kafka-broker
      DNS.3=*.kafka-broker
      IP.1=10.20.30.40
      IP.2=10.20.30.41
      IP.3=10.20.30.42
  - When a LoadBalancer IP CIDR block is used:
    - Get the available LoadBalancer IP CIDR block using the following command:
      kubectl describe cm -n occne-infra occne-metallb
      For example:
      address-pools:
      - addresses:
        - 10.20.30.0/26
    - Add all the available IPs in ssl_certs/default_values/values under the kafka-broker section.
      Example: for the "10.x.x.0/26" IP range
      [kafka-broker]
      client.commonName=kafka-broker-zk
      server.commonName=kafka-broker
      DNS.1=*.kafka-broker.<nameSpace>.svc.<Cluster-Domain>
      DNS.2=kafka-broker
      DNS.3=*.kafka-broker
      IP.1=10.x.x.1
      IP.2=10.x.x.2
      .
      .
      IP.4=10.x.x.63
With NodePort
Note:
Not supported for CNE 22.4.0 and later releases.
From CNE 22.4.0 onwards, the worker node external IP is not exposed. If the external IPs are explicitly exposed, the NodePort option can be used; otherwise, use the LoadBalancer for external access.
- Update the following parameters in the Kafka section of the ocnadd-custom-values-23.3.0.yaml file:
- externalAccess.type to NodePort
- externalAccess.enabled to true
- externalAccess.autoDiscovery to true
- To get the available list of EXTERNAL-IP node IPs, run the following command:
kubectl get no -o wide
- Add all the node IPs in ssl_certs/default_values/values under the kafka-broker section.
Example: For node IPs 10.20.30.40, 10.20.30.50, and so on.
[kafka-broker]
client.commonName=kafka-broker-zk
server.commonName=kafka-broker
DNS.1=*.kafka-broker.<nameSpace>.svc.<Cluster-Domain>
DNS.2=kafka-broker
DNS.3=*.kafka-broker
IP.1=10.20.30.40
IP.2=10.20.30.50
2.2.1.8.1 Generate Certificates using CACert and CAKey
OCNADD allows the users to provide the CACert and CAKey and generate certificates for all the services by running a predefined script.
- Navigate to the ssl_certs/default_values/values file.
- In the values file, edit the global parameters, CN, and SAN for each service based on the requirement as follows:
Note:
Edit only the values for the global parameters and RootCA common name, and add service blocks for all the services for which a certificate needs to be generated. The values file will be available once the OCNADD Helm files are extracted.
Global Params:
[global]
countryName=<country>
stateOrProvinceName=<state>
localityName=<city>
organizationName=<org_name>
organizationalUnitName=<org_bu_name>
defaultDays=<days to expiry>
Root CA common name (e.g., *.namespace.svc.domainName)
##root_ca
commonName=<rootca_common_name>
Service common name for client and server and SAN. (Make sure to follow exactly the same format and provide an empty line at the end of each service block)
[service-name-1]
client.commonName=client.cn.name.svc1
server.commonName=server.cn.name.svc1
IP=127.0.0.1
DNS.1=localhost

[service-name-2]
client.commonName=client.cn.name.svc2
server.commonName=server.cn.name.svc2
IP = 10.20.30.40
DNS.1 = *.svc2.namespace.svc.domainName

[service-name-3]
client.commonName=client.cn.name.svc3

[service-name-4]
server.commonName=server.cn.name.svc4
IP.1 = 10.20.30.41
IP.2 = 127.0.0.1
DNS.1 = *.svc4.namespace.svc.domainName
DNS.2 = *.svc44.namespace.svc.domainName
##end
- Run the generate_certs.sh script with the following command:
./generate_certs.sh -cacert <path to>/CAcert.pem -cakey <path to>/CAkey.pem
- Select "n" when prompted to create a Certificate Authority (CA).
Do you want to create Certificate Authority (CA)? n
- Copy the CA certificate pem file (as cacert.pem) to the "demoCA" folder and the CA certificate key file (as cakey.pem) to "demoCA/private" if the paths to cacert and cakey were not provided through flags. (The demoCA folder is created by the script in the same path where the script exists.)
cp /path/to/CAcert.pem /path/to/generate_certs_script/demoCA/cacert.pem
cp /path/to/CAkey.pem /path/to/generate_certs_script/demoCA/private/cakey.pem
Note:
Perform this step only if you have not provided the paths to cacert and cakey.
- Select "y" when prompted to use the existing CA to sign the CSR for each service.
Would you like to use existing CA to sign CSR for services? Y
- Enter the password for your CA key.
password: <enter your ca key password>
- Select "y" when prompted to create a CSR for each service.
Create Certificate Signing Request (CSR) for each service? Y
- Select "y" when prompted to sign the CSR for each service with the CA key.
Would you like to sign CSR for each service with CA key? Y
- Select "y" if you would like to create secrets for each service in an existing namespace or "n" if you want to create secrets in a new namespace.
If "n"
a. Would you like to choose any above namespace for creating secrets (y/n) n
b. Enter new Kubernetes Namespace to create: <name of new ns to create>
If "y"
c. Would you like to choose any above namespace for creating secrets (y/n) y
d. Enter new Kubernetes Namespace to create: <name of existing ns>
The certificates are generated for each service and are available in the demoCA/services folder. The secret is created in the namespace specified during the secret creation process.
- Run the following command to check if the secrets are created in the specified namespace.
kubectl get secret -n <namespace>
- Run the following command to describe any secret created by the script.
kubectl describe secret <secret-name> -n <namespace>
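Optionally, a generated service certificate can be checked against the CA before the secrets are used. This is an illustrative sketch only (not part of the official procedure); the exact certificate file location under demoCA/services depends on the service, so the path below is an assumption to adjust.

```bash
# Illustrative only: verify a generated service certificate against the CA certificate.
# Adjust the certificate path to match the actual layout under demoCA/services.
openssl verify -CAfile demoCA/cacert.pem demoCA/services/<service-name>-servercert.pem
```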
2.2.1.8.2 Generate Certificate Signing Request (CSR)
Users can generate the certificate signing request for each of the services using the OCNADD script, and then use the generated CSRs to generate the certificates using their own certificate signing mechanism (external CA server, HashiCorp Vault, or Venafi).
Perform the following procedure to generate the CSR:
- Navigate to the ssl_certs/default_values/values file.
- Edit the global parameters, CN, and SAN for each service based on the requirement.
Note:
Edit only the values for the global parameters and RootCA common name, and add service blocks for all the services for which a certificate needs to be generated.
a. Global Params:
[global]
countryName=<country>
stateOrProvinceName=<state>
localityName=<city>
organizationName=<org_name>
organizationalUnitName=<org_bu_name>
defaultDays=<days to expiry>
b. Root CA common name (e.g., *.namespace.svc.domainName)
##root_ca
commonName=<rootca_common_name>
c. Service common name for client and server and SAN. (Make sure to follow exactly the same format and provide an empty line at the end of each service block)
[service-name-1]
client.commonName=client.cn.name.svc1
server.commonName=server.cn.name.svc1
IP=127.0.0.1
DNS.1=localhost

[service-name-2]
client.commonName=client.cn.name.svc2
server.commonName=server.cn.name.svc2
IP = 10.20.30.40
DNS.1 = *.svc2.namespace.svc.domainName

[service-name-3]
client.commonName=client.cn.name.svc3

[service-name-4]
server.commonName=server.cn.name.svc4
IP.1 = 10.20.30.41
IP.2 = 127.0.0.1
DNS.1 = *.svc4.namespace.svc.domainName
DNS.2 = *.svc44.namespace.svc.domainName
##end
- Run the generate_certs.sh script with the --gencsr or -gc flag:
./generate_certs.sh --gencsr
- Navigate to the CSRs and keys in demoCA/services (separate for client and server). The CSRs can be signed using your own certificate signing mechanism, and the certificates should be generated.
- Make sure that the certificate and key naming is in the following format if the service is acting as a client, a server, or both:
For Client: servicename-clientcert.pem and servicename-clientprivatekey.pem
For Server: servicename-servercert.pem and servicename-serverprivatekey.pem
- Copy the certificates into the respective demoCA/services folder after the certificates are generated for each service by signing the CSRs with your own CA key. The certificates should be separate for client and server, as their CSRs are generated separately.
- Run generate_certs.sh with the cacert path and the --gensecret or -gs flag to generate secrets:
./generate_certs.sh -cacert /path/to/cacert.pem --gensecret
- Enter "y" to continue generating secrets.
Would you like to continue to generate secrets? (y/n) y
- Select "y" if you want to create secrets for each service in an existing namespace or "n" if you want to create secrets in a new namespace.
If "n"
> Would you like to choose any above namespace for creating secrets (y/n) n
> Enter new Kubernetes Namespace to create: <name of new ns to create>
If "y"
> Would you like to choose any above namespace for creating secrets (y/n) y
> Enter new Kubernetes Namespace to create: <name of existing ns>
The secret is created in the namespace specified during the secret creation process.
- Run the following command to check if the secrets are created in the specified namespace:
kubectl get secret -n <namespace>
- Run the following command to describe any secret created by the script:
kubectl describe secret <secret-name> -n <namespace>
2.2.1.8.3 Generate Certificates and Private Keys
Users can generate the certificates and private keys for all the required services, and then create Kubernetes secrets without using the OCNADD script.
Perform the following procedure to generate the certificates and private keys:
- Run the openssl command to generate a CSR for each service (separate for client and server, if required).
  - Run the following command to generate a private key:
  openssl req -x509 -nodes -sha256 -days 365 -newkey rsa:2048 -keyout rsa_private_key -out rsa_certificate.crt
  - Run the following command to convert the private key to pem:
  openssl rsa -in rsa_private_key -outform PEM -out rsa_private_key_pkcs1.pem
  - Update CN, SAN, and global parameters for each service in the openssl.cnf file.
  - Run the following command to generate the CSR for each service using the private key:
  openssl req -new -key rsa_private_key -out service_name.csr -config ssl.conf
- Sign each service CSR with the Root CA private key to generate certificates. A signing sketch is provided after this procedure.
- Generate secrets using each service's certificates and keys.
  - Run the following command to create the truststore and keystore password files:
  echo "<password>" >> trust.txt
  echo "<password>" >> key.txt
  - Run the following command to create secrets using the client and server certificates and cacert:
  kubectl create secret generic <service_name>-secret --from-file=path/to/cert/<service_name>-clientprivatekey.pem --from-file=path/to/cert/<service_name>-clientcert.pem --from-file=path/to/cacert/cacert.pem --from-file=path/to/cert/<service_name>-serverprivatekey.pem --from-file=path/to/cert/<service_name>-servercert.pem --from-file=trust.txt --from-file=key.txt --from-literal=javakeystorepass=changeit -n <namespace>
Note:
Repeat Steps 1 and 2 for all services (separate for client and server).
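The following is a minimal, hypothetical sketch of the CSR-signing step referenced in the procedure above. It assumes the CA files are named cacert.pem and cakey.pem and uses a 365-day validity; adapt file names, validity, and extensions to your own CA practices.

```bash
# Illustrative sketch only: sign a service CSR with the root CA key to produce a server certificate.
# File names follow the naming convention described above; validity period and digest are assumptions.
openssl x509 -req -in <service_name>.csr \
  -CA cacert.pem -CAkey cakey.pem -CAcreateserial \
  -out <service_name>-servercert.pem -days 365 -sha256
```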
2.2.2 Installation Tasks
Note:
Before starting the installation tasks, ensure that the Prerequisites and Pre-Installation Tasks are completed.
2.2.2.1 Installing OCNADD Package
This section describes how to install the Oracle Communications Network Analytics Data Director (OCNADD) package.
To install the OCNADD package, perform the following steps:
Create OCNADD Namespace
kubectl create ns <dd-namespace-name>
For more information, see Creating OCNADD Namespace.
Generate Certificates
- Run the following commands to generate certificates:
Change directory to <chart_path>/ssl_certs, and update the file permissions as below:
$ chmod 775 generate_certs.sh
$ chmod 775 generate_secrets.sh
(optional) Clean up the Windows end-of-line characters if the files were copied from Windows:
sed -i -e 's/\r$//' default_values/values
sed -i -e 's/\r$//' template/ca_openssl.cnf
sed -i -e 's/\r$//' template/services_server_openssl.cnf
sed -i -e 's/\r$//' template/services_client_openssl.cnf
sed -i -e 's/\r$//' generate_certs.sh
sed -i -e 's/\r$//' generate_secrets.sh
Note:
Make sure that the changes made in default_values reflect the namespace and cluster as described in the Configuring SSL or TLS Certificates section. For more information on the certificate generation process, see Configuring SSL or TLS Certificates.
Update Database Parameters
To update the database parameters, see Configuring OCNADD Database.
Update ocnadd-custom-values-23.3.0.yaml file
Update the ocnadd-custom-values-23.3.0.yaml (depending on the type of deployment model) with the required
parameters. For more information on how to access and update the ocnadd-custom-values-23.3.0.yaml files, see Customizing OCNADD.
Configure OCNADD Backup Cronjob
- Configure the mysqlNameSpace and storageClass details in ocnadd-custom-values-23.3.0.yaml:
cluster:
  secret:
    name: db-secret
  mysqlNameSpace:
    name: occne-cndbtierone #---> the namespace in which cnDBTier is deployed
  mysqlPod: ndbmysqld-0 #---> the pod can be ndbmysqld-0 or ndbmysqld-1 based on the cnDBTier deployment
  storageClass: standard #---> Update the "storageClassName" with the respective storage class name in the case of deployment on the Tanzu platform. For example, "zfs-storage-policy"
ocnadd-custom-values-23.3.0.yamlfile.The values for BACKUP_DATABASES can be set to ALL, which includes healthdb_schema, configuration_schema, and alarm_schema, or to the individual database names. The values for BACKUP_ARG can be set to ALL, DB or KAFKA. By default, the value is as ALL. PURGE_DAYS sets the backup retention period. The default value is 7 days.
Example:
ocnaddbackuprestore: ocnaddbackuprestore: name: ocnaddbackuprestore env: BACKUP_STORAGE: 20Gi BACKUP_CRONEXPRESSION: "0 8 * * *" BACKUP_DATABASES: ALL BACKUP_ARG: ALLOnce the deployment is successful, the cronjob is spawned based on the CRONEXPRESSION mentioned in the
For more information on backup and restore, see Fault Recovery.ocnadd-custom-values-23.3.0.yamlfile.
Install Helm Chart
Run any of the following helm install commands:
- In the case of Helm 2:
helm install <helm-repo> --name <deployment_name> -f ocnadd-custom-values-23.3.0.yaml --namespace <namespace_name> --version <helm_version>
- In the case of Helm 3:
helm3 install <release name> -f ocnadd-custom-values-23.3.0.yaml --namespace <namespace> <helm-repo>/chart_name --version <helm_version>
- In case the charts are extracted and Helm is used:
helm install <release name> -f ocnadd-custom-values-23.3.0.yaml --namespace <namespace> <helm_chart>
Where:
helm_chart is the location of the Helm chart extracted from the ocnadd-23.3.0.tgz file.
namespace is the deployment namespace used by the Helm command.
Note:
The release_name should not exceed the 63-character limit.
Example:
helm install ocnadd-23.3.0 -f ocnadd-custom-values-23.3.0.yaml --namespace ocnadd-deploy ocnadd
Caution:
Do not exit from the helm install command manually. After running the helm install command, it takes some time to install all the services. In the meantime, do not press Ctrl+C to come out of the command, as this can lead to anomalous behavior.
Note:
You can verify the installation while the install command is running by entering the following command on a separate terminal:
watch kubectl get jobs,pods -n release_namespace
2.2.2.2 Verifying OCNADD Installation
This section describes how to verify if Oracle Communications Network Analytics Data Director (OCNADD) is installed successfully.
- In the case of Helm, run one of the following commands:
helm status <helm-release> -n <namespace>
Example:
helm list -n ocnadd
The system displays the status as deployed if the deployment is successful.
- Run the following command to check whether all the services are deployed and active:
kubectl -n <namespace_name> get services
Run the following command to check whether all the pods are up and active:
kubectl -n <namespace_name> get pods
Example:
kubectl -n ocnadd get pods
kubectl -n ocnadd get services
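As a quick illustrative check (not part of the official procedure), any pods that are not yet Running can be listed with a field selector; the ocnadd namespace is the example used above.

```bash
# Illustrative only: list any OCNADD pods that are not yet in the Running phase.
kubectl -n ocnadd get pods --field-selector=status.phase!=Running
```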
Note:
- All microservices status must be Running and Ready.
- Take a backup of the following files that are required during fault recovery:
- Updated Helm charts
- Secrets, certificates, and keys that are used during the installation
- If the installation is not successful or you do not see the status as Running for all the pods, perform the troubleshooting steps. For more information, refer to Oracle Communications Network Analytics Data Director Troubleshooting Guide.
2.2.2.3 Creating OCNADD Kafka Topics
To create OCNADD Kafka topics, see the "Creating Kafka Topic for OCNADD" section of the Oracle Communications Network Analytics Data Director User Guide.
2.2.2.4 Installing OCNADD GUI
Install OCNADD GUI
The OCNADD GUI gets installed along with the OCNADD services.
Configure OCNADD GUI in CNCC
Prerequisite: To configure OCNADD GUI in CNC Console, you must have the CNC Console installed. For information on how to install CNC Console and configure the OCNADD instance, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.
Before installing CNC Console, ensure that the instances parameters are updated with the following details in the occncc_custom_values.yaml file:
instances:
- id: Cluster1-dd-instance1
type: DD-UI
owner: Cluster1
ip: 10.xx.xx.xx #--> give the cluster/node IP
port: 31456 #--> give the node port of ocnaddgui
apiPrefix: /<clustername>/<namespace>/ocnadd
- id: Cluster1-dd-instance1
type: DD-API
owner: Cluster1
ip: 10.xx.xx.xx #--> give the cluster/node IP
port: 32406 #--> give the node port of ocnaddbackendrouter
apiPrefix: /<clustername>/<namespace>/ocnaddapi
# Applicable only for Manager and Agent core. Used for Multi-Instance-Multi-Cluster Configuration Validation
validationHook:
enabled: false #--> add this enabled: false to validationHook
#--> do these changes under section: cncc-iam attributes
# If https is disabled, this Port would be HTTPS/1.0 Port (secured SSL)
publicHttpSignalingPort: 30085 #--> CNC console nodeport
#--> add these lines under cncc-iam attributes
# If Static node port needs to be set, then set staticNodePortEnabled flag to true and provide value for staticNodePort
# Else random node port will be assigned by K8
staticNodePortEnabled: true
staticHttpNodePort: 30085 #--> CNC console nodeport
staticHttpsNodePort: 30053
#--> do these changes under section : manager cncc core attributes
#--> add these lines under mcncc-core attributes
# If Static node port needs to be set, then set staticNodePortEnabled flag to true and provide value for staticNodePort
# Else random node port will be assigned by K8
staticNodePortEnabled: true
staticHttpNodePort: 30075
staticHttpsNodePort: 30043
#--> do these changes under section : agent cncc core attributes
#--> add these lines under acncc-core attributes
# If Static node port needs to be set, then set staticNodePortEnabled flag to true and provide value for staticNodePort
# Else random node port will be assigned by K8
staticNodePortEnabled: true
staticHttpNodePort: 30076
staticHttpsNodePort: 30044
occncc_custom_values.yaml file:
instances:
- id: Cluster1-dd-instance1
type: DD-UI
owner: Cluster1
ip: 10.xx.xx.xx #--> update the cluster/node IP
port: 31456 #--> ocnaddgui port
apiPrefix: /<clustername>/<namespace>/ocnadd
- id: Cluster1-dd-instance1
type: DD-API
owner: Cluster1
ip: 10.xx.xx.xx #--> update the cluster/node IP
port: 32406 #--> ocnaddbackendrouter port
apiPrefix: /<clustername>/<namespace>/ocnaddapi
Example:
The apiPrefix entries in occncc_custom_values.yaml will be as follows:
DD-UI apiPrefix: /occne-ocdd/ocnadd-deploy/ocnadd
DD-API apiPrefix: /occne-ocdd/ocnadd-deploy/ocnaddapi
Access OCNADD GUI
To access OCNADD GUI, follow the procedure mentioned in the "Accessing CNC Console" section of Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.