3 Installing ATS for Different Network Analytics Suite Products

This section describes how to install ATS for different Network Analytics Suite products. It includes:

  • Installing ATS for NWDAF
  • Installing ATS for OCNADD

3.1 Installing ATS for NWDAF

This section describes the software and resource requirements and the ATS installation procedures for NWDAF.

3.1.1 Software Requirements

This section describes the software requirements to install ATS for NWDAF. Install the software versions listed in the following table:

Table 3-1 Software Requirements

Software   | Versions
Kubernetes | 1.25.6, 1.26.5, 1.27.5
Helm       | 3.10.3, 3.12.0, 3.12.3
Podman     | 4.2.0, 4.4.1, 4.6.1

Supported CNE versions are: Release 24.1.x, 23.4.x, and 23.3.x.

To verify the CNE version, run the following command:

echo $OCCNE_VERSION

To verify the Helm and Kubernetes versions installed in the CNE, run the following commands:

  • Verify Kubernetes version:
    kubectl version
  • Verify Helm version:
    helm version
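
These checks can be combined into one pass; a minimal shell sketch, assuming a CNE bastion host shell where OCCNE_VERSION is exported:

# Print the CNE release and client/server versions; compare against
# Table 3-1 and the supported CNE releases listed above
echo "CNE: $OCCNE_VERSION"
kubectl version
helm version
podman version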

3.1.2 Environment Setup

This section describes steps to ensure the environment setup facilitates the correct installation of ATS for NWDAF.

Network Access

The Kubernetes cluster hosts must have network access to the following:

  • Local docker image repository, where the OCATS NWDAF images are available.

    To verify that the Kubernetes cluster hosts have network access to the local docker image repository, pull any image from the repository by running the following command:

    docker pull <docker-repo>/<image-name>:<image-tag>

    where <docker-repo> is the IP address or host name of the repository, <image-name> is the Docker image name, and <image-tag> is the image tag used for the NWDAF pod.

  • Local helm repository, where the OCATS NWDAF helm charts are available.

    To verify that the Kubernetes cluster hosts have network access to the local helm repository, run the following command:

    helm repo update
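
    Note that helm repo update only refreshes repositories that are already configured on the host. A minimal sketch that registers the local repository first (the repository name and URL are placeholders, not values from this guide):

    # Register the local helm repository, then refresh its index
    helm repo add <helm-repo-name> http://<helm-repo-host>:<port>
    helm repo update

    # Confirm that the OCATS NWDAF charts are listed
    helm search repo <helm-repo-name>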

Client Machine Requirement

Listed below are the Client Machine requirements for a successful ATS installation for NWDAF:

  • Network access to the Helm repository and Docker image repository.
  • Helm repository must be configured on the client.
  • Network access to the Kubernetes cluster.
  • Environment settings that allow running the kubectl and docker commands, with privileges to create a namespace in the Kubernetes cluster.
  • The Helm client must be installed. The environment should be configured such that the Helm install command deploys the software in the Kubernetes cluster.
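
Two quick checks of these requirements from the client machine (kubectl auth can-i and kubectl cluster-info are standard kubectl subcommands):

# Returns "yes" if the current context is allowed to create namespaces
kubectl auth can-i create namespaces

# Confirms connectivity to the Kubernetes control plane
kubectl cluster-info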

cnDBTier Requirement

NWDAF supports cnDBTier in a vCNE environment. cnDBTier must be up and active in the case of containerized CNE. For more information, see Oracle Communications cnDBTier Installation, Upgrade, and Fault Recovery Guide.

Note:

If the environment has a cnDBTier 23.2.0 installation, follow the instructions below:

  • If the cnDBTier 23.2.0 release is installed, set the ndb_allow_copying_alter_table parameter to 'ON' in the cnDBTier custom values file (dbtier_23.2.0_custom_values_23.2.0.yaml) and perform a cnDBTier upgrade before any install, upgrade, or fault recovery procedure is performed for OCNWDAF. Once the activity is complete, set the parameter back to its default value 'OFF' and perform the cnDBTier upgrade again to apply the parameter change, as sketched after this list.
  • To perform cnDBTier upgrade, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
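
A sketch of the toggle described in the first item, assuming ndb_allow_copying_alter_table appears as a flat key in the custom values file (verify the actual key path in your copy, as the cnDBTier chart may nest it differently):

# dbtier_23.2.0_custom_values_23.2.0.yaml (illustrative excerpt)
ndb_allow_copying_alter_table: 'ON'   # revert to the default 'OFF' once the activity completes

Then apply the change through a cnDBTier upgrade (release, chart, and namespace names are placeholders):

helm upgrade <cndbtier-release> <cndbtier-chart> -f dbtier_23.2.0_custom_values_23.2.0.yaml -n <cndbtier-namespace>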

Oracle Communications Network Analytics Data Director (OCNADD) Requirements

Oracle Communications Network Analytics Data Director (OCNADD) serves as one of the data sources for the OCNWDAF. If OCNADD is configured as a data source, ensure the following prerequisites are met before OCNWDAF installation:

  • OCNADD is set up and running.
  • The ACL feed is enabled on OCNADD as the required data source.
  • Run the OCNWDAF gen_certs script located at /scripts/gen_certs.sh.

Note:

Configure the ACL topic certificate from the OCNADD Kafka Cluster in the OCNWDAF Kafka Cluster to enable secure data flow between OCNADD and OCNWDAF.

For more information, see Oracle Communications Network Analytics Data Director Installation, Upgrade, and Fault Recovery Guide and Oracle Communications Networks Data Analytics Function Installation and Fault Recovery Guide.

Analytics Database

This database is based on the MySQL cluster and stores relational and time-series data. The relational data represents all the objects within the telecommunication network, such as UEs, slices, cells, and NFs, and their relationships with each other. The time-series data represents all the KPIs, measurements, and event data collected over time, which are used in streaming analytics and for training ML models.

Note:

The deployment of the MySQL InnoDB cluster is controlled by the dbConfigStatus variable in the values.yaml file under /helmchart.

For more information, see Oracle Communications Network Analytics Data Director Installation, Upgrade, and Fault Recovery Guide and Oracle Communications Networks Data Analytics Function Installation and Fault Recovery Guide.

3.1.3 Resource Requirements

This section describes the ATS resource requirements for NWDAF.

NWDAF Pods Resource Requirements Details

This section describes the resource requirements, which are needed to deploy NWDAF ATS successfully. The following table describes the total resource usage for:
  • NWDAF Suite
  • NWDAF Notification Consumer Simulator

Table 3-2 NWDAF Pods Resource Requirements Details

Microservice                 | vCPUs per Pod | Memory per Pod (GB) | Storage PVC per Pod (GB) | Replicas (regular) | Replicas (ATS) | Total vCPUs | Total Memory (GB) | Total Storage PVC (GB)
ocn-ats-nwdaf-service        | 4 | 8 | 0 | 1 | 1 | 4 | 8 | 0
ocn-ats-nwdaf-notify-service | 2 | 2 | 0 | 1 | 1 | 2 | 2 | 0

3.1.4 Downloading the ATS Package

Locating and Downloading ATS Images

To locate and download the ATS image from MOS:

  1. Log in to My Oracle Support using the appropriate credentials.
  2. Select the Patches & Updates tab.
  3. In the Patch Search Window, click Product or Family (Advanced).
  4. Enter Oracle Communications Cloud Native Core Network Data Analytics Function in the Product field.
  5. From the Release drop-down, select "Oracle Communications Cloud Native Core Network Data Analytics Function <release_number>", where <release_number> indicates the required release number of OCNWDAF.
  6. Click Search. The Patch Advanced Search Results list appears.
  7. Select the required ATS patch from the list. The Patch Details window appears.
  8. Click Download. The File Download window appears.
  9. Click the <p********_<release_number>_Tekelec>.zip file to download the OCNWDAF ATS package file.
  10. Extract the release package ZIP file. The package is named nwdaf-pkg-<marketing-release-number>.tgz. For example, nwdaf-pkg-24.2.0.0.tgz.
  11. Untar the OCNWDAF package file to the target directory: tar -xvf nwdaf-pkg-<marketing-release-number>.tgz. The OCNWDAF directory has the following package structure:
    # Root
    - images/
        - tar of images
        - sha256 of images
    - troubleshooting/
        - nfDataCapture.sh
    - ocn-nwdaf-helmChart/
        - helmChart/
            - templates
            - charts
            - values.yaml
            - charts.yaml
            - nwdaf-pre-installer.tar.gz
        - simulator-helmChart/
            - templates
            - charts
            - values.yaml
            - charts.yaml
    - nwdaf-ats/
        - ocn-ats-nwdaf-tool/
            - templates
        - ocnwdaf_tests/
            - data/
                - kafka_topic_template
                - perfgo_data
            - features/
                - perfgo
                - regression
                - quality_of_service
            - steps

    Figure 3-1 OCNWDAF Folder Structure

3.1.5 Pushing the Images to Customer Docker Registry

Follow the procedure described below to push the NWDAF ATS docker images to the docker repository:

Prerequisites

  • Oracle Linux 8 environment
  • NWDAF 24.2.0.0.0 package

Note:

The NWDAF deployment package includes:
  • Ready-to-use docker images in the images tar or zip file.
  • Helm charts to help orchestrate Containers in Kubernetes.

The communication between NWDAF service pods is preconfigured in the Helm charts. The NWDAF ATS uses the following services:

Table 3-3 NWDAF ATS Services

Service Name       | Docker Image Name  | Image Tag
ocats-nwdaf        | ocats-nwdaf        | 24.2.0.0.0
ocats-nwdaf-notify | ocats-nwdaf-notify | 24.2.0.0.0

  1. Verify the checksums of the tarballs against the values listed in the Readme.txt file.
  2. Run the following command to extract the contents of the tar file:
    tar -xvf nwdaf-pkg-<marketing-release-number>.tgz

    Or

    To extract the files, run the command:

    unzip  nwdaf-pkg-<marketing-release-number>.zip

    The nwdaf-pkg-<marketing-release-number>.tgz file contains the following NWDAF ATS images:

    • ocats-nwdaf-notify
    • ocats-nwdaf
  3. Navigate to the ./installer folder and extract the zip file by running the following command:
    unzip nwdaf-ats.zip
    The zip folder contains the following files:

    ocn-ats-nwdaf-tool/
        ngnix
        templates
    ocnwdaf_tests/
        data/
            kafka_topic_template
            perfgo_data
        features/
            perfgo
            regression
            quality_of_service
        steps

  4. Run the following command to load the Docker images from the package:
    docker load --input <image_file_name.tar>

    Example:

    docker load --input images
  5. Run the following commands to tag the NWDAF ATS Docker images and push them to the Docker Registry (see the consolidated example after this procedure):
    docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    docker push <docker-repo>/<image-name>:<image-tag>

    where <docker-repo> indicates the repository to which the downloaded images are pushed.

  6. Run the following command to verify that the images are loaded:
    docker images
  7. Run the following command to push the Helm charts to the Helm repository:
    helm cm-push --force <chart name>.tgz <helm repo>
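
Steps 4 through 7, consolidated into one hedged sequence using the image names and tags from Table 3-3 (the registry host registry.example.com:5000 is a placeholder, not a value from this guide; helm cm-push requires the ChartMuseum push plugin to be installed):

docker load --input <image_file_name.tar>

docker tag ocats-nwdaf:24.2.0.0.0 registry.example.com:5000/ocats-nwdaf:24.2.0.0.0
docker push registry.example.com:5000/ocats-nwdaf:24.2.0.0.0

docker tag ocats-nwdaf-notify:24.2.0.0.0 registry.example.com:5000/ocats-nwdaf-notify:24.2.0.0.0
docker push registry.example.com:5000/ocats-nwdaf-notify:24.2.0.0.0

# Verify the images, then push the charts
docker images | grep ocats-nwdaf
helm cm-push --force <chart name>.tgz <helm repo>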

3.1.6 Configuring ATS

This section describes how to configure ATS for NWDAF.

3.1.6.1 Creating and Verifying NWDAF Console Namespaces

This section explains how to create a new namespace or verify an existing namespace in the system.

Run the following command to verify if the required namespace exists in the system:

$ kubectl get namespaces

If the namespace exists, continue with the NWDAF ATS installation. If the required namespace is not available, create a namespace using the following command:

$ kubectl create namespace <required namespace>

For example:

$ kubectl create namespace ocats-nwdaf

Naming Convention for Namespaces:

The namespace name must:

  • start and end with an alphanumeric character
  • contain a maximum of 63 characters
  • contain only alphanumeric characters or '-'

3.1.6.2 Updating values.yaml File

Update the values.yaml file before you deploy NWDAF ATS. To update the values.yaml file:

  1. In the installation package, navigate to root/installer/nwdaf-ats/ocn-ats-nwdaf-tool.
  2. Edit the values.yaml file.

    For example:

    vim values.yaml

    Update the following parameters in the values.yaml file:
    • imageRegistry: Provide the registry where the images are located.
    • imageVersion: Verify that the value is 24.2.0.0.0.

Note:

Ensure that the image registry path is correct, and is pointing to the docker registry where the ATS docker images are located.
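
For illustration, the two fields in values.yaml would look like the following (the registry host is a placeholder, not a value from this guide):

imageRegistry: registry.example.com:5000   # placeholder: your docker registry
imageVersion: 24.2.0.0.0
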
3.1.6.3 Deploying NWDAF ATS in the Kubernetes Cluster

To deploy ATS, perform the following steps:

  1. The values.yaml file is located in the ocn-ats-nwdaf-tool directory. Update the namespace, docker image, or tag in the values.yaml file if required.
  2. Run the following command to deploy the NWDAF ATS and its consumers in the same namespace where NWDAF suite is installed:
    helm install <installation name> <path to the chart directory> -n $K8_NAMESPACE

    For example:

    helm install ocnwdaf-ats ocn-ats-nwdaf-tool/

    Note:

    Ensure NWDAF ATS is installed in the same namespace where the NWDAF suite is installed.
  3. To perform the Helm installation behind a proxy, run the following command:
    helm install <installation name> <path to the chart directory> -n $K8_NAMESPACE
    --set ocatsNwdaf.config.env.JAVA_OPTS="\\-Dhttps\\.proxyHost\\=<proxy_domain>\\ \\-Dhttps\\.proxyPort\\=<proxy_port>\\ \\-Dhttp\\.nonProxyHosts\\=localhost\\,127.0.0.1\\,\\<no_proxy_host>"
     

    For example:

    helm install ocnwdaf-ats ocn-ats-nwdaf-tool/ --set ocatsNwdaf.config.env.JAVA_OPTS="\\-Dhttps\\.proxyHost\\=www-proxy.us.oracle.com\\ \\-Dhttps\\.proxyPort\\=80\\ \\-Dhttp\\.nonProxyHosts\\=localhost\\,127.0.0.1\\,\\.oracle.com\\,\\.oraclecorp\\.com"

    Note:

    Provide the ocatsNwdaf.config.env.JAVA_OPTS field in the proxy configuration. This allows internet access to download the plugins and components required for a successful ATS installation.
  4. Run the following command to verify the ATS deployment status:
    kubectl get deployments -n <namespace_name>
3.1.6.4 Verifying ATS Deployment

Run the following command to verify ATS deployment:

helm status <release_name>

Once ATS is deployed, run the following commands to check the pod and service deployment:

To check pod deployment, run the command:

kubectl get pod -n <namespace_name>

To check service deployment, run the command:

kubectl get service -n <namespace_name>

Once the installation is complete and the services and pods are running, track the progress of the ATS Jenkins preconfiguration process by running the following command:

kubectl exec -it <ats_pod_name> -- tail -f /var/lib/jenkins/.jenkins/jenkins-configure.log

For example:

kubectl exec -it ocats-nwdaf-deploy-787d4f5f84-5vmv5 -- tail -f /var/lib/jenkins/.jenkins/jenkins-configure.log

Wait for the preconfiguration process to complete. The following message is displayed:

Jenkins configuration finish successfully
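
As an optional convenience, kubectl wait (a standard kubectl subcommand) can block until the ATS pod is Ready before you tail the preconfiguration log. The label selector below is a placeholder; check your pod's labels with kubectl get pod --show-labels first:

kubectl wait --for=condition=Ready pod -l app=<ats_app_label> -n <namespace_name> --timeout=300s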

3.2 Installing ATS for OCNADD

The Oracle Communications Network Analytics Data Director (OCNADD) can be installed with or without the TLS (Transport Layer Security) support feature. To enable TLS support, see Enabling TLS Support on ATS and Enable TLS Support on OCNADD ATS before proceeding with the ATS installation.

Installing ATS for OCNADD

Following are the ATS installation procedures for Oracle Communications Network Analytics Data Director (OCNADD):
  1. Resource Requirements
  2. Downloading the ATS Package
  3. Pushing the Images to Customer Docker Registry
  4. Configuring ATS
  5. Deploying ATS and Stub in Kubernetes Cluster
  6. Verifying ATS Deployment

3.2.1 Resource Requirements

This section describes the ATS resource requirements for OCNADD.

Overview - Total Number of Resources

The following table describes the overall resource usage in terms of CPUs and memory for the following:
  • OCNADD SUT
  • ATS

Table 3-4 OCNADD - Total Number of Resources

Resource Name              | CPU | Memory (GB)
OCNADD SUT Totals          | 62  | 340
ATS Totals                 | 14  | 22
Grand Total (OCNADD + ATS) | 76  | 362

OCNADD Pods Resource Requirement Details

This section describes the resource requirements, which are needed to deploy OCNADD ATS successfully.

Table 3-5 OCNADD Pods Resource Requirement Details

OCNADD Service         | CPUs per Pod | Memory per Pod (GB) | Replicas (regular) | Replicas (ATS) | Total CPUs | Total Memory (GB)
ocnaddconfiguration    | 1 | 1  | 1 | 1 | 1  | 1
ocnaddalarm            | 1 | 1  | 1 | 1 | 1  | 1
ocnaddadmin            | 1 | 1  | 1 | 1 | 1  | 1
ocnaddhealthmonitoring | 1 | 1  | 1 | 1 | 1  | 1
ocnadduirouter         | 1 | 1  | 1 | 1 | 1  | 1
ocnaddscpaggregation   | 2 | 2  | 1 | 1 | 2  | 2
ocnaddnrfaggregation   | 2 | 2  | 1 | 1 | 2  | 2
ocnaddseppaggregation  | 2 | 2  | 1 | 1 | 2  | 2
ocnaddadapter          | 3 | 4  | 2 | 2 | 6  | 8
ocnaddkafka            | 6 | 64 | 4 | 4 | 24 | 256
zookeeper              | 1 | 2  | 3 | 3 | 3  | 6
ocnaddgui              | 2 | 1  | 1 | 1 | 2  | 1
ocnaddfilter           | 3 | 3  | 1 | 1 | 3  | 3
ocnaddcorrelation      | 3 | 24 | 1 | 1 | 3  | 24
ocnaddredundancyagent  | 2 | 3  | 1 | 1 | 3  | 3
ocnaddstorageadapter   | 3 | 24 | 1 | 1 | 3  | 24
ocnaddpcfaggregation   | 2 | 2  | 1 | 1 | 2  | 2
ocnaddbsfaggregation   | 2 | 2  | 1 | 1 | 2  | 2
OCNADD SUT Totals      |   |    |   |   | 62 | 340

For more information about OCNADD Pods Resource Requirements, see the "Resource Requirements" section in Oracle Communications Network Analytics Data Director Installation, Upgrade, and Fault Recovery Guide.

ATS Resource Requirement details for OCNADD

This section describes the ATS resource requirements, which are needed to deploy OCNADD ATS successfully.

Table 3-6 ATS Resource Requirement Details

Microservice                                    | CPUs per Pod | Memory per Pod (GB) | Replicas (regular) | Replicas (ATS) | Total CPUs | Total Memory (GB)
ATS Behave                                      | 2  | 1  | 1 | 1 | 2  | 1
OCNADD Producer Stub (SCP, NRF, SEPP, PCF, BSF) | 10 | 20 | 1 | 1 | 10 | 20
OCNADD Consumer Stub                            | 2  | 1  | 1 | 1 | 2  | 1
ATS Totals                                      |    |    |   |   | 14 | 22

3.2.2 Downloading the ATS Package

Locating and Downloading ATS Images

To locate and download the ATS Image from MOS:

  1. Log in to My Oracle Support using the appropriate credentials.
  2. Select the Patches & Updates tab.
  3. In the Patch Search window, click Product or Family (Advanced).
  4. Enter Oracle Communications Network Analytics Data Director in the Product field.
  5. Select Oracle Communications Network Analytics Data Director <release_number> from the Release drop-down.
  6. Click Search. The Patch Advanced Search Results list appears.
  7. Select the required ATS patch from the list. The Patch Details window appears.
  8. Click Download. The File Download window appears.
  9. Click the <p********_<release_number>_Tekelec>.zip file to download the OCNADD ATS package file.
  10. Extract the zip file to access the ATS images. The <p********_<release_number>_Tekelec>.zip directory has the following files:
    ocats-ocnadd-tools-pkg-24.3.0.tgz
    ocats-ocnadd-tools-pkg-24.3.0-README.txt
    ocats-ocnadd-tools-pkg-24.3.0.tgz.sha256
    ocats-ocnadd-custom-configtemplates-24.3.0.zip
    ocats-ocnadd-custom-configtemplates-24.3.0-README.txt
    The ocats-ocnadd-tools-pkg-24.3.0-README.txt file has all the information required for the package. The ocats-ocnadd-tools-pkg-24.3.0.tgz file has the following images and charts packaged as tar files:

    ocats-ocnadd-tools-pkg-24.3.0.tgz
    ├── ocats-ocnadd-pkg-24.3.0.tgz
    │   ├── ocats-ocnadd-24.3.0.tgz (Helm Charts)
    │   ├── ocats-ocnadd-image-24.3.0.tar (Docker Images)
    │   ├── OCATS-ocnadd-Readme.txt
    │   ├── ocats-ocnadd-24.3.0.tgz.sha256
    │   ├── ocats-ocnadd-image-24.3.0.tar.sha256
    │   ├── ocats-ocnadd-data-24.3.0.tgz (ATS test scripts and Jenkins data)
    │   └── ocats-ocnadd-data-24.3.0.tgz.sha256
    └── ocstub-ocnadd-pkg-24.3.0.tgz
        ├── ocstub-ocnadd-24.3.0.tgz (Helm Charts)
        ├── ocstub-ocnadd-image-24.3.0.tar (Docker Images)
        ├── OCSTUB-ocnadd-Readme.txt
        ├── ocstub-ocnadd-24.3.0.tgz.sha256
        └── ocstub-ocnadd-image-24.3.0.tar.sha256

    In addition to the above images and charts, the package contains the ocats-ocnadd-custom-configtemplates-24.3.0.zip file:

    ocats-ocnadd-custom-configtemplates-24.3.0.zip
    ├── ocats-ocnadd-custom-values_24.2.0.yaml (Custom values file for installation)
    └── ocats_ocnadd_custom_serviceaccount_24.2.0.yaml (Template to create custom service account)
  11. Copy the tar file to the CNE, OCI, or Kubernetes cluster where you want to deploy ATS.

3.2.3 Pushing the Images to Customer Docker Registry

Preparing to deploy ATS and Stub Pod in Kubernetes Cluster

To deploy ATS and Stub Pod in Kubernetes Cluster:

  1. Run the following command to extract the tar file content:
    tar -xvf ocats-ocnadd-tools-pkg-24.3.0.tgz 

    The output of this command is:

    ocats-ocnadd-pkg-24.3.0.tgz
    ocstub-ocnadd-pkg-24.3.0.tgz
    ocats-ocnadd-custom-configtemplates-24.3.0.zip
  2. Run the following command to extract the helm charts and docker images of ATS.
    tar -xvf ocats-ocnadd-pkg-24.3.0.tgz 

    The output of this command is:

    ocats-ocnadd-24.3.0.tgz
    
    ocats-ocnadd-24.3.0.tgz.sha256
    ocats-ocnadd-data-24.3.0.tgz
    ocats-ocnadd-data-24.3.0.tgz.sha256
    ocats-ocnadd-image-24.3.0.tar
    ocats-ocnadd-image-24.3.0.tar.sha256
    OCATS-ocnadd-Readme.txt

    Note:

    The OCATS-ocnadd-Readme.txt file has all the information required for the package.
  3. Run the following command to untar the ocstub package.
    tar -xvf ocstub-ocnadd-pkg-24.3.0.tgz 

    The output of this command is:

    ocstub-ocnadd-image-24.3.0.tar
    ocstub-ocnadd-24.3.0.tgz.sha256
    ocstub-ocnadd-image-24.3.0.tar.sha256
    ocstub-ocnadd-24.3.0.tgz
    OCSTUB-ocnadd-Readme.txt
    OCSTUB_OCNADD_Installation_Readme.txt
  4. Run the following command to extract the content of the custom configuration templates:
    unzip ocats-ocnadd-custom-configtemplates-24.3.0.zip

    The output of this command is:

    ocats-ocnadd-custom-values_24.2.0.yaml (Custom yaml file for deployment of OCATS-OCNADD)
    ocats_ocnadd_custom_serviceaccount_24.2.0.yaml (Custom yaml file for service account creation to help the customer if required)
  5. Run the following command in your cluster to load the ATS docker image ocats-ocnadd-image-24.3.0.tar:
    $ podman load -i ocats-ocnadd-image-24.3.0.tar
  6. Run the following commands to tag the imported image and push it to the local registry:
    $ podman tag docker.io/<image-name>:<image-tag> <local_registry>/<image-name>:<image-tag>
    $ podman push <local_registry>/<image-name>:<image-tag>
  7. Run the following command in your cluster to load the Stub docker images from ocstub-ocnadd-image-24.3.0.tar:
    $ podman load -i ocstub-ocnadd-image-24.3.0.tar
  8. Run the following commands to tag each imported image and push it to the local registry:
    $ podman tag docker.io/<image-name>:<image-tag> <local_registry>/<image-name>:<image-tag>
    $ podman push <local_registry>/<image-name>:<image-tag> 
  9. Update the image name and tag in the ocats-ocnadd-custom-values.yaml file and in the ocnaddsimulator/values.yaml file of the simulator Helm chart, as required. In ocats-ocnadd-custom-values.yaml, update 'image.repository' with the respective local_registry.

    In ocnaddsimulator/values.yaml, update 'repo.REPO_HOST_PORT' and 'initContainers.repo.REPO_HOST_PORT' with the respective local_registry. A hedged example follows this procedure.
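
A hedged illustration of step 9, assuming the quoted field paths map directly onto the YAML nesting and using a placeholder registry host:

    # ocats-ocnadd-custom-values.yaml (illustrative excerpt)
    image:
      repository: registry.example.com:5000   # placeholder: your local registry

    # ocnaddsimulator/values.yaml (illustrative excerpt)
    repo:
      REPO_HOST_PORT: registry.example.com:5000
    initContainers:
      repo:
        REPO_HOST_PORT: registry.example.com:5000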

3.2.4 Configuring ATS

3.2.4.1 Enabling Static Port
  1. To enable static port:
    • In the ocats-ocnadd-custom-values.yaml file, under the service section, set the staticNodePortEnabled parameter to 'true' and the staticNodePort parameter to a valid nodePort:
      service:
        customExtension:
          labels: {}
          annotations: {}
        type: LoadBalancer
        ports:
          https:
            port: "8443"
            staticNodePortEnabled: false
            staticNodePort: ""
          http:
            port: "8080"
            staticNodePortEnabled: false
            staticNodePort: ""
        staticLoadBalancerIPEnabled: false
        staticLoadBalancerIP: ""

    Note:

    ATS supports static port. By default, this feature is not enabled.

    Note:

    To enable a static LoadBalancer IP, set the staticLoadBalancerIPEnabled parameter to 'true' and the staticLoadBalancerIP parameter to a valid LoadBalancer IP address. By default, this is set to false.
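
    For example, a sketch with the https port pinned to a static NodePort (30443 is a placeholder within Kubernetes' default NodePort range of 30000-32767):

      service:
        type: LoadBalancer
        ports:
          https:
            port: "8443"
            staticNodePortEnabled: true
            staticNodePort: "30443"
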
3.2.4.2 Enabling TLS Support on ATS

If you wish to enable TLS support on the ATS deployment (irrespective of the OCNADD deployment), update the following parameters to "true" in the ocats-ocnadd-custom-values.yaml file. Otherwise, skip this step.

  • atsGuiTLSEnabled
  • atsCommunicationTLSEnabled

For example:

atsGuiTLSEnabled: false               ## --> update to 'true'
atsCommunicationTLSEnabled: false     ## --> update to 'true'

3.2.5 Deploying ATS and Stub in Kubernetes Cluster

Note:

It is important to ensure that the ATS and Stub are deployed as follows:
  • In Centralized Deployment mode (fresh deployments), the ATS and Stub must be deployed in the Worker Group namespace.
  • In Non-Centralized Deployment mode (when the OCNADD is upgraded from any older release), the ATS and Stub should be deployed in the default OCNADD namespace.
  • When OCNADD is deployed as a two-site georedundant deployment, ATS and Stub should be deployed on both Primary and Secondary sites.
  • OCNADD-ATS suite supports mTLS. To use this feature, ensure that mTLS is set to "true" during OCNADD deployment.

For the test cases to run successfully, ensure that the intraTLSEnabled parameter value in the Jenkins pipeline script is identical to the value in the OCNADD deployment. The Alert Manager does not support HTTPS connections; when intraTLSEnabled is set to true, the following Alert Manager test cases may fail:

  • Verify the OCNADD_CONSUMER_ADAPTER_SVC_DOWN alert
  • Verify the OCNADD_ADMIN_SVC_DOWN alert
  • Verify the OCNADD_NRF_AGGREGATION_SVC_DOWN alert
  • Verify the OCNADD_SCP_AGGREGATION_SVC_DOWN alert
  • Verify the OCNADD_ALARM_SVC_DOWN alert
  • Verify the OCNADD_HEALTH_MONITORING_SVC_DOWN alert

Each of these alerts is raised when the corresponding OCNADD deployment is broken.

ATS and Stub Support Helm Deployment

Choose the namespace in which to deploy the ATS and Stub based on the OCNADD deployment mode; see the note above.

If the namespace does not exist, run the following command to create a namespace:
kubectl create namespace <namespace_name>

Note:

  • It is recommended to use the <release_name> as ocnadd-sim while installing stubs.
  • The ATS deployment with OCNADD does not support the Persistent Volume (PV) feature. Therefore, the default value of the deployment.PVEnabled parameter in ocats-ocnadd-custom-values.yaml must not be changed. By default, the parameter value is set to false.

Deploying ATS:

Run the following command:

helm install <release_name> ocats-ocnadd-24.3.0.tgz --namespace <namespace_name> -f <values-yaml-file>

Example:

helm install ocats ocats-ocnadd-24.3.0.tgz --namespace ocnadd-deploy -f ocats-ocnadd-custom-values.yaml

Deploying Stubs:

Note:

Before you deploy stubs, update the oraclenfproducer.REST_BASED_TRAFFIC parameter to true in the ocnaddsimulator/values.yaml file. The default value of the parameter is false (see the excerpt after the example below).
helm install <release_name> <ocstub-ocnadd-chart> --namespace <namespace_name>

Note:

For more details about installing the stub, refer to the OCSTUB_OCNADD_Installation_Readme.txt file.

Example:

helm install ocnadd-sim ocnaddsimulator --namespace ocnadd-deploy
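
The note above requires oraclenfproducer.REST_BASED_TRAFFIC to be set before the stubs are deployed; a minimal excerpt, assuming the dotted parameter path maps directly onto the YAML nesting:

# ocnaddsimulator/values.yaml (illustrative excerpt)
oraclenfproducer:
  REST_BASED_TRAFFIC: true   # default is false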

3.2.6 Verifying ATS Deployment

Run the following command to verify ATS deployment:

helm status <release_name> -n <namespace>

Once ATS and Stub are deployed, run the following commands to check the pod and service deployment:

To check pod deployment:

kubectl get pod -n ocnadd-deploy

To check service deployment:

kubectl get service -n ocnadd-deploy

3.2.7 Enable TLS Support on OCNADD ATS

If you have enabled TLS support on the ATS deployment, as described in the Enabling TLS Support on ATS section, follow the steps listed below:

Generate .jks File for the Jenkins Server

For Jenkins to support GUI access through HTTPS, a .jks file has to be created. Follow the steps below to create the .jks file:

  1. Generate the Root Certificate (caroot.cer)

    If you have a CA-signed root certificate and key, or your own root certificates, use those certificates.

    If the root certificate (caroot.cer) does not exist, follow the steps below to create a self-signed certificate. The root certificate is added to the Trust Store (of the browser). However, this step is not required for CA-signed certificates.

    1. Follow any browser's documentation, for example:
      • Mozilla Firefox

        • Navigate to the Settings and search for Certificate.
        • Click View Certificates in the search results.
        • The Certificate Manager window appears on the screen.
        • Navigate to the Authorities section and click Import. Upload the caroot certificate.
        • A new window appears on the screen, click Trust Options and then click OK.
        • Save the changes and restart the browser.
      • Google Chrome

        • Navigate to the Settings and search for Security.
        • Click Security in the search results.
        • Locate Manage Device Certificates and click to select the option.
        • The Certificates window appears on the screen. Click the Trusted Root Certification Authorities tab.
        • Import the caroot certificate.
        • Save the changes and restart the browser.
    2. Use the Root Certificate to sign the Application or ATS certificate.
    3. To generate the key caroot.key, run the command openssl genrsa 2048 > caroot.key.
    4. Generate the caroot certificate, for example:
      [cloud-user@star23-bastion-1 edo]$ openssl req -new -x509 -nodes -days 1000 -key caroot.key > caroot.cer
      You are about to be asked to enter information that will be incorporated
      into your certificate request.
      What you are about to enter is what is called a Distinguished Name or a DN.
      There are quite a few fields but you can leave some blank
      For some fields there will be a default value,
      If you enter '.', the field will be left blank.
      -----
      Country Name (2 letter code) [XX]:IN
      State or Province Name (full name) []:KA
      Locality Name (eg, city) [Default City]:BLR
      Organization Name (eg, company) [Default Company Ltd]:ORACLE
      Organizational Unit Name (eg, section) []:CGBU
      Common Name (eg, your name or your server's hostname) []:ocats
      Email Address []:
      [cloud-user@star23-bastion-1 edo]$
  2. Generate Application or Client Certificate

    Follow the steps below to generate an application or client certificate:

    1. Create and edit the ssl.conf file as below:

      In the [alt_names] section, list the IPs through which the ATS GUI is opened. You can add multiple IPs, such as IP.1, IP.2, and so on.

      [ req ]
      default_bits       = 4096
      distinguished_name = req_distinguished_name
      req_extensions     = req_ext
       
      [ req_distinguished_name ]
      countryName                 = Country Name (2 letter code)
      countryName_default         = IN
      stateOrProvinceName         = State or Province Name (full name)
      stateOrProvinceName_default = KN
      localityName                = Locality Name (eg, city)
      localityName_default        = BLR
      organizationName            = Organization Name (eg, company)
      organizationName_default    = ORACLE
      commonName                  = Common Name (e.g. server FQDN or YOUR name)
      commonName_max              = 64
      commonName_default          = ocats.scpsvc.svc.cluster.local
       
      [ req_ext ]
      keyUsage = critical, digitalSignature, keyEncipherment
      extendedKeyUsage = serverAuth, clientAuth
      basicConstraints = critical, CA:FALSE
      subjectAltName = critical, @alt_names
       
      [alt_names]
      IP.1 = 127.0.0.1
      IP.2 = 10.75.217.5
      IP.3 = 10.75.217.76
      DNS.1 = localhost
      DNS.2 = ocats.scpsvc.svc.cluster.local

      To access the GUI with a DNS name, ensure that the commonName_default value is the same as the configured DNS name, in the format <service_name>.<namespace>.svc.cluster.local. You can add multiple DNS entries, such as DNS.1, DNS.2, and so on.

      To support ATS APIs, add 127.0.0.1 as IP.1.

    2. Create a Certificate Signing Request or CSR file, as follows:
      [cloud-user@star23-bastion-1 ocats]$ openssl req -config ssl.conf -newkey rsa:2048 -days 1000 -nodes -keyout rsa_private_key_pkcs1.key > ssl_rsa_certificate.csr
      Ignoring -days; not generating a certificate
      Generating a RSA private key
      ...+++++
      ........+++++
      writing new private key to 'rsa_private_key_pkcs1.key'
      -----
      You are about to be asked to enter information that will be incorporated
      into your certificate request.
      What you are about to enter is what is called a Distinguished Name or a DN.
      There are quite a few fields but you can leave some blank
      For some fields there will be a default value,
      If you enter '.', the field will be left blank.
      -----
      Country Name (2 letter code) [IN]:
      State or Province Name (full name) [KA]:
      Locality Name (eg, city) [BLR]:
      Organization Name (eg, company) [ORACLE]:
      Common Name (e.g. server FQDN or YOUR name) [ocats]:
      [cloud-user@star23-bastion-1 ocats]$
    3. To view all the components of the file, run the command openssl req -text -noout -verify -in ssl_rsa_certificate.csr. Verify all the configurations in the file.
    4. Sign the .csr file with the root certificate, as below:
      [cloud-user@star23-bastion-1 ocats]$ openssl x509 -extfile ssl.conf -extensions req_ext -req -in ssl_rsa_certificate.csr -days 1000 -CA ../caroot.cer -CAkey ../caroot.key -set_serial 04 > ssl_rsa_certificate.crt
      Signature ok
      subject=C = IN, ST = KA, L = BLR, O = ORACLE, CN = ocats
      Getting CA Private Key
      [cloud-user@star23-bastion-1 ocats]$
    5. Verify if the certificate is correctly signed by root certificate, as below:
      [cloud-user@star23-bastion-1 ocats]$ openssl verify -CAfile caroot.cer ssl_rsa_certificate.crt
      ssl_rsa_certificate.crt: OK
  3. Save the application certificate and the root certificates for future use.
  4. Add the caroot.cer certificate as a trusted author in the browser.
  5. The generated application or client certificates cannot be provided directly to the Jenkins server. Follow the steps below:
    • Generate the .p12 keystore file by running the following command:
      [cloud-user@star23-bastion-1 ocats]$ openssl pkcs12 -inkey rsa_private_key_pkcs1.key -in ssl_rsa_certificate.crt -export -out certificate.p12
      Enter Export Password:
      Verifying - Enter Export Password:

      You will receive a password prompt; note the password provided for future use.

    • Convert the .p12 file to a .jks file by running the following command:
      [cloud-user@star23-bastion-1 ocats]$ keytool -importkeystore -srckeystore ./certificate.p12 -srcstoretype pkcs12 -destkeystore jenkinsserver.jks -deststoretype JKS
      Importing keystore ./certificate.p12 to jenkinsserver.jks...
      Enter destination keystore password:
      Re-enter new password:
      Enter source keystore password:
      Entry for alias 1 successfully imported.
      Import command completed:  1 entries successfully imported, 0 entries failed or cancelled

      You will receive a password prompt; provide the same password used for creating the .p12 file.

    • Ensure that both the .p12 and .jks files have the same password.
    • Provide the generated jenkinsserver.jks file to the Jenkins Server.

Enable the ATS GUI with HTTPS during Installation

Follow the procedure below to enable the ATS GUI with HTTPS during installation:

  1. To create the .jks file, see Generate .jks File for the Jenkins Server. You can use either a CA-signed certificate or a self-signed certificate.

  2. Create a Kubernetes secret using the files created in the Generate .jks File for the Jenkins Server procedure. Run the following command:
    kubectl create secret generic ocats-tls-secret --from-file=jenkinsserver.jks --from-file=ssl_rsa_certificate.crt --from-file=rsa_private_key_pkcs1.key --from-file=caroot.cer -n scpsvc

    View the K8s secret, for example:

    [cloud-user@star23-bastion-1 ~]$ kubectl describe secret ocats-tls-secret -n scpsvc
    Name:         ocats-tls-secret
    Namespace:    scpsvc
    Labels:       <none>
    Annotations:  <none>
     
    Type:  Opaque
     
    Data
    ====
    caroot.cer:                        1147 bytes
    ssl_rsa_certificate.crt:           1424 bytes
    jenkinsserver.jks:                 2357 bytes
    rsa_private_key_pkcs1.key:         1675 bytes
  3. After creating the secret, follow the Installing ATS for OCNADD procedure to install the ATS. Once the installation is successfully completed, proceed to the next step.
  4. The ATS GUI opens with HTTPS protocol. The link to open the GUI appears as https://<IP>:<port>.

    For example:

    Figure 3-2 HTTPS Address

  5. In the Mozilla Firefox browser, click the Lock symbol in the address bar; the page information is displayed as shown below:

    Figure 3-3 Page Information