2 CNC Console Installation Preparation

Prerequisites

Before installing Oracle Communications CNC Console, make sure that the following requirements are met:

Software Requirements

This section lists the minimum software requirements to install Oracle Communications CNC Console.

The CNC Console software includes:

  • CNC Console Helm charts
  • CNC Console docker images

Install the following software before installing CNC Console:

Table 2-1 Pre-installed Software

Software     Version
Kubernetes   1.23.x, 1.22.x, 1.21.x
HELM         v3.1.2, v3.5.0, v3.6.3, v3.8.0
Podman       v2.2.1, v3.2.3, v3.3.1

Oracle Communications Cloud Native Environment Requirement

CNC Console supports Oracle Communications Cloud Native Environment (OCCNE). To check the OCCNE version, run the following command:
echo $OCCNE_VERSION
To check the current Helm and Kubernetes versions installed in the OCCNE, run the following commands:
 kubectl version
 helm version 

The following common services must be deployed as per the requirement:

Table 2-2 Common Services

Software                        Chart Version   Required For
elasticsearch                   7.9.3           Logging
elastic-curator                 5.5.4           Logging
elastic-exporter                1.1.0           Logging
elastic-master                  7.9.3           Logging
logs                            3.1.0           Logging
kibana                          7.9.3           Logging
grafana                         7.5.11          Metrics
prometheus                      2.32.1          Metrics
prometheus-kube-state-metrics   1.9.7           Metrics
prometheus-node-exporter        1.0.1           Metrics
metallb                         0.12.1          External IP
metrics-server                  0.3.6           Metric Server
tracer                          1.12.0          Tracing

Note:

The common services are available if the CNC Console is deployed in the Oracle Communications Cloud Native Environment (OCCNE). If you are deploying CNC Console in any other environment, the common services must be installed before installing the CNC Console.
To check the installed software items, run the following command:
helm3 ls -A
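For example, to confirm that a specific common service listed in Table 2-2 is installed, the listing can be filtered (illustrative):
helm3 ls -A | grep prometheus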

Environment Setup Requirements

This section provides information on environment setup requirements for installing CNC Console.

Network Access

The Kubernetes cluster hosts must have network access to:

  • Local docker image repository, where the CNC Console images are available.
    To check whether the Kubernetes cluster hosts have network access to the local docker image repository, try to pull any image with its tag name by running the following command (a worked example with illustrative values follows this list):
    docker pull <docker-repo>/<image-name>:<image-tag>
    where:

    docker-repo is the IP address or host name of the repository.

    image-name is the docker image name.

    image-tag is the tag of the image used for the CNC Console pod.

  • Local helm repository, where the CNC Console helm charts are available.
    To check if the Kubernetes cluster hosts have network access to the local helm repository, run the following command:
    helm repo update
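
For example, with an illustrative repository host and an image name from the CNC Console package (replace them with values from your environment):
    docker pull occne-repo-host:5000/occncc/cncc-apigateway:22.3.2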

Note:

All kubectl and helm commands used in this document must be run on a system that depends on the infrastructure of the deployment. It can be a client machine such as a VM, server, or local desktop.

Client Machine Requirement

The client machine must meet the following requirements:
  • Network access to the helm repository and docker image repository.
  • Helm repository must be configured on the client.
  • Network access to the Kubernetes cluster.
  • Necessary environment settings to run the kubectl and docker commands. The environment should have privileges to create a namespace in the Kubernetes cluster.
  • Helm client must be installed. The environment should be configured so that the helm install command deploys the software in the Kubernetes cluster.
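
A minimal way to verify these requirements from the client machine, assuming kubectl, helm, and docker are already installed (illustrative commands):
 # Confirm network access to the Kubernetes cluster and the client version
 kubectl version
 # Confirm the privilege to create a namespace in the cluster
 kubectl auth can-i create namespaces
 # Confirm the Helm client and the configured helm repositories
 helm version
 helm repo list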

Server or Space Requirement

For information on the server or space requirements, see the Oracle Communications Cloud Native Environment (OCCNE) Installation Guide.

OSO Requirement

CNC Console supports OSO for common operation services (Prometheus and components such as Alertmanager and Pushgateway) on a Kubernetes cluster that does not have these common services. For more information on the installation procedure, see Oracle Communications OSO Installation Guide.

cnDBTier Requirement

CNC Console supports cnDBTier in a vCNE environment. cnDBTier must be up and active in the case of a containerized CNE. For more information on the installation procedure, see Oracle Communications cnDBTier Installation Guide.
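
To confirm that cnDBTier is up before proceeding, its pods can be checked; the namespace below is a placeholder, substitute the namespace used in your deployment:
kubectl get pods -n <cndbtier-namespace>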

CNC Console Resource Requirement

This section includes information about CNC Console Resource Requirement.

CNC Console Deployment Resource Usage

Resource usage for CNC Console Single Cluster and Multi Cluster deployment is listed in the following tables.

CNC Console Single Cluster Deployment Resource Usage

A single cluster deployment includes the M-CNCC IAM, M-CNCC Core, and A-CNCC Core components.

CNC Console Common Resource is a common resource needed for manager or agent deployment.

Table 2-3 CNC Console Single Cluster Deployment Resource Usage

Component              CPU Limit   Memory Limit (Gi)   CPU Request   Memory Request (Gi)
M-CNCC IAM             7.5         7.5                 3.8           3.8
M-CNCC Core            7           7                   3.5           3.5
A-CNCC Core            7           7                   3.5           3.5
CNCC Common Resource   3           4                   1.5           2
Total                  24.5        25.5                12.3          12.8

Formula

Total Resource = M-CNCC IAM Resource + M-CNCC Core Resource + A-CNCC Core Resource + CNCC Common Resource
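
For example, applying the formula to the CPU limit column of Table 2-3: 7.5 (M-CNCC IAM) + 7 (M-CNCC Core) + 7 (A-CNCC Core) + 3 (CNCC Common Resource) = 24.5 CPUs.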

CNC Console Multi Cluster Deployment Resource Usage

A multi cluster deployment includes the M-CNCC IAM and M-CNCC Core components in the Manager cluster. The A-CNCC Core component is also deployed in the Manager cluster if there is a local NF.

A-CNCC Core is needed in each Agent cluster for managing local NF. CNCC Common Resource is a common resource needed for manager or agent deployment.

Table 2-4 CNC Console Multi Cluster Deployment Resource Usage

Component                         CPU Limit   Memory Limit (Gi)   CPU Request   Memory Request (Gi)
M-CNCC IAM                        7.5         7.5                 3.8           3.8
M-CNCC Core                       7           7                   3.5           3.5
A-CNCC Core                       7           7                   3.5           3.5
CNCC Common Resource              3           4                   1.5           2
*No Of Agents In Other Clusters   2
Total                             37.5        40.5                18.8          20.3

* Assumed number of Agents (A-CNCC Core deployments) for the calculation

Formula

Total Resource = M-CNCC IAM Resource + M-CNCC Core Resource + Common Resources + (No Of Agents In Other Clusters * (CNCC Common Resource + A-CNCC Core Resource))
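
For example, with two agents in other clusters as assumed in Table 2-4, the CPU limit works out to 7.5 (M-CNCC IAM) + 7 (M-CNCC Core) + 3 (CNCC Common Resource) + 2 x (3 + 7) = 37.5 CPUs, and the memory limit to 7.5 + 7 + 4 + 2 x (4 + 7) = 40.5 Gi.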

CNC Console Manager Only Deployment

The following table shows the resource requirement for a manager only deployment. In this case, the agent is deployed in a separate cluster.

Component              CPU Limit   Memory Limit (Gi)   CPU Request   Memory Request (Gi)
M-CNCC IAM             7.5         7.5                 3.8           3.8
M-CNCC Core            7           7                   3.5           3.5
A-CNCC Core            0           0                   0             0
CNCC Common Resource   3           4                   1.5           2
Total                  17.5        18.5                8.8           9.3

CNC Console Agent Only Deployment

The following table shows the resource requirement for an agent only deployment. In this case, the manager is deployed in a separate cluster.

Table 2-5 CNC Console Agent Only Deployment

Component              CPU Limit   Memory Limit (Gi)   CPU Request   Memory Request (Gi)
M-CNCC IAM             0           0                   0             0
M-CNCC Core            0           0                   0             0
A-CNCC Core            7           7                   3.5           3.5
CNCC Common Resource   3           4                   1.5           2
Total                  10          11                  5             5.5

CNC Console Manager with Agent Deployment

The following table shows the resource requirement for a manager with agent deployment. In this case, the agent is deployed along with the manager to manage the local NF.

This manager can manage agents deployed in other clusters.

Table 2-6 CNC Console Manager with Agent Deployment

Component              CPU Limit   Memory Limit (Gi)   CPU Request   Memory Request (Gi)
M-CNCC IAM             7.5         7.5                 3.8           3.8
M-CNCC Core            7           7                   3.5           3.5
A-CNCC Core            7           7                   3.5           3.5
CNCC Common Resource   3           4                   1.5           2
Total                  24.5        25.5                12.3          12.8

CNC Console Component-wise Resource Usage

Table 2-7 CNCC Common Resource Usage

Microservice Name   Containers   CPU Limit   Memory Limit   CPU Request   Memory Request   Comments
debug_tools         tools        1           2              0.5           1                Applicable when debug_tool is enabled
hookJobResources                 2           2              1             1                Common Hook Resource
helm test           cncc-test                                                              Uses hookJobResources
Total                            3           4              1.5           2

Table 2-8 M-CNCC IAM Resource Usage

Microservice Name          Containers              CPU Limit   Memory Limit   CPU Request   Memory Request   Comments
cncc-iam-ingress-gateway   ingress-gateway         2           2              1             1
                           init-service            1           1              0.5           0.5              Applicable when HTTPS is enabled
                           update-service          1           1              0.5           0.5              Applicable when HTTPS is enabled
                           common_config_hook                                                                common_config_hook not used in IAM
cncc-iam-kc-http           kc                      2           2              1             1
                           init-service            1           1              0.5           0.5              Optional, used for enabling LDAPS
                           healthcheck             0.5         0.5            0.3           0.3
                           cncc-iam-pre-install                                                              Uses hookJobResources
                           cncc-iam-pre-upgrade                                                              Uses hookJobResources
                           cncc-iam-post-install                                                             Uses hookJobResources
                           cncc-iam-post-upgrade                                                             Uses hookJobResources
Total                                              7.5         7.5            3.8           3.8

Table 2-9 M-CNCC Core Resource Usage

Microservice Name            Containers           CPU Limit   Memory Limit   CPU Request   Memory Request   Comments
cncc-mcore-ingress-gateway   ingress-gateway      2           2              1             1
                             init-service         1           1              0.5           0.5              Applicable when HTTPS is enabled
                             update-service       1           1              0.5           0.5              Applicable when HTTPS is enabled
                             common_config_hook   1           1              0.5           0.5              Common Configuration Hook container creates databases which are used by Common Configuration Client
cncc-mcore-cmservice         cmservice            2           2              1             1
                             validation-hook                                                                Uses common hookJobResources
Total                                             7           7              3.5           3.5

Table 2-10 A-CNCC Core Resource Usage

Microservice Name            Containers           CPU Limit   Memory Limit   CPU Request   Memory Request   Comments
cncc-acore-ingress-gateway   ingress-gateway      2           2              1             1
                             init-service         1           1              0.5           0.5              Applicable when HTTPS is enabled
                             update-service       1           1              0.5           0.5              Applicable when HTTPS is enabled
                             common_config_hook   1           1              0.5           0.5              Common Configuration Hook container creates databases which are used by Common Configuration Client
cncc-acore-cmservice         cmservice            2           2              1             1
                             validation-hook                                                                Uses common hookJobResources
Total                                             7           7              3.5           3.5

CNC Console Microservices Resource Requirement

Table 2-11 CNC Console Microservices Resource Requirement

Microservice Name            CPU Per Pod - Maximum   Memory Per Pod (GB) - Maximum   Pod Count - Maximum   CPU All Pods - Maximum   Memory All Pods - Maximum (GB)
#cncc-iam-kc-http            &$!2.5                  &$!2.5                          1                     2.5                      2.5
#cncc-iam-ingress-gateway    ^$2                     ^$2                             1                     2                        2
#cncc-core-cmservice         $2                      $2                              1                     2                        2
#cncc-core-ingress-gateway   ^$2                     ^$2                             1                     2                        2
Total                                                                                                      8.5                      8.5
  • #: <helm release name> will be prefixed to each microservice name. Example: if the helm release name is "cncc-iam", then the ingress-gateway microservice name will be "cncc-iam-ingress-gateway".
  • ^: CPU Limit/Request Per Pod and Memory Limit/Request Per Pod need to be added as additional resources for the init-service and update-service containers if TLS needs to be enabled.

    The init-service container's and the Common Configuration Client Hook's resources are not counted because these containers get terminated after initialization completes.

    Container Name       CPU Request and Limit Per Container   Memory Request and Limit Per Container   Kubernetes Init Container (Job)
    init-service         1 cpu                                 1 gb                                     Yes
    update-service       1 cpu                                 1 gb                                     No
    common_config_hook   1 cpu                                 1 gb                                     No
    • Update Container service
    Ingress Gateway: For periodically refreshing the CNCC Private Key/Certificate and CA Root Certificate for TLS
    • Init Container service

      Ingress Gateway: To get the CNCC Private Key or Certificate and CA Root Certificate for TLS during startup
    • Common Configuration Hook

      CNCC Core Common configuration hook container creates databases which are used by Common Configuration Client
  • &: Helm Hooks Jobs - These are pre and post jobs that are invoked during installation, upgrade, rollback, and uninstallation of the deployment. They are short-lived jobs that get terminated after the work is done, so they are not part of the active deployment resources and need to be considered only during installation, upgrade, rollback, and uninstallation procedures.

Container Type   CPU Request and Limit Per Container   Memory Request and Limit Per Container
Helm Hooks       Request - 1 cpu, Limit - 2 cpu        Request - 1 gb, Limit - 2 gb

  • !: Healthcheck Container Service - For monitoring the health of the database, an extra container is added to the kc pod. This is part of the active deployment, so an additional resource of 0.5 is considered as part of the kc pod.

Container Type   CPU Request and Limit Per Container   Memory Request and Limit Per Container
Health Check     Request - 0.3 cpu, Limit - 0.5 cpu    Request - 0.3 gb, Limit - 0.5 gb
  • Helm Test Job - This job is run on demand when the helm test command is executed. It executes the helm test and stops after completion. These are short-lived jobs that get terminated after the work is done, so they are not part of the active deployment resources and need to be considered only during helm test procedures.
Container Type   CPU Request and Limit Per Container   Memory Request and Limit Per Container
Helm Test        Request - 1 cpu, Limit - 2 cpu        Request - 1 gb, Limit - 2 gb

  • $: Troubleshooting Tool Container - If Troubleshooting Tool Container Injection is enabled during CNCC deployment or upgrade, this container is injected into each CNCC pod (or into selected pods, depending on the option chosen during deployment or upgrade). These containers remain as long as the pod or deployment exists.

Container Name   CPU Request and Limit Per Container   Memory Request and Limit Per Container   Ephemeral-storage Request and Limit Per Container
ocdebug-tools    Request - 0.5 cpu, Limit - 1 cpu      Request - 1 gb, Limit - 2 gb             Request - 2 gb, Limit - 4 gb

Downloading CNC Console Package

Perform the following procedure to download the CNC Console release package from My Oracle Support:

  1. Log in to MOS using the appropriate credentials.
  2. Click the Patches & Updates tab to locate the patch.
  3. In the Patch Search console, click the Product or Family (Advanced) option.
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field and select the product from the Product drop-down.
  5. Select Oracle Communications Cloud Native Core <release number> in the Release field.
  6. Click Search. The Patch Advanced Search Results list appears.
  7. Select the required patch from the list. The Patch Details window appears.
  8. Click Download. The File Download window appears.
  9. Click the <p******_<release number>_Tekelec>.zip file.
  10. Extract the release package zip file.

    Package is named as follows:

    occncc_pkg_<marketing-release-number>.tgz

    Example:

    occncc_pkg_22_3_2_0_0.tgz

    Note:

    The user must have their own repository for storing the CNC Console images, and the repository must be accessible from the Kubernetes cluster.

  11. Untar the CNCC package file:

    Untar the CNCC package file to the specific repository:

    tar -xvf occncc_pkg_<marketing-release-number>.tgz

    The package file consists of the following:

    1. CNCC Docker Images File: occncc-images-<tag>.tar
    2. Helm Chart of CNCC: occncc-<tag>.tgz (the tarball contains the Helm chart and templates)
    3. Readme text file: Readme.txt

    Example:

    List of contents in occncc_pkg_22_3_2_0_0.tgz:

    occncc_pkg_22_3_2_0_0.tgz

    |_ _ _ _ _ _ occncc-22.3.2.tgz

    |_ _ _ _ _ _ occncc-images-22.3.2.tar

    |_ _ _ _ _ _ Readme.txt

  12. Verify the checksums of the tarballs mentioned in Readme.txt.
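    For example, assuming Readme.txt lists SHA-256 checksums, the values can be generated for comparison as follows (illustrative):
     sha256sum occncc-images-22.3.2.tar occncc-22.3.2.tgz
    Compare the output against the checksums listed in Readme.txt.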
  13. Run the following command to load the Docker images from the package file:
    docker load --input <image_file_name.tar> 
    Example
     docker load --input occncc-images-22.3.2.tar
    
    Sample Output:
    Loaded image: occncc/apigw-configurationinit:22.3.2
    Loaded image: occncc/apigw-configurationupdate:22.3.2
    Loaded image: occncc/apigw-common-config-hook:22.3.2
    Loaded image: occncc/cncc-apigateway:22.3.2
    Loaded image: occncc/cncc-cmservice:22.3.2
    Loaded image: occncc/cncc-iam:22.3.2
    Loaded image: occncc/debug_tools:22.3.2
    Loaded image: occncc/nf-test:22.3.2
    Loaded image: occncc/cncc-iam/hook:22.3.2
    Loaded image: occncc/cncc-iam/healthcheck:22.3.2
    Loaded image: occncc/cncc-core/validationhook:22.3.2
    
    				
  14. Run the following commands to tag and push the Docker images to the Docker registry:
    docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    docker push <docker-repo>/<image-name>:<image-tag>
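    Example (illustrative; the image name is taken from the loaded images, and <docker-repo> is a placeholder for your registry):
     docker tag occncc/cncc-apigateway:22.3.2 <docker-repo>/occncc/cncc-apigateway:22.3.2
     docker push <docker-repo>/occncc/cncc-apigateway:22.3.2
    Repeat the tag and push commands for each loaded CNC Console image.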
  15. Run the following command to check whether all the images are loaded:
    docker images
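    For example, to narrow the listing to the CNC Console images (illustrative; assumes the images use the occncc prefix shown in the sample output above):
     docker images | grep occncc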
  16. Run the following command to push the Helm charts to the Helm repository:
    
    helm cm-push --force <chart name>.tgz <helm repo>
    Example:
    helm cm-push --force occncc-22.3.2.tgz ocspf-helm-repo
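    Note: The helm cm-push command is provided by the Helm push plugin. If the plugin is not already available on the client, it can typically be installed as follows (assumption; verify against your environment):
     helm plugin install https://github.com/chartmuseum/helm-push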
  17. Download the CNCC custom templates package file from MOS:
    occncc_custom_configtemplates_<marketing-release-number>.zip
    Example:
    
    occncc_custom_configtemplates_22_3_2_0_0.zip
  18. Unzip the CNCC custom templates package as follows:

    unzip occncc_custom_configtemplates_<marketing-release-number>.zip

    The package file consists of the following:

    
    • M-CNCC IAM custom values file: occncc_custom_values_<version>.yaml
    • CNCC IAM Schema file for rollback to previous version: occncc_rollback_iam_schema_<version>.sql
    • CNCC Metric Dashboard file: occncc_metric_dashboard_<version>.json
    • CNCC Metric Dashboard file for CNE supporting Prometheus HA (CNE 1.9.x onwards): occncc_metric_dashboard_promha_<version>.json
    • CNCC Alert Rules file: occncc_alertrules_<version>.yaml
    • CNCC Alert Rules file for CNE supporting Prometheus HA (CNE 1.9.x onwards): occncc_alerting_rules_promha_<version>.yaml
    • CNCC MIB files: occncc_mib_<version>.mib, occncc_mib_tc_<version>.mib
    Example:
    unzip occncc_custom_configtemplates_<marketing-release-number>.zip
    
      Archive:  occncc_custom_configtemplates_22_3_2_0_0.zip
       creating: occncc_custom_configtemplates_22_3_2_0_0/
      inflating: occncc_custom_configtemplates_22_3_2_0_0/occncc_mib_tc_22.3.2.mib
      inflating: occncc_custom_configtemplates_22_3_2_0_0/occncc_custom_values_22.3.2.yaml
      inflating: occncc_custom_configtemplates_22_3_2_0_0/occncc_mib_22.3.2.mib
      inflating: occncc_custom_configtemplates_22_3_2_0_0/occncc_alerting_rules_promha_22.3.2.yaml
      inflating: occncc_custom_configtemplates_22_3_2_0_0/occncc_metric_dashboard_promha_22.3.2.json
      inflating: occncc_custom_configtemplates_22_3_2_0_0/occncc_alertrules_22.3.2.yaml
      inflating: occncc_custom_configtemplates_22_3_2_0_0/occncc_metric_dashboard_22.3.2.json
      inflating: occncc_custom_configtemplates_22_3_2_0_0/occncc_rollback_iam_schema_22.3.2.sql