2 Installing OCCM

This chapter provides information about installing OCCM in a cloud native environment.

2.1 Prerequisites

Before installing and configuring OCCM, ensure that the following prerequisites are met.

2.1.1 Software Requirements

This section lists the added or updated software required to install OCCM release 24.3.x. For more information about software requirements, see Oracle Communications Cloud Native Core Certificate Management Installation, Upgrade, and Fault Recovery Guide.

Install the following software before installing OCCM:

Table 2-1 Software Requirements

Software Version
Helm 3.13.2
Kubernetes 1.30.x, 1.29.x, 1.28.x
Podman 4.4.1
To check the current Helm and Kubernetes version installed in the CNE, run the following commands:
 kubectl version
 helm version 
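To check the installed Podman version (if Podman is used as the container tool on the client machine), run the following command:
 podman version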
The following software is available if OCCM is deployed in CNE. If you are deploying OCCM in any other cloud native environment, install this additional software before installing OCCM. To check the installed software, run the following command:
helm ls -A

The following table lists the versions of additional software along with the usage:

Table 2-2 Additional Software

Software Version
containerd 1.7.13
Calico 3.26.3
MetalLB 0.14.4
Prometheus 2.51.1
Grafana 9.5.3
Jaeger 1.52.0
Istio 1.18.2
Kyverno 1.9.0
cert-manager 1.12.4
Oracle OpenSearch 2.11.0
Oracle OpenSearch Dashboard 2.11.0
Fluentd OpenSearch 1.16.2
Velero 1.12.0

2.1.2 Environment Setup Requirements

This section provides information on environment setup requirements for installing OCCM.

Client Machine Requirement

This section describes the requirements for the client machine, that is, the machine used to run the deployment commands.

The client machine must have:
  • the Helm repository configured.
  • network access to the Helm repository and the Docker image repository.
  • network access to the Kubernetes cluster.
  • the environment settings required to run the kubectl, docker, and podman commands. The environment must have privileges to create a namespace in the Kubernetes cluster.
  • the Helm client installed with the push plugin. Configure the environment so that the helm install command deploys the software in the Kubernetes cluster. A sample verification of this setup is shown after this list.
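
The following commands are one way to verify this setup from the client machine; the repository name and URL are placeholders that depend on your environment:

 helm repo add <helm-repo-name> <helm-repo-url>
 helm repo update
 helm plugin list      # the push (cm-push) plugin must appear in this list
 kubectl cluster-info  # confirms network access to the Kubernetes cluster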

Network Access Requirement

The Kubernetes cluster hosts must have network access to the following repositories:

  • Local helm repository, where the OCCM helm charts are available.
    To check if the Kubernetes cluster hosts have network access to the local helm repository, run the following command:
    helm repo update
  • Local docker image repository: It contains the OCCM Docker images.
    To check if the Kubernetes cluster hosts can access the local Docker image repository, try to retrieve an image with its tag by running one of the following commands:
    docker pull <docker-repo>/<image-name>:<image-tag>
    podman pull <Podman-repo>/<image-name>:<image-tag>
    where:
    • <docker-repo> is the IP address or host name of the repository.
    • <Podman-repo> is the IP address or host name of the Podman repository.
    • <image-name> is the docker image name.
    • <image-tag> is the tag of the image used for the OCCM pod.
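
    For example, with a placeholder registry named occm-registry.example.com and an image named occm tagged 24.3.0:
    docker pull occm-registry.example.com/occm:24.3.0
    podman pull occm-registry.example.com/occm:24.3.0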

Note:

Run the kubectl and helm commands on a system based on the deployment infrastructure. For instance, they can be run on a client machine such as a VM, server, or local desktop.

Server or Space Requirement

For information about the server or space requirements, see the Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

CNE Requirement

This section is applicable only if you are installing OCCM on Cloud Native Environment (CNE).

To check the CNE version, run the following command:
echo $OCCNE_VERSION

For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

CNC Console Requirements

OCCM supports CNC Console.

For more information about CNC Console, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide and Oracle Communications Cloud Native Configuration Console User Guide.

2.1.3 Resource Requirements

This section lists the resource requirements to install and run OCCM.

Resource Profile

Table 2-3 Resource Profile

Microservice Name | Pod Replica | Limits: CPU | Limits: Memory (Gi) | Limits: Ephemeral Storage (Mi) | Requests: CPU | Requests: Memory (Gi) | Requests: Ephemeral Storage (Mi)
OCCM | 1 | 2 | 2 | 1102 | 2 | 2 | 57
Helm Test | 1 | 0.5 | 0.5 | 1102 | 0.5 | 0.5 | 57
Total | - | 2.5 | 2.5 | 2204 | 2.5 | 2.5 | 114

Note:

  • Helm Test Job: This job runs on demand when the helm test command is run. It performs the Helm test and stops after completion. These are short-lived jobs that are terminated once the work is done, so they are not part of the active deployment resources and need to be considered only during the Helm test procedure.
  • Troubleshooting Tool (Debug Tool) Container: If Troubleshooting Tool Container Injection is enabled during OCCM deployment or upgrade, this container is injected into the OCCM pod and remains for as long as the pod or deployment exists. Debug tool resources are not considered in the calculation above. Debug tool resource usage is per pod: if the debug tool is enabled, a maximum of 0.5 vCPU and 0.5 Gi memory is needed per OCCM pod.

2.2 Installation Sequence

This section describes preinstallation, installation, and postinstallation tasks for OCCM.

Note:

For a fresh installation of OCCM and an NF on a new cluster, install the components in the following sequence:
  1. OCCM
  2. cnDBTier
  3. CNC Console
  4. NF

2.2.1 Preinstallation Tasks

Before installing OCCM, perform the tasks described in this section.

2.2.1.1 Downloading the OCCM Package

To download the OCCM package from My Oracle Support (MOS), perform the following steps:

  1. Log in to My Oracle Support using your login credentials.
  2. Click the Patches & Updates tab to locate the patch.
  3. In the Patch Search window, click the Product or Family (Advanced) option.
  4. Enter Oracle Communications Cloud Native Core - 5G in Product field and select the product from the Product drop-down list.
  5. From the Release drop-down list, select "Oracle Communications Cloud Native Core Certificate Management <release_number>".

    Where, <release_number> indicates the required release number of OCCM.

  6. Click Search.

    The Patch Advanced Search Results list appears.

  7. Select the required patch from the list.

    The Patch Details window appears.

  8. Click Download.

    The File Download window appears.

  9. Click on the <p********_<release_number>_Tekelec>.zip file.
  10. Extract the release package zip file to download the network function patch to the system where the network function must be installed.
  11. Extract the release package zip file.

    Package is named as follows:

    occm_csar_<marketing-release-number>.zip

    Example:

    occm_csar_24_3_0_0_0.zip
2.2.1.2 Pushing the Images to Customer Docker Registry
To push the images to the registry:
  1. Untar the OCCM package to a specific directory: tar -xvf occm_csar_<marketing-release-number>.zip

    For example: tar -xvf occm_csar_24_3_0_0_0.zip

    The package consists of the following files and folders:
    1. Files: OCCM Docker Images file and OCCM Helm Chart
      • OCCM Helm chart: occm-24.3.0.tgz
      • OCCM Network Policy Helm Chart: occm-network-policy-24.3.0.tgz
      • Images: occm-24.3.0.tar, nf_test-24.3.0.tar, ocdebug-tools-24.3.0.tar
    2. Scripts: Custom values and Alert files:
      • OCCM Custom Values File: occm_custom_values_24.3.0.yaml
      • Sample Grafana Dashboard file: occm_metric_dashboard_promha_24.3.0.json
      • Sample Alert File: occm_alerting_rules_promha_24.3.0.yaml
      • MIB files: occm_mib_24.3.0.mib, occm_mib_tc_24.3.0.mib, toplevel.mib
      • Configuration Open API specification: occm_configuration_openapi_24.3.0.json
      • Network Policy Custom values file: occm_network_policy_custom_values_24.3.0.yaml
    3. Definitions: Definitions folder contains the CNE Compatibility and definition files:
      • occm_cne_compatibility.yaml
      • occm.yaml
    4. TOSCA-Metadata: TOSCA.meta
  2. Verify the checksums of the tarballs against the values mentioned in the Readme.txt.
  3. Run the following command to load the Docker images from the image tar files:
    docker load --input <image_file_name.tar>
  4. Run the following commands to tag the Docker images and push them to the Docker registry (see the consolidated example after this procedure):
    docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    docker push <docker-repo>/<image-name>:<image-tag>
  5. Run the following command to check if all the images are loaded:
    docker images
  6. Run the following command to push the Helm charts to Helm repository:
    helm cm-push --force <chart name>.tgz <helm repo>
    For example:
    helm cm-push --force occm-24.3.0.tgz occm-helm-repo
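
A consolidated example of this procedure is shown below. The registry name occm-registry.example.com and the loaded image names are placeholders; confirm the actual image names with docker images after loading:

    # Load the OCCM images from the extracted package
    docker load --input occm-24.3.0.tar
    docker load --input nf_test-24.3.0.tar
    docker load --input ocdebug-tools-24.3.0.tar

    # Tag and push each image to the customer registry (placeholder registry and image name)
    docker tag occm:24.3.0 occm-registry.example.com/occm:24.3.0
    docker push occm-registry.example.com/occm:24.3.0

    # Verify the loaded images and push the Helm chart to the Helm repository
    docker images
    helm cm-push --force occm-24.3.0.tgz occm-helm-repo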
2.2.1.3 Verifying and Creating Namespace

This section explains how to verify or create a namespace in the system. If the namespace does not exist, the user must create it.

To verify and create OCCM namespace, perform the following steps:

  1. Run the following command to verify if the required namespace already exists in the system:
    $ kubectl get namespace
  2. In the output of the above command, check if the required namespace is available. If not available, create the namespace using the following command:
    $ kubectl create namespace <required namespace>

    For example, the following kubectl command creates the namespace occm-ns:

    $ kubectl create namespace occm-ns

Naming Convention for Namespaces

The namespace must meet the following requirements:

  • start and end with an alphanumeric character
  • contain 63 characters or less
  • contain only alphanumeric characters or '-'
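
As a quick sanity check (a sketch; note that Kubernetes additionally requires namespace names to be lowercase), a candidate name can be validated against the DNS label pattern:

  echo "occm-ns" | grep -E '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$'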

Note:

It is recommended to avoid using the prefix kube- when creating a namespace. The prefix is reserved for Kubernetes system namespaces.
2.2.1.4 Creating Service Account, Role, and Rolebinding
2.2.1.4.1 Global Service Account Configuration

This section is optional and it describes how to manually create a service account, role, and rolebinding.

A custom service account for the OCCM deployment can be provided in the global.serviceAccountName parameter of the occm_custom_values_<version>.yaml file:
global:  
  dockerRegistry: cgbu-occncc-dev-docker.dockerhub-phx.oci.oraclecorp.com
  serviceAccountName: ""

Configuring Global Service Account to Manage NF Certificates with OCCM and NF in the Same Namespace

A sample OCCM service account yaml file to create custom service account is as follows:
## Service account yaml file for occm-sa
apiVersion: v1
kind: ServiceAccount
metadata:
  name: occm-sa
  namespace: occm
  annotations: {}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: occm-role
  namespace: occm
rules:
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  - namespaces
  verbs:
  - get
  - watch
  - list
  - create
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: occm-rolebinding
  namespace: occm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: occm-role
subjects:
- kind: ServiceAccount
  name: occm-sa
  namespace: occm
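
Assuming the sample above is saved to a file named occm-sa.yaml (a placeholder name), it can be applied with the following command; then set serviceAccountName under global in the occm_custom_values_<version>.yaml file to occm-sa:

kubectl apply -f occm-sa.yaml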

Configuring Global Service Account to Manage NF Certificates with OCCM and NF in Separate Namespaces

OCCM provides support for key and certificate management in multiple namespaces.

In this deployment model, OCCM is deployed in a namespace different from the namespaces of the components it manages. It needs privileges to read, write, and delete Kubernetes secrets in the managed namespaces.

This is achieved by creating multiple namespace-specific roles and binding them to the service account for OCCM.
  • AUTOMATIC Service Account Configuration: Roles and role bindings are created for each namespace specified using the occmAccessedNamespaces field in occm_custom_values.yaml. A service account for OCCM is created automatically and the roles created are assigned using the corresponding role binding.
    Namespaces managed by OCCM service account:
    occmAccessedNamespaces:
      - ns1
      - ns2

    Note:

    Automatic Service Account Configuration is applicable for Single Namespace Management as well.
  • Custom Service Account Configuration: A custom service account can also be configured using the serviceAccountName field in occm_custom_values.yaml. If this is provided, automatic service account creation is not triggered, and the occmAccessedNamespaces field does not need to be configured.
    A sample OCCM service account yaml file for creating a custom service account is as follows:
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: occm-sa
      namespace: occm
      annotations: {}
    ---
       
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: ns1
      name: occm-secret-writer-role
    rules:
    - apiGroups:
      - "" # "" indicates the core API group
      resources:
      - secrets
      - namespaces
      verbs:
      - get
      - watch
      - list
      - create
      - update
      - delete
    ---
       
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: ns2
      name: occm-secret-writer-role
    rules:
    - apiGroups:
      - "" # "" indicates the core API group
      resources:
      - secrets
      - namespaces
      verbs:
      - get
      - watch
      - list
      - create
      - update
      - delete
       
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: occm-secret-writer-rolebinding
      namespace: ns1
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: occm-secret-writer-role
    subjects:
    - kind: ServiceAccount
      name: occm-sa
      namespace: occm
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: occm-secret-writer-rolebinding
      namespace: ns2
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: occm-secret-writer-role
    subjects:
    - kind: ServiceAccount
      name: occm-sa
      namespace: occm
      
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: occm-role
      namespace: occm
    rules:
    - apiGroups:
      - "" # "" indicates the core API group
      resources:
      - services
      - configmaps
      - pods
      - secrets
      - endpoints
      - namespaces
      verbs:
      - get
      - watch
      - list
      - create
      - delete
      - update
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: occm-rolebinding
      namespace: occm
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: occm-role
    subjects:
    - kind: ServiceAccount
      name: occm-sa
      namespace: occm
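
    After applying the manifests above, one way to verify (a sketch using the sample namespaces ns1 and ns2) that the OCCM service account can manage secrets in the managed namespaces is:

    kubectl auth can-i create secrets -n ns1 --as=system:serviceaccount:occm:occm-sa
    kubectl auth can-i delete secrets -n ns2 --as=system:serviceaccount:occm:occm-sa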
2.2.1.4.2 Helm Test Service Account Configuration
helmTestServiceAccountName is an optional field in the occm_custom_values_<version>.yaml file. A custom service account for the Helm test can be provided in global.helmTestServiceAccountName:
global:
  helmTestServiceAccountName: occm-helmtest-serviceaccount
A sample helm test service account yaml file is as follows:
## Service account yaml file for helm test
apiVersion: v1
kind: ServiceAccount
metadata:
  name: occm-helmtest-serviceaccount
  namespace: occm
  annotations: {}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: occm-helmtest-role
  namespace: occm
rules:
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  - serviceaccounts
  - namespaces
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  - rolebindings
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - monitoring.coreos.com
  resources:
  - prometheusrules
  verbs:
  - get
  - watch
  - list
  - update
   
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: occm-helmtest-rolebinding
  namespace: occm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: occm-helmtest-role
subjects:
- kind: ServiceAccount
  name: occm-helmtest-serviceaccount
  namespace: occm
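
Assuming the sample above is saved to a file named occm-helmtest-sa.yaml (a placeholder name), apply it before running the Helm test:

kubectl apply -f occm-helmtest-sa.yaml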
2.2.1.5 Configuring Network Policies

Kubernetes network policies allow you to define ingress or egress rules based on Kubernetes resources such as Pod, Namespace, IP, and Port. These rules are selected based on Kubernetes labels in the application. These network policies enforce access restrictions for all the applicable data flows except communication from Kubernetes node to pod for invoking container probe.

Note: Configuring network policies is an optional step. Based on the security requirements, network policies may or may not be configured.

For more information on network policies, see https://kubernetes.io/docs/concepts/services-networking/network-policies/.

Configuring Network Policies

Following are the various operations that can be performed for network policies:

2.2.1.5.1 Installing Network Policies

Prerequisite

Network Policies are implemented by using the network plug-in. To use network policies, you must be using a networking solution that supports Network Policy.

Note:

For a fresh installation, it is recommended to install Network Policies before installing OCCM. However, if OCCM is already installed, you can still install the Network Policies.

To install the network policies:

  1. Open the occm_network_policy_custom_values_24.3.0.yaml file provided in the release package zip file. For downloading the file, see Downloading the OCCM Package and Pushing the Images to Customer Docker Registry.
  2. The file is provided with the default network policies. If required, update the occm_network_policy_custom_values_24.3.0.yaml file. For more information on the parameters, see Table 2-5.
    • To connect with CNC Console, update the relevant parameter in the allow-ingress-from-console network policy in the occm_network_policy_custom_values_24.3.0.yaml file.
    • In the allow-ingress-prometheus policy, the kubernetes.io/metadata.name parameter must contain the namespace where Prometheus is deployed, and the app.kubernetes.io/name parameter value must match the label from the Prometheus pod.
  3. Run the following command to install the network policies:
    helm install <release_name> -f <occm_network_policy_custom_values_<version>.yaml> --namespace <namespace> <chartpath>./<chart>.tgz

    For example:

    helm install occm-network-policy -f occm_network_policy_custom_values_24.3.0.yaml --namespace occm occm-network-policy-24.3.0.tgz
    Where,
    • <release_name> is the occm-network-policy Helm release name.
    • <occm_network_policy_custom_values_<version>.yaml> is the occm-network-policy custom values file.
    • <namespace> is the OCCM namespace.
    • <chartpath> is the location where the network policy package is stored.

Note:

  • Connections that were created before installing the network policies and that still persist are not impacted by the new network policies. Only new connections are impacted.
2.2.1.5.2 Upgrading Network Policies

To add, delete, or update network policies:

  1. Modify the occm_network_policy_custom_values_24.3.0.yaml file to update, add, or delete the network policy.
  2. Run the following command to upgrade the network policies:
    helm upgrade <release_name> -f occm_network_policy_custom_values_<version>.yaml --namespace <namespace> <chartpath>./<chart>.tgz
    For example:
    helm upgrade occm-network-policy -f occm_network_policy_custom_values_24.3.0.yaml --namespace occm occm-network-policy-24.3.0.tgz
    Where,
    • <release_name> is the occm-network-policy Helm release name.
    • occm_network_policy_custom_values_<version>.yaml is the occm-network-policy custom values file.
    • <namespace> is the OCCM namespace.
    • <chartpath> is the location where the network policy package is stored.
2.2.1.5.3 Verifying Network Policies

Run the following command to verify if the network policies are deployed successfully:

kubectl get networkpolicy -n <namespace>

For Example:

kubectl get networkpolicy -n occm

Where,

  • <namespace> is the OCCM namespace.
2.2.1.5.4 Uninstalling Network Policies

Run the following command to uninstall all the network policies:

helm uninstall <release_name> --namespace <namespace>

For Example:

helm uninstall occm-network-policy --namespace occm
2.2.1.5.5 Configuration Parameters for Network Policies

Table 2-4 Supported Kubernetes Resource for Configuring Network Policies

Parameter Description Details
apiVersion

This is a mandatory parameter.

Specifies the Kubernetes version for access control.

Note: This is the supported api version for network policy. This is a read-only parameter.

Data Type: string

Default Value: networking.k8s.io/v1

kind

This is a mandatory parameter.

Specifies the REST resource that this object represents.

Note: This is a read-only parameter.

Data Type: string

Default Value: NetworkPolicy

Table 2-5 Supported parameters for Configuring Network Policies

Parameter Description Details
metadata.name

This is a mandatory parameter.

Specifies a unique name for the network policy.

{{ .metadata.name }}
spec.{}

This is a mandatory parameter.

This consists of all the information needed to define a particular network policy in the given namespace.

Note: OCCM supports the spec parameters defined in the Kubernetes Resource Category.

Default Value: NA

For more information about this functionality, see the Network Policies section in Oracle Communications Cloud Native Core, Certificate Management User Guide.

2.2.2 Installation Tasks

This section explains how to install Oracle Communications Cloud Native Core, Certificate Management.

Note:

  • Before installing OCCM, you must complete the Prerequisites and Preinstallation Tasks.
  • In a georedundant deployment, perform the steps explained in this section on all the georedundant sites.
2.2.2.1 Installing OCCM Package

This section includes information about OCCM deployment.

OCCM Deployment on Kubernetes

To install OCCM using the Command Line Interface (CLI):
  1. Run the following command to check the version of the Helm chart to be installed:
    helm search repo <release_name> -l
    For example:
    helm search repo occm -l
    
    NAME                            CHART VERSION   APP VERSION     DESCRIPTION
    cgbu-occm-dev-helm/occm            24.3.0         24.3.0           A helm chart for OCCM deployment
  2. Prepare occm_custom_values_24.3.0.yaml file with the required parameter information. For more information on these parameters, see Configuration Options.

    Note:

    Metrics scraping on CNE
    To enable metrics scraping at CNE, add the following annotation under the global.nonlbDeployments section:
    nonlbDeployments:
          labels: {}
          annotations:
            oracle.com/cnc: "true"

    Note:

    CNE LB VM Support
    To support the CNE LB VM selection based on destination IP address with OCCM, add the following annotation under global.nonlbDeployments section:
    nonlbDeployments:
          labels: {}
          annotations:
            oracle.com.cnc/egress-network: oam

    Note:

    Traffic Segregation
    To enable egress network attachment for OCCM to CA communication, add the following annotation with the appropriate value:
    annotations:
      k8s.v1.cni.cncf.io/networks: ""

    Where k8s.v1.cni.cncf.io/networks: contains the network attachment information that the pod uses for network segregation. For more information about Traffic Segregation, see Oracle Communications Cloud Native Core, Certificate Management User Guide.

    Example:
      nonlbDeployments:
          labels: {}
          annotations:
            k8s.v1.cni.cncf.io/networks: default/nf-oam-egr1@nf-oam-egr1
  3. Deploy OCCM:
    1. Verify Deployment:
      To verify the deployment status, open a new terminal and run the following command:
      kubectl get pods -n <namespace_name> -w
      For example:
      kubectl get pods -n occm-ns -w

      The pod status is updated at regular intervals. After the helm install command exits with its status, you can stop watching the status of the Kubernetes pods.

    2. Installation using helm tar:
      To install using Helm tar, run the following command:
      helm install <release_name> -f occm_custom_values_<version>.yaml -n <namespace_name>
          occm-<version>.tgz 
      For example:
      helm install occm -f occm_custom_values_24.3.0.yaml --namespace occm-ns occm-24.3.0.tgz
       
      Sample Output
      
      # Example:
      NAME: occm
      LAST DEPLOYED: Wed Nov 15 10:11:17 2023
      NAMESPACE: occm-ns
      STATUS: deployed
      REVISION: 1
      TEST SUITE: None
    3. Installation using helm repository:
      To install using Helm repository, run the following command:
      helm install <release_name> <helm-repo> -f occm_custom_values_<version>.yaml --namespace <namespace_name> --version <helm_version>
      For example:
      
      helm install cgbu-occm-dev-helm/occm -f occm_custom_values_24.3.0.yaml --namespace occm-ns --version 24.3.0

    where,

    <release_name> and <namespace_name> depend on the customer configuration.

    <helm-repo> is the name of the repository where the Helm charts and images are stored.

    occm_custom_values_<version>.yaml is the Helm configuration file, which must be updated based on the Docker registry.

  4. Run the following command to check the deployment status:
    helm status <release_name>
  5. Run the following command to check if all the services are deployed and running:
    kubectl get svc -n occm-ns

    For example:

    
    NAME           TYPE         CLUSTER-IP      EXTERNAL-IP    PORT(S)       AGE
    service/occm   ClusterIP    10.233.26.68    <none>         8989/TCP      21m
  6. Run the following command to check if the pods are up and running:
    kubectl get po -n occm-ns

    For example:

    
    NAME                                READY   STATUS    RESTARTS      AGE
    pod/occm-occm-5fb5557f75-mjhcbh     1/1     Running   0             21m

2.2.3 Postinstallation Tasks

This section explains the postinstallation tasks for OCCM.

2.2.3.1 Verifying Installation

To verify if OCCM is installed:

  1. Run the following command to verify the installation status:
    helm status <helm-release> -n <namespace>
    For example:
    helm status occm -n occm

    In the output, if STATUS is showing as deployed, then the installation is successful.

  2. Run the following command to verify if the pods are up and active:
    kubectl get jobs,pods -n <release_namespace>
    For example:
    kubectl get pods -n occm
  3. Run the following command to verify if the services are deployed and active:
    kubectl get services -n <release_namespace>
    For example:
    kubectl get services -n occm

Note:

Take a backup of the following files that are required during fault recovery:

  • Updated occm_custom_values_24.3.0.yaml file.

If the installation is not successful or you do not see the status as Running for all the pods, perform the troubleshooting steps provided in the Oracle Communications Cloud Native Core, Certificate Management Troubleshooting Guide.

2.2.3.2 Performing Helm Test

Helm Test is a feature that validates successful installation of OCCM along with the readiness of the pod (the configured readiness probe URL is checked for success). The pod to be checked is selected based on the namespace and label selector configured for the Helm test.

Note:

Helm3 is mandatory for the Helm Test feature to work.

Follow these instructions to run the Helm test. The configuration described in the following step must be completed before running the helm install command.

  1. Configure the Helm test parameters under the global section of the values.yaml file.
    global:
      # Helm test related configurations
      test:
        nfName: occm
        image:
          name: occm/nf_test
          tag: <version>
          imagePullPolicy: IfNotPresent
        config:
          logLevel: WARN
          timeout: 240
  2. Run the following helm test command:
    helm test <helm_release_name> -n <k8s namespace>
    For example:
    helm test occm -n occmdemo
    Pod occm-test pending
    Pod occm-test pending
    Pod occm-test pending
    Pod occm-test pending
    Pod occm-test running
    Pod occm-test succeeded
    NAME: occm
    LAST DEPLOYED: Wed Nov 08 02:38:25 2023
    NAMESPACE: occmdemo
    STATUS: deployed
    REVISION: 1
    TEST SUITE:     occm-test
    Last Started:   Wed Nov 08 02:38:25 2023
    Last Completed: Wed Nov 08 02:38:25 2023
    Phase:          Succeeded
    NOTES:
    # Copyright 2020 (C), Oracle and/or its affiliates. All rights reserved.
     
    Thank you for installing occm.
     
    Your release is named occm , Release Revision: 1.
    To learn more about the release, try:
     
      $ helm status occm
      $ helm get occm
  3. Wait for the helm test job to complete. Check if the test job is successful in the output.
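
If the Helm test fails, the logs of the test pod (occm-test in the sample output above) can be checked for details, for example:

    kubectl logs occm-test -n occmdemo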
2.2.3.3 CNC Console Configuration

The CNC Console instance configuration must be updated to enable access to OCCM.

For information on how to enable OCCM, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.

2.2.3.4 OCCM Configuration

For information on the features supported by OCCM, issuer and certificate configurations, see OCCM Supported Features in the Oracle Communications Cloud Native Core, Certificate Management User Guide.

For OCCM REST API details, see Oracle Communications Cloud Native Core, Certificate Management REST Specifications Guide.