2 Installing OCCM

This chapter provides information about installing OCCM in a cloud native environment.

2.1 Prerequisites

Before installing and configuring OCCM, ensure that the following prerequisites are met.

2.1.1 Software Requirements

This section lists the added or updated software required to install OCCM release 25.1.2xx.

Note:

Table 2-1 and Table 2-2 in this section offer a comprehensive list of software necessary for the proper functioning of an NF during deployment. However, these tables are indicative, and the software used can vary based on the customer's specific requirements and solution.

The Software Requirement column in Table 2-1 and Table 2-2 indicates one of the following:

  • Mandatory: Absolutely essential; the software cannot function without it.
  • Recommended: Suggested for optimal performance or best practices but not strictly necessary.
  • Conditional: Required only under specific conditions or configurations.
  • Optional: Not essential; can be included based on specific use cases or preferences.

This section lists the software that must be installed before installing OCCM:

Table 2-1 Preinstalled Software Versions

Software | 25.1.2xx | 25.1.1xx | 24.3.x | Software Requirement

The usage description and impact for each entry follow its row.

Kubernetes | 1.32.0 | 1.31.0 | 1.30.0 | Mandatory

Kubernetes orchestrates scalable, automated network function (NF) deployments for high availability and efficient resource utilization.

Impact:

Without orchestration capabilities, deploying and managing network functions (NFs) can become complex, leading to inefficient resource utilization and potential downtime.

Helm | 3.17.1 | 3.16.2 | 3.15.2 | Mandatory

Helm, a package manager, simplifies deploying and managing network functions (NFs) on Kubernetes with reusable, versioned charts for easy automation and scaling.

Impact:

Pre-installation is required. Not using this capability may result in error-prone and time-consuming management of NF versions and configurations, impacting deployment consistency.

Podman | 4.9.4 | 4.9.4 | 4.6.1 | Recommended

Podman manages and runs containerized NFs without requiring a daemon, offering flexibility and compatibility with Kubernetes.

Impact:

Podman is part of Oracle Linux. Without efficient container management, the development and deployment of NFs could become cumbersome, impacting agility.

To check the current Helm and Kubernetes version installed in the CNE, run the following commands:
 kubectl version
 helm version 
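A successful check produces output similar to the following (the exact version strings depend on your CNE release; the values here are illustrative):

 $ kubectl version
 Client Version: v1.32.0
 Server Version: v1.32.0
 $ helm version
 version.BuildInfo{Version:"v3.17.1", ...}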

Note:

This guide covers the installation instructions for OCCM when Podman is the container platform and Helm is the package manager. For non-CNE deployments, the operator can use commands based on their deployed container runtime environment. For more information, see the Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

Note:

Run podman version or docker version depending on the container engine installed.
The following software is available if OCCM is deployed in OCCNE. If you are deploying OCCM in any other cloud native environment, this additional software must be installed before installing OCCM. To check the installed software, run the following command:
helm ls -A

The following table lists the versions of additional software along with the usage:

Table 2-2 Additional Software Versions

Software | 25.1.2xx | 25.1.1xx | 24.3.x | Software Requirement

The usage description and impact for each entry follow its row.

AlertManager | 0.28.0 | 0.27.0 | 0.27.0 | Recommended

Alertmanager is a component that works in conjunction with Prometheus to manage and dispatch alerts. It handles the routing and notification of alerts to various receivers.

Impact:

Not implementing alerting mechanisms can lead to delayed responses to critical issues, potentially resulting in service outages or degraded performance.

Calico | 3.29.1 | 3.28.1 | 3.27.3 | Recommended

Calico provides networking and security for NFs in Kubernetes with scalable, policy-driven connectivity.

Impact:

CNI is mandatory for the functioning of 5G NFs. Without CNI and a proper plugin, the network could face security vulnerabilities and inadequate traffic management, impacting the reliability of NF communications.

cinder-csi-plugin | 1.32.0 | 1.31.1 | 1.30.0 | Mandatory

The Cinder CSI (Container Storage Interface) plugin is used for provisioning and managing block storage in Kubernetes. It is often used in OpenStack environments to provide persistent storage for containerized applications.

Impact:

This is used in the OpenStack vCNE solution. Without this integration, provisioning block storage for NFs could be manual and inefficient, complicating storage management.

containerd | 1.7.24 | 1.7.22 | 1.7.16 | Mandatory

Containerd manages container lifecycles for running NFs efficiently in Kubernetes.

Impact:

A lack of a reliable container runtime could lead to performance issues and instability in NF operations.

CoreDNS | 1.11.13 | 1.11.1 | 1.11.1 | Recommended

CoreDNS is the DNS server in Kubernetes, which provides DNS resolution services within the cluster.

Impact:

DNS is an essential part of deployment. Without proper service discovery, NFs would struggle to communicate with each other, leading to connectivity issues and operational failures.

Fluentd | 1.17.1 | 1.17.1 | 1.16.2 | Recommended

Fluentd is an open-source data collector that streamlines data collection and consumption, allowing for improved data utilization and comprehension.

Impact:

Not utilizing centralized logging can hinder the ability to track NF activity and troubleshoot issues effectively, complicating maintenance and support.

Grafana | 9.5.3 | 9.5.3 | 9.5.3 | Recommended

Grafana is a popular open-source platform for monitoring and observability. It provides a user-friendly interface for creating and viewing dashboards based on various data sources.

Impact:

Without visualization tools, interpreting complex metrics and gaining insights into NF performance would be cumbersome, hindering effective management.

Jaeger | 1.65.0 | 1.60.0 | 1.60.0 | Recommended

Jaeger provides distributed tracing for 5G NFs, enabling performance monitoring and troubleshooting across microservices.

Impact:

Not utilizing distributed tracing may hinder the ability to diagnose performance bottlenecks, making it challenging to optimize NF interactions and user experience.

Kyverno | 1.13.4 | 1.12.5 | 1.12.0 | Recommended

Kyverno is a Kubernetes policy engine that helps manage and enforce policies for resource configurations within a Kubernetes cluster.

Impact:

Failing to implement policy enforcement could lead to misconfigurations, resulting in security risks and instability in NF operations, affecting reliability.

MetalLB | 0.14.4 | 0.14.4 | 0.14.4 | Recommended

MetalLB provides load balancing and external IP management for 5G NFs in Kubernetes environments.

Impact:

Used as the load balancer (LB) solution in CNE. LB is mandatory for the solution to work. Without load balancing, traffic distribution among NFs may be inefficient, leading to potential bottlenecks and service degradation.

metrics-server | 0.7.2 | 0.7.2 | 0.7.1 | Recommended

Metrics Server is used in Kubernetes for collecting resource usage data from pods and nodes.

Impact:

Without resource metrics, auto-scaling and resource optimization would be limited, potentially leading to resource contention or underutilization.

Multus | 4.1.3 | 3.8.0 | 3.8.0 | Recommended

Multus enables multiple network interfaces in Kubernetes pods, allowing custom configurations and isolated paths for advanced use cases like NF deployments, ultimately supporting traffic segregation.

Impact:

Without this capability, connecting NFs to multiple networks could be limited, impacting network performance and isolation.

OpenSearch | 2.15.0 | 2.11.0 | 2.11.0 | Recommended

OpenSearch provides scalable search and analytics for 5G NFs, enabling efficient data exploration and visualization.

Impact:

Without a robust analytics solution, there would be difficulties in identifying performance issues and optimizing NF operations, affecting overall service quality.

Oracle OpenSearch Dashboard | 2.15.0 | 2.11.0 | 2.11.0 | Recommended

OpenSearch Dashboard visualizes and analyzes data for 5G NFs, offering interactive insights and custom reporting.

Impact:

Without visualization capabilities, understanding NF performance metrics and trends would be difficult, limiting informed decision-making.

Prometheus | 3.2.0 | 2.52.0 | 2.52.0 | Mandatory

Prometheus is a popular open-source monitoring and alerting toolkit. It collects and stores metrics from various sources and allows for alerting and querying.

Impact:

Not employing this monitoring solution could result in a lack of visibility into NF performance, making it difficult to troubleshoot issues and optimize resource usage.

prometheus-kube-state-metric | 2.15.0 | 2.13.0 | 2.13.0 | Recommended

Kube-state-metrics is a service that generates metrics about the state of various resources in a Kubernetes cluster. It's commonly used for monitoring and alerting purposes.

Impact:

Without these metrics, monitoring the health and performance of NFs could be challenging, making it harder to proactively address issues.

prometheus-node-exporter | 1.8.2 | 1.8.2 | 1.8.2 | Recommended

Node Exporter is a Prometheus exporter for collecting hardware and OS-level metrics from Linux hosts.

Impact:

Without node-level metrics, visibility into infrastructure performance would be limited, complicating the identification of resource bottlenecks.

Prometheus Operator | 0.80.1 | 0.76.0 | 0.76.0 | Recommended

The Prometheus Operator is used for managing Prometheus monitoring systems in Kubernetes. It simplifies the configuration and management of Prometheus instances.

Impact:

Not using this operator could complicate the setup and management of monitoring solutions, increasing the risk of missed performance insights.

rook | 1.16.6 | 1.15.2 | 1.33.3 | Mandatory

Rook is the Ceph orchestrator for Kubernetes that provides storage solutions. It is used in the Bare Metal (BM) CNE solution.

Impact:

CSI is mandatory for the solution to work. Not utilizing Rook could increase the complexity of deploying and managing Ceph, making it difficult to scale storage solutions in a Kubernetes environment.

snmp-notifier | 1.6.1 | 1.5.0 | 1.4.0 | Recommended

snmp-notifier sends SNMP alerts for 5G NFs, providing real-time notifications for network events.

Impact:

Without SNMP notifications, proactive monitoring of NF health and performance could be compromised, delaying response to critical issues.

Velero | 1.13.2 | 1.13.2 | 1.12.0 | Recommended

Velero backs up and restores Kubernetes clusters for 5G NFs, ensuring data protection and disaster recovery.

Impact:

Without backup and recovery capabilities, customers would risk data loss and extended downtime, requiring a full cluster reinstall in case of failure or upgrade.

2.1.2 Environment Setup Requirements

This section provides information on environment setup requirements for installing OCCM.

Client Machine Requirement

This section describes the requirements for the client machine, that is, the machine used to run deployment commands.

The client machine must have:
  • the Helm repository configured.
  • network access to the Helm repository and the Docker image repository.
  • network access to the Kubernetes cluster.
  • the required environment settings to run the kubectl, docker, and podman commands. The environment must have privileges to create a namespace in the Kubernetes cluster.
  • the Helm client installed with the push plugin. Configure the environment so that the helm install command deploys the software in the Kubernetes cluster.
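The following commands are one way to sanity-check this setup before proceeding; they assume kubectl and Helm are already on the PATH:

 # Verify the Helm repository configuration and connectivity
 helm repo list
 helm repo update

 # Verify the Helm push plugin is installed
 helm plugin list

 # Verify access to the Kubernetes cluster and namespace-creation privileges
 kubectl cluster-info
 kubectl auth can-i create namespaces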

Network Access Requirement

The Kubernetes cluster hosts must have network access to the following repositories:

  • Local Helm repository, where the OCCM Helm charts are available.
    To check if the Kubernetes cluster hosts have network access to the local Helm repository, run the following command:
    helm repo update
  • Local Docker image repository: It contains the OCCM Docker images.
    To check if the Kubernetes cluster hosts can access the local Docker image repository, try to retrieve any image with a tag name by running the following command:
    podman pull <Podman-repo>/<image-name>:<image-tag>
    where:
    • <Podman-repo> is the IP address or host name of the Podman repository.
    • <image-name> is the Docker image name.
    • <image-tag> is the tag of the image used for the OCCM pod.
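For example, with a hypothetical repository host occne-repo-host:5000 and the OCCM image from this release:

 podman pull occne-repo-host:5000/occm:25.1.201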

Note:

Run the kubectl and helm commands on a system based on the deployment infrastructure. For instance, they can be run on a client machine such as a VM, server, or local desktop.

Server or Space Requirement

For information about the server or space requirements, see the Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

CNE Requirement

This section is applicable only if you are installing OCCM on Cloud Native Environment (CNE).

To check the CNE version, run the following command:
echo $OCCNE_VERSION
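The command prints the installed CNE release string, for example (illustrative value):

 $ echo $OCCNE_VERSION
 25.1.200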

For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

CNC Console Requirements

OCCM supports CNC Console.

For more information about CNC Console, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide and Oracle Communications Cloud Native Configuration Console User Guide.

2.1.3 Resource Requirements

This section lists the resource requirements to install and run OCCM.

Resource Profile

Table 2-3 Resource Profile

Microservice Name | Pod Replica | Limits: CPU | Limits: Memory (Gi) | Limits: Ephemeral Storage (Mi) | Requests: CPU | Requests: Memory (Gi) | Requests: Ephemeral Storage (Mi)
OCCM | 1 | 2.5 | 2.5 | 1102 | 2.5 | 2.5 | 57
Helm Test | 1 | 0.5 | 0.5 | 1102 | 0.5 | 0.5 | 57
Total | - | 3.0 | 3.0 | 2204 | 3.0 | 3.0 | 114

Note:

  • Helm Test Job: This job runs on demand when the helm test command is run. It runs the Helm test and stops after completion. These are short-lived jobs that terminate after the work is done, so they are not part of the active deployment resources and need to be considered only during Helm test procedures.
  • Troubleshooting Tool (Debug Tool) Container: If Troubleshooting Tool Container Injection is enabled during OCCM deployment or upgrade, this container is injected into the OCCM pod. These containers remain as long as the pod or deployment exists. Debug tool resources are not considered in the calculation above and are consumed per pod: if the debug tool is enabled, a maximum of 0.5 vCPU and 0.5 Gi memory per OCCM pod is needed.

OCCM Microservice to Port Mapping

Table 2-4 OCCM Microservice to Port Mapping

Service | Nature of Port | Nature of IP | Network Type | Port | Traffic Type | IPs Required | External IP
oc-certificate-manager | Internal | Cluster IP | Internal/Kubernetes | 8989/TCP | Configuration | No | No

2.2 Installation Sequence

This section describes preinstallation, installation, and postinstallation tasks for OCCM.

Note:

In case of a fresh installation of OCCM and NFs on a new cluster, you must follow this sequence of installation:
  1. OCCM
  2. cnDBTier
  3. CNC Console
  4. NF

2.2.1 Preinstallation Tasks

Before installing OCCM, perform the tasks described in this section.

2.2.1.1 Downloading the OCCM Package

To download the OCCM package from My Oracle Support (MOS), perform the following steps:

  1. Log in to My Oracle Support using your login credentials.
  2. Click the Patches & Updates tab to locate the patch.
  3. In the Patch Search window, click the Product or Family (Advanced) option.
  4. Enter Oracle Communications Cloud Native Core - 5G in Product field and select the product from the Product drop-down list.
  5. From the Release drop-down list, select "Oracle Communications Cloud Native Core Certificate Management <release_number>".

    Where, <release_number> indicates the required release number of OCCM.

  6. Click Search.

    The Patch Advanced Search Results list appears.

  7. Select the required patch from the list.

    The Patch Details window appears.

  8. Click Download.

    The File Download window appears.

  9. Click the <p********_<release_number>_Tekelec>.zip file to download it.
  10. Transfer the downloaded release package zip file to the system where the network function must be installed.
  11. Extract the release package zip file.

    Package is named as follows:

    occm_csar_<marketing-release-number>.zip

    Example:

    occm_csar_25_1_201_0_0.zip
2.2.1.2 Pushing the Images to Customer Docker Registry
To push the images to the registry:
  1. Untar the OCCM package in a specific directory: tar -xvf occm_csar_<marketing-release-number>.zip

    For example: tar -xvf occm_csar_25_1_201_0_0.zip

    The package consists of the following files and folders:
    1. Files: OCCM Docker Images file and OCCM Helm Chart
      • OCCM Helm chart: occm-25.1.201.tgz
      • OCCM Network Policy Helm Chart: occm-network-policy-25.1.201.tgz
      • Images: occm-25.1.201.tar, nf_test-25.1.201.tar, ocdebug-tools-25.1.201.tar
    2. Scripts: Custom values and Alert files:
      • OCCM Custom Values File: occm_custom_values_25.1.201.yaml
      • Sample Grafana Dashboard file: occm_metric_dashboard_promha_25.1.201.json
      • Sample Alert File: occm_alerting_rules_promha_25.1.201.yaml
      • MIB files: occm_mib_25.1.201.mib, occm_mib_tc_25.1.201.mib, toplevel.mib
      • Configuration Open API specification: occm_configuration_openapi_25.1.201.json
      • Network Policy Custom values file: occm_network_policy_custom_values_25.1.201.yaml
    3. Definitions: Definitions folder contains the CNE Compatibility and definition files:
      • occm_cne_compatibility.yaml
      • occm.yaml
    4. TOSCA-Metadata: TOSCA.meta
  2. Verify the checksum of the tarballs mentioned in the Readme.txt.
  3. Run the following command to load the Docker images:
    podman load --input <image_file_name.tar>
  4. Run the following commands to tag and push the Docker images to the Docker registry:
    podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    podman push <docker_repo>/<image_name>:<image-tag>
  5. Run the following command to check if all the images are loaded:
    podman images
  6. Run the following command to push the Helm charts to Helm repository:
    helm cm-push --force <chart name>.tgz <helm repo>
    For example:
    helm cm-push --force occm-25.1.201.tgz occm-helm-repo
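Putting steps 3 to 6 together for a single image, a complete sequence looks like the following sketch; the registry name registry.example.com is a placeholder for your Docker registry, and the image name shown by podman images may carry a source registry prefix that the tag command must match:

 # Load the image from the package tarball into local storage
 podman load --input occm-25.1.201.tar

 # Tag the loaded image for the target registry and push it
 podman tag occm:25.1.201 registry.example.com/occm:25.1.201
 podman push registry.example.com/occm:25.1.201

 # Confirm the image is available
 podman images | grep occm

 # Push the Helm chart to the local Helm repository
 helm cm-push --force occm-25.1.201.tgz occm-helm-repo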
2.2.1.3 Verifying and Creating Namespace

This section explains how to verify or create a namespace in the system. If the namespace does not exist, the user must create it.

To verify and create OCCM namespace, perform the following steps:

  1. Run the following command to verify if the required namespace already exists in the system:
    $ kubectl get namespace
  2. In the output of the above command, check if the required namespace is available. If not available, create the namespace using the following command:
    $ kubectl create namespace <required namespace>

    For example, the following kubectl command creates the namespace occm-ns:

    $ kubectl create namespace occm-ns

Naming Convention for Namespaces

The namespace must:

  • start and end with an alphanumeric character.
  • contain 63 characters or less.
  • contain only alphanumeric characters or '-'.

Note:

It is recommended to avoid using the prefix kube- when creating a namespace. The prefix is reserved for Kubernetes system namespaces.
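As an alternative to checking the output manually, the following one-liner creates the namespace only if it does not already exist (using the example namespace occm-ns):

 kubectl get namespace occm-ns >/dev/null 2>&1 || kubectl create namespace occm-ns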
2.2.1.4 Creating Service Account, Role, and Rolebinding
2.2.1.4.1 Global Service Account Configuration

This section is optional and describes how to manually create a service account, role, and rolebinding.

A custom service account can be provided for the OCCM deployment in global.serviceAccountName of occm_custom_values_<version>.yaml:
global:
  dockerRegistry: cgbu-occncc-dev-docker.dockerhub-phx.oci.oraclecorp.com
  serviceAccountName: ""

Configuring Global Service Account to Manage NF Certificates with OCCM and NF in the Same Namespace

A sample OCCM service account yaml file to create custom service account is as follows:
## Service account yaml file for occm-sa
apiVersion: v1
kind: ServiceAccount
metadata:
  name: occm-sa
  namespace: occm
  annotations: {}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: occm-role
  namespace: occm
rules:
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  - namespaces
  verbs:
  - get
  - watch
  - list
  - create
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: occm-rolebinding
  namespace: occm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: occm-role
subjects:
- kind: ServiceAccount
  name: occm-sa
  namespace: occm
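Assuming the manifest above is saved as occm-sa.yaml (an illustrative file name), it can be applied and the granted privileges spot-checked as follows:

 kubectl apply -f occm-sa.yaml

 # Both checks should print "yes" for the occm namespace
 kubectl auth can-i create secrets -n occm --as=system:serviceaccount:occm:occm-sa
 kubectl auth can-i delete secrets -n occm --as=system:serviceaccount:occm:occm-sa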

Configuring Global Service Account to Manage NF Certificates with OCCM and NF in Separate Namespaces

OCCM provides support for key and certificate management in multiple namespaces.

In this deployment model, OCCM is deployed in a namespace different from the namespaces of the components it manages. It needs privileges to read, write, and delete Kubernetes secrets in the managed namespaces.

This is achieved by creating multiple namespace-specific roles and binding them to the OCCM service account.
  • AUTOMATIC Service Account Configuration: Roles and role bindings are created for each namespace specified in the occmAccessedNamespaces field in occm_custom_values.yaml. A service account for OCCM is created automatically, and the roles created are assigned using the corresponding role bindings.
    Namespaces managed by OCCM service account:
    occmAccessedNamespaces:
      - ns1
      - ns2

    Note:

    • occmAccessedNamespaces must include all hierarchical and any other namespaces where access is required.
    • Automatic Service Account Configuration is applicable for Single Namespace Management as well.
  • Custom Service Account Configuration: A custom service account can also be configured in the serviceAccountName field in occm_custom_values.yaml. If this is provided, automatic service account creation is not triggered. The occmManagedNamespaces field does not need to be configured.

    Note:

    When custom service account is used, all the namespaces listed in the YAML file must be added to occmAccessedNamespaces, including hierarchical namespaces. This is needed for namespace validation.
    A sample OCCM service account yaml file for creating a custom service account is as follows:
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: occm-sa
      namespace: occm
      annotations: {}
    ---
       
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: ns1
      name: occm-secret-writer-role
    rules:
    - apiGroups:
      - "" # "" indicates the core API group
      resources:
      - secrets
      - namespaces
      verbs:
      - get
      - watch
      - list
      - create
      - update
      - delete
    ---
       
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: ns2
      name: occm-secret-writer-role
    rules:
    - apiGroups:
      - "" # "" indicates the core API group
      resources:
      - secrets
      - namespaces
      verbs:
      - get
      - watch
      - list
      - create
      - update
      - delete
       
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: occm-secret-writer-rolebinding
      namespace: ns1
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: occm-secret-writer-role
    subjects:
    - kind: ServiceAccount
      name: occm-sa
      namespace: occm
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: occm-secret-writer-rolebinding
      namespace: ns2
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: occm-secret-writer-role
    subjects:
    - kind: ServiceAccount
      name: occm-sa
      namespace: occm
      
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: occm-role
      namespace: occm
    rules:
    - apiGroups:
      - "" # "" indicates the core API group
      resources:
      - services
      - configmaps
      - pods
      - secrets
      - endpoints
      - namespaces
      verbs:
      - get
      - watch
      - list
      - create
      - delete
      - update
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: occm-rolebinding
      namespace: occm
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: occm-role
    subjects:
    - kind: ServiceAccount
      name: occm-sa
      namespace: occm
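After applying the manifests above, the per-namespace privileges can be spot-checked with kubectl auth can-i, using the example managed namespaces ns1 and ns2:

 # Expected output: yes (secrets access in each managed namespace)
 kubectl auth can-i update secrets -n ns1 --as=system:serviceaccount:occm:occm-sa
 kubectl auth can-i create secrets -n ns2 --as=system:serviceaccount:occm:occm-sa

 # Expected output: no (no access outside the managed namespaces)
 kubectl auth can-i create secrets -n default --as=system:serviceaccount:occm:occm-sa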
2.2.1.4.2 Helm Test Service Account Configuration
helmTestServiceAccountName is an optional field in the occm_custom_values_<version>.yaml file. A custom service account can be provided for the Helm test in global.helmTestServiceAccountName:
global:
  helmTestServiceAccountName: occm-helmtest-serviceaccount
A sample Helm test service account yaml file is as follows:
## Service account yaml file for occm-helmtest-serviceaccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: occm-helmtest-serviceaccount
  namespace: occm
  annotations: {}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: occm-helmtest-role
  namespace: occm
rules:
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  - serviceaccounts
  - namespaces
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  - rolebindings
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - monitoring.coreos.com
  resources:
  - prometheusrules
  verbs:
  - get
  - watch
  - list
  - update
   
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: occm-helmtest-rolebinding
  namespace: occm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: occm-helmtest-role
subjects:
- kind: ServiceAccount
  name: occm-helmtest-serviceaccount
  namespace: occm
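Assuming the manifest above has been applied, a quick check that the Helm test service account has the access it needs:

 # Expected output: yes
 kubectl auth can-i list pods -n occm --as=system:serviceaccount:occm:occm-helmtest-serviceaccount
 kubectl auth can-i update deployments.apps -n occm --as=system:serviceaccount:occm:occm-helmtest-serviceaccount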
2.2.1.5 Configuring Network Policies

Kubernetes network policies allow you to define ingress or egress rules based on Kubernetes resources such as Pod, Namespace, IP, and Port. These rules are selected based on Kubernetes labels in the application. The network policies enforce access restrictions for all applicable data flows except communication from the Kubernetes node to a pod for invoking a container probe.

Note: Configuring network policies is an optional step. Based on the security requirements, network policies may or may not be configured.

For more information on network policies, see https://kubernetes.io/docs/concepts/services-networking/network-policies/.

Configuring Network Policies

The following operations can be performed for network policies:

2.2.1.5.1 Installing Network Policies

Prerequisite

Network policies are implemented by the network plug-in. To use network policies, you must use a networking solution that supports NetworkPolicy.

Note:

For a fresh installation, it is recommended to install Network Policies before installing OCCM. However, if OCCM is already installed, you can still install the Network Policies.

To install network policy:

  1. Open the occm_network_policy_custom_values_25.1.201.yaml file provided in the release package zip file. For downloading the file, see Downloading the OCCM Package and Pushing the Images to Customer Docker Registry.
  2. The file is provided with the default network policies. If required, update the occm_network_policy_custom_values_25.1.201.yaml file. For more information on the parameters, see Table 2-6.
    • To connect with CNC Console, update the following parameter in the allow-ingress-from-console network policy in the occm_network_policy_custom_values_25.1.201.yaml file:
      • kubernetes.io/metadata.name: <namespace in which CNC Console is deployed>
    • In the allow-ingress-prometheus policy, the kubernetes.io/metadata.name parameter must contain the value of the namespace where Prometheus is deployed, and the app.kubernetes.io/name parameter value must match the label from the Prometheus pod.
  3. Run the following command to install the network policies:
    helm install <release_name> -f occm_network_policy_custom_values_<version>.yaml --namespace <namespace> <chartpath>/<chart>.tgz

    For example:

    helm install occm-network-policy -f occm_network_policy_custom_values_25.1.201.yaml --namespace occm occm-network-policy-25.1.201.tgz
    Where,
    • <release_name> is the occm-network-policy Helm release name.
    • occm_network_policy_custom_values_<version>.yaml is the network policy custom values file.
    • <namespace> is the OCCM namespace.
    • <chartpath> is the location where the network policy package is stored.

Note:

Connections that were created before installing the network policy and that still persist are not impacted by the new network policy. Only new connections are impacted.
2.2.1.5.2 Upgrading Network Policies

To add, delete, or update network policies:

  1. Modify the occm_network_policy_custom_values_25.1.201.yaml file to update, add, or delete the network policy.
  2. Run the following command to upgrade the network policies:
    helm upgrade <release_name> -f occm_network_policy_custom_values_<version>.yaml --namespace <namespace> <chartpath>/<chart>.tgz
    For example:
    helm upgrade occm-network-policy -f occm_network_policy_custom_values_25.1.201.yaml --namespace occm occm-network-policy-25.1.201.tgz
    Where,
    • <release_name> is the occm-network-policy Helm release name.
    • occm_network_policy_custom_values_<version>.yaml is the network policy custom values file.
    • <namespace> is the OCCM namespace.
    • <chartpath> is the location where the network policy package is stored.
2.2.1.5.3 Verifying Network Policies

Run the following command to verify if the network policies are deployed successfully:

kubectl get networkpolicy -n <namespace>

For Example:

kubectl get networkpolicy -n occm

Where,

  • <namespace> is the OCCM namespace.
2.2.1.5.4 Uninstalling Network Policies

Run the following command to uninstall all the network policies:

helm uninstall <release_name> --namespace <namespace>

For Example:

helm uninstall occm-network-policy --namespace occm
2.2.1.5.5 Configuration Parameters for Network Policies

Table 2-5 Supported Kubernetes Resource for Configuring Network Policies

Parameter Description Details
apiVersion

This is a mandatory parameter.

Specifies the Kubernetes version for access control.

Note: This is the supported API version for network policy. This is a read-only parameter.

Data Type: string

Default Value: networking.k8s.io/v1

kind

This is a mandatory parameter.

Represents the REST resource this object represents.

Note: This is a read-only parameter.

Data Type: string

Default Value: NetworkPolicy

Table 2-6 Supported Parameters for Configuring Network Policies

Parameter Description Details
metadata.name

This is a mandatory parameter.

Specifies a unique name for the network policy.

{{ .metadata.name}}
spec.{}

This is a mandatory parameter.

This consists of all the information needed to define a particular network policy in the given namespace.

Note: OCCM supports the specification parameters defined in the Kubernetes Resource Category.

Default Value: NA

For more information about this functionality, see the Network Policies section in Oracle Communications Cloud Native Core, Certificate Management User Guide.
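As a minimal illustration of how these parameters fit together, the following sketch applies a policy that admits ingress to OCCM's configuration port (8989/TCP, see Table 2-4) from the CNC Console namespace. The OCCM pod label and the console namespace name cncc are assumptions; align them with the default policies shipped in the network policy custom values file:

kubectl apply -n occm -f - <<'EOF'
apiVersion: networking.k8s.io/v1        # read-only value, see Table 2-5
kind: NetworkPolicy                     # read-only value, see Table 2-5
metadata:
  name: allow-ingress-from-console      # unique policy name (metadata.name)
spec:                                   # policy definition (spec.{})
  podSelector:
    matchLabels:
      app.kubernetes.io/name: occm      # assumed OCCM pod label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: cncc   # assumed CNC Console namespace
    ports:
    - protocol: TCP
      port: 8989
EOF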

2.2.2 Installation Tasks

This section explains how to install Oracle Communications Cloud Native Core, Certificate Management.

Note:

  • Before installing OCCM, you must complete the Prerequisites and Preinstallation Tasks.
  • In a georedundant deployment, perform the steps explained in this section on all the georedundant sites.
2.2.2.1 Installing OCCM Package

This section includes information about OCCM deployment.

OCCM Deployment on Kubernetes

To install OCCM using the Command Line Interface (CLI):
  1. Run the following command to check the version of the Helm chart installation:
    helm search repo <release_name> -l
    For example:
    helm search repo occm -l
    
    NAME                            CHART VERSION   APP VERSION     DESCRIPTION
    cgbu-occm-dev-helm/occm            25.1.201         25.1.201           A helm chart for OCCM deployment
  2. Prepare occm_custom_values_25.1.201.yaml file with the required parameter information. For more information on these parameters, see Configuration Options.

    Note:

    Metrics Scraping on CNE
    To enable metrics scraping at CNE, add the following annotation under the global.nonlbDeployments section:
    nonlbDeployments:
          labels: {}
          annotations:
            oracle.com/cnc: "true"

    Note:

    CNE LB VM Support
    To support CNE LB VM selection based on the destination IP address with OCCM, add the following annotation under the global.nonlbDeployments section:
    nonlbDeployments:
          labels: {}
          annotations:
            oracle.com.cnc/egress-network: oam

    Note:

    Traffic Segregation
    To enable egress network attachment for OCCM to CA communication, add the following annotation with the appropriate value:
    annotations:
      k8s.v1.cni.cncf.io/networks: ""

    Where k8s.v1.cni.cncf.io/networks: contains the network attachment information that the pod uses for network segregation. For more information about Traffic Segregation, see Oracle Communications Cloud Native Core, Certificate Management User Guide.

    Example:
      nonlbDeployments:
          labels: {}
          annotations:
            k8s.v1.cni.cncf.io/networks: default/nf-oam-egr1@nf-oam-egr1
  3. Deploy OCCM:
    1. Verify Deployment:
      To verify the deployment status, open a new terminal and run the following command:
      kubectl get pods -n <namespace_name> -w
      For example:
      kubectl get pods -n occm-ns -w

      The pod status is updated at regular intervals. Once the helm install command exits with a status, you can stop watching the status of the Kubernetes pods.

    2. Installation using Helm tar:
      To install using Helm tar, run the following command:
      helm install <release_name> -f occm_custom_values_<version>.yaml -n <namespace_name> occm-<version>.tgz
      For example:
      helm install occm -f occm_custom_values_25.1.201.yaml --namespace occm-ns occm-25.1.201.tgz
       
      Sample output
      
      # Example:
      NAME: occm
      LAST DEPLOYED: Wed Nov 15 10:11:17 2023
      NAMESPACE: occm-ns
      STATUS: deployed
      REVISION: 1
      TEST SUITE: None
    3. Installation using Helm repository:
      To install using Helm repository, run the following command:
      helm install <release_name> <helm-repo> -f occm_custom_values_<version>.yaml --namespace <namespace_name> --version <helm_version>
      For example:
      
      helm install occm cgbu-occm-dev-helm/occm -f occm_custom_values_25.1.201.yaml --namespace occm-ns --version 25.1.201

    where,

    <release_name> and <namespace_name> depend on the customer configuration.

    <helm-repo> is the repository name where the Helm charts are stored.

    occm_custom_values_<version>.yaml is the Helm configuration file, which needs to be updated based on the Docker registry.

  4. Run the following command to check the deployment status:
    helm status <release_name>
  5. Run the following command to check if all the services are deployed and running:
    kubectl get svc -n occm-ns

    For example:

    
    NAME           TYPE         CLUSTER-IP      EXTERNAL-IP    PORT(S)       AGE
    service/occm   ClusterIP    10.233.26.68    <none>         8989/TCP      21m
  6. Run the following command to check if the pods are up and running:
    kubectl get po -n occm-ns

    For example:

    
    NAME                                READY   STATUS    RESTARTS      AGE
    pod/occm-occm-5fb5557f75-mjhcbh     1/1     Running   0             21m

2.2.3 Postinstallation Tasks

This section explains the postinstallation tasks for OCCM.

2.2.3.1 Verifying Installation

To verify if OCCM is installed:

  1. Run the following command to verify the installation status:
    helm status <helm-release> -n <namespace>
    For example:
    helm status occm -n occm

    In the output, if STATUS is showing as deployed, then the installation is successful.

  2. Run the following command to verify if the pods are up and active:
    kubectl get jobs,pods -n <release_namespace>
    For example:
    kubectl get pods -n occm
  3. Run the following command to verify if the services are deployed and active:
    kubectl get services -n <release_namespace>
    For example:
    kubectl get services -n occm

Note:

Take a backup of the following files that are required during fault recovery:

  • Updated occm_custom_values_25.1.201.yaml file.

If the installation is not successful or you do not see the status as Running for all the pods, perform the troubleshooting steps provided in the Oracle Communications Cloud Native Core, Certificate Management Troubleshooting Guide.

2.2.3.2 Performing Helm Test

The Helm Test feature validates the successful installation and readiness of the OCCM pod (the configured readiness probe URL is checked for success). The pod to be checked is based on the namespace and label selector configured for the Helm test.

Note:

Helm3 is mandatory for the Helm Test feature to work.

Follow the instructions below to run the Helm test. The configurations mentioned in the following step must be completed before running the helm install command.

  1. Configure the Helm test settings under the global section in the values.yaml file.
    global:
      # Helm test related configurations
      test:
        nfName: occm
        image:
          name: occm/nf_test
          tag: <version>
          imagePullPolicy: IfNotPresent
        config:
          logLevel: WARN
          timeout: 240
  2. Run the following Helm test command:
    helm test <helm_release_name> -n <k8s namespace>
    For example:
    helm test occm -n occmdemo
    Pod occm-test pending
    Pod occm-test pending
    Pod occm-test pending
    Pod occm-test pending
    Pod occm-test running
    Pod occm-test succeeded
    NAME: occm
    LAST DEPLOYED: Wed Nov 08 02:38:25 2023
    NAMESPACE: occmdemo
    STATUS: deployed
    REVISION: 1
    TEST SUITE:     occm-test
    Last Started:   Wed Nov 08 02:38:25 2023
    Last Completed: Wed Nov 08 02:38:25 2023
    Phase:          Succeeded
    NOTES:
    # Copyright 2020 (C), Oracle and/or its affiliates. All rights reserved.
     
    Thank you for installing occm.
     
    Your release is named occm , Release Revision: 1.
    To learn more about the release, try:
     
      $ helm status occm
      $ helm get occm
  3. Wait for the Helm test job to complete. Check if the test job is successful in the output.
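If the test does not report Succeeded, the test pod can be inspected directly before rerunning the test (occmdemo is the example namespace used above):

 kubectl get pods -n occmdemo | grep occm-test
 kubectl logs occm-test -n occmdemo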
2.2.3.3 CNC Console Configuration

The CNC Console instance configuration must be updated to enable access to OCCM.

For information on how to enable OCCM, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.

2.2.3.4 OCCM Configuration

For information on the features supported by OCCM, issuer and certificate configurations, see OCCM Supported Features in the Oracle Communications Cloud Native Core, Certificate Management User Guide.

For OCCM REST API details, see Oracle Communications Cloud Native Core, Certificate Management REST Specification Guide.