2 Installing OCCM
This chapter provides information about installing OCCM in a cloud native environment.
2.1 Prerequisites
Before installing and configuring OCCM, ensure that the following prerequisites are met.
2.1.1 Software Requirements
This section lists the added or updated software required to install OCCM release 25.2.1xx.
Note:
Table 2-1 and Table 2-2 in this section offer a comprehensive list of software necessary for the proper functioning of an NF during deployment. However, these tables are indicative, and the software used can vary based on the customer's specific requirements and solution.
The Software Requirement column in Table 2-1 and Table 2-2 indicates one of the following:
- Mandatory: Absolutely essential; the software cannot function without it.
- Recommended: Suggested for optimal performance or best practices but not strictly necessary.
- Conditional: Required only under specific conditions or configurations.
- Optional: Not essential; can be included based on specific use cases or preferences.
This section lists the software that must be installed before installing OCCM:
Table 2-1 Preinstalled Software Versions
| Software | 25.2.1xx | 25.1.2xx | 25.1.1xx | 24.3.x | Software Requirement | Usage Description |
|---|---|---|---|---|---|---|
| Kubernetes | 1.33.1 | 1.32.0 | 1.31.0 | 1.30.0 | Mandatory | Kubernetes orchestrates scalable, automated network function (NF) deployments for high availability and efficient resource utilization. Impact: Without orchestration capabilities, deploying and managing NFs can become complex, leading to inefficient resource utilization and potential downtime. |
| Helm | 3.18 | 3.17.1 | 3.16.2 | 3.15.2 | Mandatory | Helm, a package manager, simplifies deploying and managing NFs on Kubernetes with reusable, versioned charts for easy automation and scaling. Impact: Preinstallation is required. Not using this capability may result in error-prone and time-consuming management of NF versions and configurations, impacting deployment consistency. |
| Podman | 4.9.4 | 4.9.4 | 4.9.4 | 4.6.1 | Recommended | Podman manages and runs containerized NFs without requiring a daemon, offering flexibility and compatibility with Kubernetes. Impact: Podman is part of Oracle Linux. Without efficient container management, the development and deployment of NFs could become cumbersome, impacting agility. |
To check the installed Kubernetes and Helm versions, run the following commands:

kubectl version
helm version

Note:
This guide covers the installation instructions for OCCM when Podman is the container platform and Helm is the package manager. For non-CNE deployments, the operator can use commands based on their deployed Container Runtime Environment. For details, see the Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade and Fault Recovery Guide.

Note:
Run podman version or docker version based on the container engine installed. To list the installed Helm charts, run helm ls -A.

The following table lists the versions of additional software along with the usage:
Table 2-2 Additional Software Versions
| Software | 25.2.1xx | 25.1.2xx | 25.1.1xx | 24.3.x | Software Requirement | Usage Description |
|---|---|---|---|---|---|---|
| AlertManager | 0.28.0 | 0.28.0 | 0.27.0 | 0.27.0 | Recommended | Alertmanager is a component that works in conjunction with Prometheus to manage and dispatch alerts. It handles the routing and notification of alerts to various receivers. Impact: Not implementing alerting mechanisms can lead to delayed responses to critical issues, potentially resulting in service outages or degraded performance. |
| Calico | 3.29.3 | 3.29.1 | 3.28.1 | 3.27.3 | Recommended | Calico provides networking and security for NFs in Kubernetes with scalable, policy-driven connectivity. Impact: A CNI is mandatory for the functioning of 5G NFs. Without a CNI and the proper plugin, the network could face security vulnerabilities and inadequate traffic management, impacting the reliability of NF communications. |
| cinder-csi-plugin | 1.32.0 | 1.32.0 | 1.31.1 | 1.30.0 | Mandatory | The Cinder CSI (Container Storage Interface) plugin is used for provisioning and managing block storage in Kubernetes. It is often used in OpenStack environments to provide persistent storage for containerized applications. Impact: This is used in the OpenStack vCNE solution. Without this integration, provisioning block storage for NFs could be manual and inefficient, complicating storage management. |
| containerd | 2.0.5 | 1.7.24 | 1.7.22 | 1.7.16 | Mandatory | Containerd manages container lifecycles for running NFs efficiently in Kubernetes. Impact: A lack of a reliable container runtime could lead to performance issues and instability in NF operations. |
| CoreDNS | 1.12.0 | 1.11.13 | 1.11.1 | 1.11.1 | Recommended | CoreDNS is the DNS server in Kubernetes, which provides DNS resolution services within the cluster. Impact: DNS is an essential part of deployment. Without proper service discovery, NFs would struggle to communicate with each other, leading to connectivity issues and operational failures. |
| Fluentd | 1.17.1 | 1.17.1 | 1.17.1 | 1.16.2 | Recommended | Fluentd is an open-source data collector that streamlines data collection and consumption, allowing for improved data utilization and comprehension. Impact: Not utilizing centralized logging can hinder the ability to track NF activity and troubleshoot issues effectively, complicating maintenance and support. |
| Grafana | 7.5.14 | 9.5.3 | 9.5.3 | 9.5.3 | Recommended | Grafana is a popular open-source platform for monitoring and observability. It provides a user-friendly interface for creating and viewing dashboards based on various data sources. Impact: Without visualization tools, interpreting complex metrics and gaining insights into NF performance would be cumbersome, hindering effective management. |
| Jaeger | 1.69.0 | 1.65.0 | 1.60.0 | 1.60.0 | Recommended | Jaeger provides distributed tracing for 5G NFs, enabling performance monitoring and troubleshooting across microservices. Impact: Not utilizing distributed tracing may hinder the ability to diagnose performance bottlenecks, making it challenging to optimize NF interactions and user experience. |
| Kyverno | 1.13.4 | 1.13.4 | 1.12.5 | 1.12.0 | Recommended | Kyverno is a Kubernetes policy engine that helps manage and enforce policies for resource configurations within a Kubernetes cluster. Impact: Failing to implement policy enforcement could lead to misconfigurations, resulting in security risks and instability in NF operations, affecting reliability. |
| MetalLB | 0.14.4 | 0.14.4 | 0.14.4 | 0.14.4 | Recommended | MetalLB provides load balancing and external IP management for 5G NFs in Kubernetes environments. Impact: Used as the load balancing solution in CNE; load balancing is mandatory for the solution to work. Without load balancing, traffic distribution among NFs may be inefficient, leading to potential bottlenecks and service degradation. |
| metrics-server | 0.7.2 | 0.7.2 | 0.7.2 | 0.7.1 | Recommended | Metrics Server is used in Kubernetes for collecting resource usage data from pods and nodes. Impact: Without resource metrics, auto-scaling and resource optimization would be limited, potentially leading to resource contention or underutilization. |
| Multus | 4.1.3 | 4.1.3 | 3.8.0 | 3.8.0 | Recommended | Multus enables multiple network interfaces in Kubernetes pods, allowing custom configurations and isolated paths for advanced use cases like NF deployments, ultimately supporting traffic segregation. Impact: Without this capability, connecting NFs to multiple networks could be limited, impacting network performance and isolation. |
| Oracle OpenSearch | 2.18.0 | 2.15.0 | 2.11.0 | 2.11.0 | Recommended | OpenSearch provides scalable search and analytics for 5G NFs, enabling efficient data exploration and visualization. Impact: Without a robust analytics solution, there would be difficulties in identifying performance issues and optimizing NF operations, affecting overall service quality. |
| OpenSearch Dashboard | 2.27.0 | 2.15.0 | 2.11.0 | 2.11.0 | Recommended | OpenSearch Dashboard visualizes and analyzes data for 5G NFs, offering interactive insights and custom reporting. Impact: Without visualization capabilities, understanding NF performance metrics and trends would be difficult, limiting informed decision-making. |
| Prometheus | 3.4.1 | 3.2.0 | 2.52.0 | 2.52.0 | Mandatory | Prometheus is a popular open-source monitoring and alerting toolkit. It collects and stores metrics from various sources and allows for alerting and querying. Impact: Not employing this monitoring solution could result in a lack of visibility into NF performance, making it difficult to troubleshoot issues and optimize resource usage. |
| prometheus-kube-state-metric | 2.16.0 | 2.15.0 | 2.13.0 | 2.13.0 | Recommended | Kube-state-metrics is a service that generates metrics about the state of various resources in a Kubernetes cluster. It is commonly used for monitoring and alerting purposes. Impact: Without these metrics, monitoring the health and performance of NFs could be challenging, making it harder to proactively address issues. |
| prometheus-node-exporter | 1.9.1 | 1.8.2 | 1.8.2 | 1.8.2 | Recommended | Node Exporter is a Prometheus exporter for collecting hardware and OS-level metrics from Linux hosts. Impact: Without node-level metrics, visibility into infrastructure performance would be limited, complicating the identification of resource bottlenecks. |
| Prometheus Operator | 0.83.0 | 0.80.1 | 0.76.0 | 0.76.0 | Recommended | The Prometheus Operator is used for managing Prometheus monitoring systems in Kubernetes. It simplifies the configuration and management of Prometheus instances. Impact: Not using this operator could complicate the setup and management of monitoring solutions, increasing the risk of missed performance insights. |
| rook | 1.16.7 | 1.16.6 | 1.15.2 | 1.33.3 | Mandatory | Rook is the Ceph orchestrator for Kubernetes that provides storage solutions. It is used in the bare metal CNE solution. Impact: A CSI is mandatory for the solution to work. Not utilizing Rook could increase the complexity of deploying and managing Ceph, making it difficult to scale storage solutions in a Kubernetes environment. |
| snmp-notifier | 2.0.0 | 1.6.1 | 1.5.0 | 1.4.0 | Recommended | snmp-notifier sends SNMP alerts for 5G NFs, providing real-time notifications for network events. Impact: Without SNMP notifications, proactive monitoring of NF health and performance could be compromised, delaying response to critical issues. |
| Velero | 1.13.2 | 1.13.2 | 1.13.2 | 1.12.0 | Recommended | Velero backs up and restores Kubernetes clusters for 5G NFs, ensuring data protection and disaster recovery. Impact: Without backup and recovery capabilities, customers would risk data loss and extended downtime, requiring a full cluster reinstallation in case of failure or upgrade. |
2.1.2 Environment Setup Requirements
This section provides information on environment setup requirements for installing OCCM.
Client Machine Requirement
This section describes the requirements for the client machine, that is, the machine used to run deployment commands. The client machine must have:
- Helm repository configured.
- Network access to the Helm repository and Docker image repository.
- Network access to the Kubernetes cluster.
- Required environment settings to run the kubectl, docker, and podman commands. The environment must have privileges to create a namespace in the Kubernetes cluster.
- Helm client installed with the push plugin. Configure the environment in such a manner that the helm install command deploys the software in the Kubernetes cluster.
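The following is a minimal sketch for sanity-checking these prerequisites from the client machine. The repository name occm-helm-repo and the push-plugin name cm-push are assumptions for illustration; substitute the names used in your environment.

# Verify client tool versions
kubectl version --client
helm version
podman --version

# Verify the Helm repository and push plugin are configured (names are placeholders)
helm repo list | grep occm-helm-repo
helm plugin list | grep cm-push

# Verify cluster access and namespace-creation privileges
kubectl cluster-info
kubectl auth can-i create namespaces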
Network Access Requirement
The Kubernetes cluster hosts must have network access to the following repositories:
- Local Helm repository, where the OCCM Helm charts are available.
  To check if the Kubernetes cluster hosts have network access to the local Helm repository, run the following command:
  helm repo update
- Local Docker image repository, which contains the OCCM Docker images.
  To check if the Kubernetes cluster hosts can access the local Docker image repository, try to retrieve any image with a tag name by running the following command:
  podman pull <Podman-repo>/<image-name>:<image-tag>
  where:
  - <Podman-repo> is the IP address or host name of the Podman repository.
  - <image-name> is the Docker image name.
  - <image-tag> is the tag of the image used for the OCCM pod.
Note:
Run the kubectl and helm
commands on a system based on the deployment infrastructure.
For instance, they can be run on a client machine such as
VM, server, local desktop, and so on.
Server or Space Requirement
For information about the server or space requirements, see the Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
CNE Requirement
This section is applicable only if you are installing OCCM on Cloud Native Environment (CNE).
To check the CNE version, run the following command:
echo $OCCNE_VERSION
For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
CNC Console Requirements
OCCM supports CNC Console.
For more information about CNC Console, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide and Oracle Communications Cloud Native Configuration Console User Guide.
2.1.3 Resource Requirements
This section lists the resource requirements to install and run OCCM.
Resource Profile
Table 2-3 Resource Profile
| Microservice Name | Pod Replica | CPU Limit | Memory Limit (Gi) | Ephemeral Storage Limit (Mi) | CPU Request | Memory Request (Gi) | Ephemeral Storage Request (Mi) |
|---|---|---|---|---|---|---|---|
| OCCM | 1 | 2.5 | 2.5 | 1102 | 2.5 | 2.5 | 57 |
| Helm Test | 1 | 0.5 | 0.5 | 1102 | 0.5 | 0.5 | 57 |
| Total | | 3.0 | 3.0 | 2204 | 3.0 | 3.0 | 114 |
Note:
- Helm Test Job: This job runs on demand when the helm test command is run; it executes the Helm test and stops after completion. These are short-lived jobs that terminate after the work is done, so they are not part of the active deployment resources and need to be considered only during Helm test procedures.
- Troubleshooting Tool (Debug Tool) Container: If Troubleshooting Tool Container Injection is enabled during OCCM deployment or upgrade, this container is injected into the OCCM pod and remains as long as the pod or deployment exists. Debug tool resources are not considered in the calculation above; their usage is per pod. If the debug tool is enabled, a maximum of 0.5 vCPU and 0.5 Gi memory is needed per OCCM pod.
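As a quick sanity check after deployment, the configured requests and limits can be read back from the running pods. A sketch, assuming the occm-ns namespace; the label selector app=occm is illustrative and should be matched to the labels actually used by your deployment:

# Print the resources block of the first container of each matching pod
kubectl get pods -n occm-ns -l app=occm \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[0].resources}{"\n"}{end}'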
OCCM Microservice to Port Mapping
Table 2-4 OCCM Microservice to Port Mapping
| Service | Nature of Port | Nature of IP | Network Type | Port | Traffic Type | IPs Required | External IP |
|---|---|---|---|---|---|---|---|
| oc-certificate-manager | Internal | Cluster IP | Internal/Kubernetes | 8989/TCP | Configuration | No | No |
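After installation, this port mapping can be confirmed against the running service. A sketch, assuming the occm-ns namespace; the service name is taken from the table above and may differ based on the Helm release name (compare with kubectl get svc -n occm-ns):

kubectl get svc oc-certificate-manager -n occm-ns -o jsonpath='{.spec.ports[0].port}{"/"}{.spec.ports[0].protocol}{"\n"}'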
2.2 Installation Sequence
This section describes preinstallation, installation, and postinstallation tasks for OCCM.
Note:
In case of a fresh installation of OCCM and an NF on a new cluster, you must follow this sequence of installation:
1. OCCM
2. cnDBTier
3. CNC Console
4. NF
2.2.1 Preinstallation Tasks
Before installing OCCM, perform the tasks described in this section.
2.2.1.1 Downloading the OCCM Package
To download the OCCM package from My Oracle Support (MOS), perform the following steps:
- Log in to My Oracle Support using your login credentials.
- Click the Patches & Updates tab to locate the patch.
- In the Patch Search window, click the Product or Family (Advanced) option.
- Enter Oracle Communications Cloud Native Core - 5G in Product field and select the product from the Product drop-down list.
- From the Release drop-down list, select "Oracle Communications Cloud Native Core Certificate Management <release_number>", where <release_number> indicates the required release number of OCCM.
- Click Search.
The Patch Advanced Search Results list appears.
- Select the required patch from the list.
The Patch Details window appears.
- Click Download.
The File Download window appears.
- Click the <p********_<release_number>_Tekelec>.zip file to download the release package to the system where the network function must be installed.
- Extract the release package zip file. The package is named as follows:
  occm_csar_<marketing-release-number>.zip
  Example:
  occm_csar_25_2_100_0_0.zip
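Before extracting, it is good practice to confirm the integrity of the downloaded package. A minimal sketch, assuming the digest published for the patch is SHA-256; use whichever digest type MOS lists:

# Compare the printed digest with the value shown on the MOS download page
sha256sum occm_csar_25_2_100_0_0.zip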
2.2.1.2 Pushing the Images to Customer Docker Registry
- Untar the OCCM package to a specific directory:
  tar -xvf occm_csar_<marketing-release-number>.zip
  For example:
  tar -xvf occm_csar_25_2_100_0_0.zip
  The package consists of the following files and folders:
  - Files: OCCM Docker images and OCCM Helm charts:
    - OCCM Helm chart: occm-25.2.100.tgz
    - OCCM Network Policy Helm chart: occm-network-policy-25.2.100.tgz
    - Images: occm-25.2.100.tar, nf_test-25.2.100.tar, ocdebug-tools-25.2.100.tar
  - Scripts: Custom values and alert files:
    - OCCM custom values file: occm_custom_values_25.2.100.yaml
    - Sample Grafana dashboard file: occm_metric_dashboard_promha_25.2.100.json
    - Sample alert file: occm_alerting_rules_promha_25.2.100.yaml
    - MIB files: occm_mib_25.2.100.mib, occm_mib_tc_25.2.100.mib, toplevel.mib
    - Configuration Open API specification: occm_configuration_openapi_25.2.100.json
    - Network Policy custom values file: occm_network_policy_custom_values_25.2.100.yaml
  - Definitions: contains the CNE compatibility and definition files: occm_cne_compatibility.yaml, occm.yaml
  - TOSCA-Metadata: TOSCA.meta
- Verify the checksums of the tarballs against the values mentioned in Readme.txt.
- Run the following command to load the Docker images:
  podman load --input <image_file_name.tar>
- Run the following commands to tag and push the images to the Docker registry:
  podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
  podman push <docker-repo>/<image-name>:<image-tag>
- Run the following command to check if all the images are loaded:
  podman images
- Run the following command to push the Helm charts to the Helm repository:
  helm cm-push --force <chart name>.tgz <helm repo>
  For example:
  helm cm-push --force occm-25.2.100.tgz occm-helm-repo
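Putting the above together, a minimal sketch that loads, tags, and pushes all three OCCM images in one pass. The registry host registry.example.com:5000 is a placeholder, and the image name:tag pairs are assumed to match the tarball names:

# Placeholder registry; replace with your Docker registry host
REG=registry.example.com:5000
for f in occm-25.2.100.tar nf_test-25.2.100.tar ocdebug-tools-25.2.100.tar; do
  podman load --input "$f"
done
for img in occm:25.2.100 nf_test:25.2.100 ocdebug-tools:25.2.100; do
  podman tag "$img" "$REG/$img"
  podman push "$REG/$img"
done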
2.2.1.3 Verifying and Creating Namespace
This section explains how to verify or create a namespace in the system. If the namespace does not exist, the user must create it.
To verify and create OCCM namespace, perform the following steps:
- Run the following command to verify if the required namespace already exists in the system:
  $ kubectl get namespace
- In the output of the above command, check if the required namespace is available. If not, create the namespace using the following command:
  $ kubectl create namespace <required namespace>
  For example, the following command creates the namespace occm-ns:
  $ kubectl create namespace occm-ns
Naming Convention for Namespaces
The namespace must meet the following requirements:
- start and end with an alphanumeric character
- contain 63 characters or less
- contain only alphanumeric characters or '-'
Note:
It is recommended to avoid using the prefix kube- when creating a namespace. The prefix is reserved for Kubernetes system namespaces.
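A candidate name can be checked against these rules before creating it. A sketch using the RFC 1123 label pattern that Kubernetes applies (note that Kubernetes accepts only lowercase alphanumerics):

# Candidate namespace name (example)
NS=occm-ns
# 63 characters max, lowercase alphanumerics or '-', must start and end alphanumeric
if [[ ${#NS} -le 63 && $NS =~ ^[a-z0-9]([a-z0-9-]*[a-z0-9])?$ ]]; then
  echo "$NS is a valid namespace name"
else
  echo "$NS is not a valid namespace name"
fi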
2.2.1.4 Creating Service Account, Role, and Rolebinding
2.2.1.4.1 Global Service Account Configuration
This section is optional; it describes how to manually create a service account, role, and rolebinding.
A custom service account can be provided for OCCM deployment in global.serviceAccountName of occm_custom_values_<version>.yaml:

global:
  dockerRegistry: cgbu-occncc-dev-docker.dockerhub-phx.oci.oraclecorp.com
  serviceAccountName: ""

Configuring Global Service Account to Manage NF Certificates with OCCM and NF in the Same Namespace
A sample OCCM service account yaml file is as follows:
## Service account yaml file for occm-sa
apiVersion: v1
kind: ServiceAccount
metadata:
  name: occm-sa
  namespace: occm
  annotations: {}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: occm-role
  namespace: occm
rules:
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  - namespaces
  verbs:
  - get
  - watch
  - list
  - create
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: occm-rolebinding
  namespace: occm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: occm-role
subjects:
- kind: ServiceAccount
  name: occm-sa
  namespace: occm

Configuring Global Service Account to Manage NF Certificates with OCCM and NF in Separate Namespaces
OCCM provides support for key and certificate management in multiple namespaces.
In this deployment model, OCCM is deployed in a namespace different from the namespaces of the components it manages. It needs privileges to read, write, and delete Kubernetes secrets in the managed namespaces.
- AUTOMATIC Service Account Configuration: Roles and role bindings are created for each namespace specified using the occmAccessedNamespaces field in occm_custom_values.yaml. A service account for OCCM is created automatically, and the roles created are assigned using the corresponding role bindings.
  Namespaces managed by the OCCM service account:

  occmAccessedNamespaces:
    - ns1
    - ns2

  Note:
  - occmAccessedNamespaces must include all hierarchical and any other namespaces where access is required.
  - Automatic Service Account Configuration is applicable for Single Namespace Management as well.
- Custom Service Account Configuration: A custom service account can also be configured against the serviceAccountName field in occm_custom_values.yaml. If this is provided, automatic service account creation is not triggered. The occmManagedNamespaces field does not need to be configured.

  Note:
  When a custom service account is used, all the namespaces listed in the YAML file must be added to occmAccessedNamespaces, including hierarchical namespaces. This is needed for namespace validation.

  Note:
  When using HNC-managed namespaces with a custom ServiceAccount, prefix the namespace name to the metadata.name of the associated Role and RoleBinding, and to the roleRef.name in the RoleBinding. For example, change the names to <namespace>-occm-secret-writer-role and <namespace>-occm-secret-writer-rolebinding. This change is only needed for additional namespaces.

  A sample OCCM service account yaml file for creating a custom service account is as follows:

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: occm-sa
    namespace: occm
    annotations: {}
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    namespace: ns1
    name: occm-secret-writer-role
  rules:
  - apiGroups:
    - "" # "" indicates the core API group
    resources:
    - secrets
    - namespaces
    verbs:
    - get
    - watch
    - list
    - create
    - update
    - delete
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    namespace: ns2
    name: occm-secret-writer-role
  rules:
  - apiGroups:
    - "" # "" indicates the core API group
    resources:
    - secrets
    - namespaces
    verbs:
    - get
    - watch
    - list
    - create
    - update
    - delete
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: occm-secret-writer-rolebinding
    namespace: ns1
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: occm-secret-writer-role
  subjects:
  - kind: ServiceAccount
    name: occm-sa
    namespace: occm
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: occm-secret-writer-rolebinding
    namespace: ns2
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: occm-secret-writer-role
  subjects:
  - kind: ServiceAccount
    name: occm-sa
    namespace: occm
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: occm-role
    namespace: occm
  rules:
  - apiGroups:
    - "" # "" indicates the core API group
    resources:
    - services
    - configmaps
    - pods
    - secrets
    - endpoints
    - namespaces
    verbs:
    - get
    - watch
    - list
    - create
    - delete
    - update
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: occm-rolebinding
    namespace: occm
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: occm-role
  subjects:
  - kind: ServiceAccount
    name: occm-sa
    namespace: occm
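A minimal sketch to apply one of the sample manifests above and confirm the objects exist; the file name occm-sa.yaml is a placeholder:

kubectl apply -f occm-sa.yaml
kubectl get serviceaccount occm-sa -n occm
kubectl get role,rolebinding -n occm
kubectl get role,rolebinding -n ns1   # repeat for each managed namespace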
2.2.1.4.2 Helm Test Service Account Configuration
A custom service account can be provided for Helm test in global.helmTestServiceAccountName of the occm_custom_values_<version>.yaml file:

global:
  helmTestServiceAccountName: occm-helmtest-serviceaccount

A sample Helm test service account yaml file is as follows:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: occm-helmtest-serviceaccount
  namespace: occm
  annotations: {}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: occm-helmtest-role
  namespace: occm
rules:
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  - serviceaccounts
  - namespaces
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  - rolebindings
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - monitoring.coreos.com
  resources:
  - prometheusrules
  verbs:
  - get
  - watch
  - list
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: occm-helmtest-rolebinding
  namespace: occm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: occm-helmtest-role
subjects:
- kind: ServiceAccount
  name: occm-helmtest-serviceaccount
  namespace: occm

2.2.1.5 Configuring Network Policies
Kubernetes network policies allow you to define ingress or egress rules based on Kubernetes resources such as Pod, Namespace, IP, and Port. These rules are selected based on Kubernetes labels in the application. These network policies enforce access restrictions for all the applicable data flows except communication from Kubernetes node to pod for invoking container probe.
Note: Configuring network policies is an optional step. Based on the security requirements, network policies may or may not be configured.
For more information on network policies, see https://kubernetes.io/docs/concepts/services-networking/network-policies/.
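For orientation, the following is a sketch of what an ingress policy of the kind shipped with OCCM might look like; the names, labels, and namespaces are illustrative, not the exact contents of the occm_network_policy_custom_values_25.2.100.yaml file:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-console   # illustrative; mirrors the policy discussed below
  namespace: occm
spec:
  podSelector: {}                    # applies to all pods in the OCCM namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: cncc   # namespace where CNC Console is deployed (assumed)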
Configuring Network Policies
Following are the various operations that can be performed for network policies:
2.2.1.5.1 Installing Network Policies
Prerequisite
Network Policies are implemented by using the network plug-in. To use network policies, you must be using a networking solution that supports Network Policy.
Note:
For a fresh installation, it is recommended to install Network Policies before installing OCCM. However, if OCCM is already installed, you can still install the Network Policies.
To install network policies:
- Open the occm_network_policy_custom_values_25.2.100.yaml file provided in the release package zip file. For downloading the file, see Downloading the OCCM Package and Pushing the Images to Customer Docker Registry.
- The file is provided with the default network policies. If required, update the occm_network_policy_custom_values_25.2.100.yaml file. For more information on the parameters, see Table 2-6.
  - To connect with CNC Console, update the following parameter in the allow-ingress-from-console network policy in the occm_network_policy_custom_values_25.2.100.yaml file:
    kubernetes.io/metadata.name: <namespace in which CNC Console is deployed>
  - In the allow-ingress-prometheus policy, the kubernetes.io/metadata.name parameter must contain the value of the namespace where Prometheus is deployed, and the app.kubernetes.io/name parameter value should match the label from the Prometheus pod.
- Run the following command to install the network policies:
  helm install <release_name> -f occm_network_policy_custom_values_<version>.yaml --namespace <namespace> <chartpath>./<chart>.tgz
  For example:
  helm install occm-network-policy -f occm_network_policy_custom_values_25.2.100.yaml --namespace occm occm-network-policy-25.2.100.tgz
  Where,
  - release_name: occm-network-policy Helm release name.
  - custom-value-file: occm-network-policy custom values file.
  - namespace: OCCM namespace.
  - chartpath: location where the network policy package is stored.
Note:
Connections that were created before installing the network policies and still persist are not impacted by the new network policies. Only new connections are impacted.

2.2.1.5.2 Upgrading Network Policies
To add, delete, or update a network policy:
- Modify the occm_network_policy_custom_values_25.2.100.yaml file to update, add, or delete a network policy.
- Run the following command to upgrade the network policies:
  helm upgrade <release_name> -f occm_network_policy_custom_values_<version>.yaml --namespace <namespace> <chartpath>./<chart>.tgz
  For example:
  helm upgrade occm-network-policy -f occm_network_policy_custom_values_25.2.100.yaml --namespace occm occm-network-policy-25.2.100.tgz
  Where,
  - release_name: occm-network-policy Helm release name.
  - custom-value-file: occm-network-policy custom values file.
  - namespace: OCCM namespace.
  - chartpath: location where the network policy package is stored.
2.2.1.5.3 Verifying Network Policies
Run the following command to verify if the network policies are deployed successfully:
kubectl get networkpolicy -n <namespace>
For Example:
kubectl get networkpolicy -n occm
Where,
- namespace: OCCM namespace.
2.2.1.5.4 Uninstalling Network Policies
Run the following command to uninstall all the network policies:
helm uninstall <release_name> --namespace <namespace>
For example:
helm uninstall occm-network-policy --namespace occm
2.2.1.5.5 Configuration Parameters for Network Policies
Table 2-5 Supported Kubernetes Resource for Configuring Network Policies
| Parameter | Description | Details |
|---|---|---|
| apiVersion | This is a mandatory parameter. Specifies the Kubernetes API version for access control. Note: This is the supported API version for network policy. This is a read-only parameter. | Data Type: string Default Value: |
| kind | This is a mandatory parameter. Represents the REST resource this object represents. Note: This is a read-only parameter. | Data Type: string Default Value: NetworkPolicy |
Table 2-6 Supported Parameters for Configuring Network Policies
| Parameter | Description | Details |
|---|---|---|
| metadata.name | This is a mandatory parameter. Specifies a unique name for the network policy. | {{ .metadata.name }} |
| spec.{} | This is a mandatory parameter. This consists of all the information needed to define a particular network policy in the given namespace. Note: OCCM supports the specification parameters defined in the Kubernetes Resource Category. | Default Value: NA |
For more information about this functionality, see the Network Policies section in Oracle Communications Cloud Native Core, Certificate Management User Guide.
2.2.2 Installation Tasks
Note:
- Before installing OCCM, you must complete the Prerequisites and Preinstallation Tasks.
- In a georedundant deployment, perform the steps explained in this section on all the georedundant sites.
2.2.2.1 Installing OCCM Package
This section includes information about OCCM deployment.
OCCM Deployment on Kubernetes
- Run the following command to check the version of the Helm chart installation:
  helm search repo <release_name> -l
  For example:
  helm search repo occm -l
  NAME                      CHART VERSION   APP VERSION   DESCRIPTION
  cgbu-occm-dev-helm/occm   25.2.100        25.2.100      A helm chart for OCCM deployment
- Prepare the occm_custom_values_25.2.100.yaml file with the required parameter information. For more information on these parameters, see Configuration Options.
Note:
Metrics scraping on CNE: To enable metrics scraping on CNE, add the following annotation under the global.nonlbDeployments section:

nonlbDeployments:
  labels: {}
  annotations:
    oracle.com/cnc: "true"

Note:
CNE LB VM Support: To support CNE LB VM selection based on destination IP address with OCCM, add the following annotation under the global.nonlbDeployments section:

nonlbDeployments:
  labels: {}
  annotations:
    oracle.com.cnc/egress-network: oam

Note:
Traffic Segregation: To enable egress network attachment for OCCM to CA communication, add the following annotation with the appropriate value:

annotations:
  k8s.v1.cni.cncf.io/networks: ""

Where k8s.v1.cni.cncf.io/networks contains the network attachment information that the pod uses for network segregation. For more information about Traffic Segregation, see Oracle Communications Cloud Native Core, Certificate Management User Guide.
Example:

nonlbDeployments:
  labels: {}
  annotations:
    k8s.v1.cni.cncf.io/networks: default/nf-oam-egr1@nf-oam-egr1

- Deploy OCCM:
  - Verify Deployment: To verify the deployment status, open a new terminal and run the following command:
    kubectl get pods -n <namespace_name> -w
    For example:
    kubectl get pods -n occm-ns -w
    The pod status is updated at regular intervals. When the helm install command exits with its status, you may stop watching the status of the Kubernetes pods.
  - Installation using Helm tar: To install using the Helm tar, run the following command:
    helm install <release_name> -f occm_custom_values_<version>.yaml -n <namespace_name> occm-<version>.tgz
    For example:
    helm install occm -f occm_custom_values_25.2.100.yaml --namespace occm-ns occm-25.2.100.tgz
    Sample output:
    NAME: occm
    LAST DEPLOYED: Wed Nov 15 10:11:17 2023
    NAMESPACE: occm-ns
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
To install using Helm repository, run the following command:
helm install <release_name> <helm-repo> -f occm_custom_values_<version>.yaml --namespace <namespace_name> --version <helm_version>For example:helm install cgbu-occm-dev-helm/occm -f occm_custom_values_25.2.100.yaml --namespace occm-ns --version 25.2.100
where,
release_nameandnamespace_name: depends on customer configurationhelm-repo: is the repository name where the helm images, charts are storedvalues: is the helm configuration file which needs to be updated based on the docker registry - Verify Deployment:
  - Run the following command to check the deployment status:
    helm status <release_name>
  - Run the following command to check if all the services are deployed and running:
    kubectl get svc -n occm-ns
    For example:
    NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    service/occm   ClusterIP   10.233.26.68   <none>        8989/TCP   21m
  - Run the following command to check if the pods are up and running:
    kubectl get po -n occm-ns
    For example:
    NAME                              READY   STATUS    RESTARTS   AGE
    pod/occm-occm-5fb5557f75-mjhcbh   1/1     Running   0          21m
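As an optional convenience, the rollout can also be awaited non-interactively instead of watching the pod list. A sketch; confirm the deployment name first, as it depends on the Helm release name:

# List deployments to confirm the name
kubectl get deploy -n occm-ns
# Wait for the rollout to complete (deployment name assumed for illustration)
kubectl rollout status deployment/occm-occm -n occm-ns --timeout=300s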
2.2.3 Postinstallation Tasks
This section explains the postinstallation tasks for OCCM.
2.2.3.1 Verifying Installation
To verify if OCCM is installed:
- Run the following command to verify the installation status:
  helm status <helm-release> -n <namespace>
  For example:
  helm status occm -n occm
  In the output, if STATUS shows deployed, the installation is successful.
- Run the following command to verify if the pods are up and active:
  kubectl get jobs,pods -n <release_namespace>
  For example:
  kubectl get pods -n occm
- Run the following command to verify if the services are deployed and active:
  kubectl get services -n <release_namespace>
  For example:
  kubectl get services -n occm
Note:
Take a backup of the following files that are required during fault recovery:
- Updated
occm_custom_values_25.2.100.yamlfile.
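One way to capture the values actually deployed, rather than relying on a local copy, is to export them from the release. A sketch, assuming release occm in namespace occm:

helm get values occm -n occm > occm_custom_values_backup.yaml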
If the installation is not successful or the status of all the pods is not Running, perform the troubleshooting steps provided in the Oracle Communications Cloud Native Core, Certificate Management Troubleshooting Guide.
2.2.3.2 Performing Helm Test
Helm test is a feature that validates successful OCCM installation along with the readiness of the pod (the configured readiness probe URL is checked for success). The pods to be checked are selected based on the namespace and label selector configured for the Helm test.
Note:
Helm 3 is mandatory for the Helm test feature to work.
Follow these instructions to run the Helm test. The configurations mentioned in the following step must be done before running the helm install command.
- Configure the Helm test configurations under the global section in the values.yaml file:

  global:
    # Helm test related configurations
    test:
      nfName: occm
      image:
        name: occm/nf_test
        tag: <version>
        imagePullPolicy: IfNotPresent
      config:
        logLevel: WARN
        timeout: 240
global: # Helm test related configurations test: nfName: occm image: name: occm/nf_test tag: <version> imagePullPolicy: IfNotPresent config: logLevel: WARN timeout: 240 - Run the following helm test
command:
helm test <helm_release_name> -n <k8s namespace>For example:helm test occm -n occmdemo Pod occm-test pending Pod occm-test pending Pod occm-test pending Pod occm-test pending Pod occm-test running Pod occm-test succeeded NAME: occm LAST DEPLOYED: Wed Nov 08 02:38:25 2023 NAMESPACE: occmdemo STATUS: deployed REVISION: 1 TEST SUITE: occm-test Last Started: Wed Nov 08 02:38:25 2023 Last Completed: Wed Nov 08 02:38:25 2023 Phase: Succeeded NOTES: # Copyright 2020 (C), Oracle and/or its affiliates. All rights reserved. Thank you for installing occm. Your release is named occm , Release Revision: 1. To learn more about the release, try: $ helm status occm $ helm get occm - Wait for the helm test job to complete. Check if the test job is successful in the output.
2.2.3.3 CNC Console Configuration
CNC Console instance configuration must be updated to enable access to OCCM.
For information on how to enable OCCM, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.
2.2.3.4 OCCM Configuration
For information on the features supported by OCCM, issuer and certificate configurations, see OCCM Supported Features in the Oracle Communications Cloud Native Core, Certificate Management User Guide.
For OCCM REST API details, see Oracle Communications Cloud Native Core, Certificate Management REST Specifications Guide.
2.2.3.5 OCCM Alert and MIB Configuration in Prometheus
CNE supporting Prometheus HA
This section describes the measurement-based alert rules configuration for OCCM in Prometheus. You must use the updated occm_alerting_rules_promha_<version>.yaml file.
Run the following command to apply the alert rules:
$ kubectl apply -f occm_alerting_rules_promha_<version>.yaml

Disabling Alerts
- Edit the occm_alerting_rules_promha_<version>.yaml file to remove a specific alert.
- Remove the complete content of the specific alert from the occm_alerting_rules_promha_<version>.yaml file.
  For example, if you want to remove the OccmServiceDown alert, remove the complete content:

  ## ALERT SAMPLE START##
  - alert: OccmServiceDown
    annotations:
      description: 'New certificates will not be created, and existing ones can not be renewed until OCCM is back'
      summary: 'namespace: {{$labels.namespace}}, podname: {{$labels.pod}}, timestamp: {{ with query "time()" }}{{ . | first | value | humanizeTimestamp }}{{ end }}: OCCM service is down'
    expr: absent(up{pod=~".*occm.*", namespace="occm-ns"}) or (up{pod=~".*occm.*", namespace="occm-ns"}) == 0
    labels:
      severity: critical
      oid: "1.3.6.1.4.1.323.5.3.54.1.2.7004"
      namespace: ' {{ $labels.namespace }} '
      podname: ' {{$labels.pod}} '
  ## ALERT SAMPLE END##
- Perform the alert configuration.
Validating Alerts
Configure and validate the alerts in the Prometheus server. Refer to OCCM Alert Configuration for the procedure to configure the alerts.
After configuring the alerts in the Prometheus server, verify them as follows:
- Open the Prometheus server from your browser using <IP>:<Port>.
- Navigate to Status and then Rules.
- Search for OCCM. The list of OCCM alerts is displayed.

Note:
If you are unable to see the alerts, the alert file has not loaded in a format that the Prometheus server accepts. Modify the file and try again.

Configuring SNMP-Notifier
- Run the following command to edit the deployment:
  kubectl edit deploy <snmp_notifier_deployment_name> -n <namespace>
  Example:
  $ kubectl edit deploy occne-snmp-notifier -n occne-infra
- Edit the destination as follows:
  --snmp.destination=<destination_ip>:<destination_port>
  Example:
  --snmp.destination=10.75.203.94:162
MIB Files for OCCM
There are two MIB files that are used to generate the traps. Users need to update these files along with the alert file in order to fetch the traps in their environment.
- occm_mib_tc_<version>.mib: This is the OCCM top-level MIB file, where the objects and their data types are defined.
- occm_mib_<version>.mib: This file fetches the objects from the top-level MIB file; based on the alert notification, these objects can be selected for display.
Note:
MIB files are packaged along with OCCM CSAR package. Download the file from MOS. For more information, see Oracle Communications Cloud Native Core, Certificate Management Installation, Upgrade, and Fault Recovery Guide.