2 Installing OCCM
This chapter provides information about installing OCCM in a cloud native environment.
2.1 Prerequisites
Before installing and configuring OCCM, ensure that the following prerequisites are met.
2.1.1 Software Requirements
This section lists the software that must be installed before installing OCCM:
Table 2-1 Preinstalled Software
Software | Version |
---|---|
Kubernetes | 1.27.x, 1.26.x, 1.25.x |
Helm | 3.12.3 |
Podman | 4.4.1 |
To verify the installed software versions and the current Helm deployments, run the following commands:
kubectl version
helm version
helm ls -A
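If you prefer a single check, the following shell snippet is an illustrative sketch (not part of the OCCM package) that verifies the tools listed in Table 2-1 are available and prints their versions:
# Illustrative pre-check; adjust to your environment.
for tool in kubectl helm podman; do
  command -v "$tool" >/dev/null 2>&1 || echo "$tool not found in PATH"
done
kubectl version --client
helm version --short
podman --version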
The following table lists the additional software items along with the supported versions:
Table 2-2 Additional Software
Software | Version |
---|---|
containerd | 1.7.5 |
Calico | 3.25.2 |
MetalLB | 0.13.11 |
Prometheus | 2.44.0 |
Grafana | 9.5.3 |
Jaeger | 1.45.0 |
Istio | 1.18.2 |
Kyverno | 1.9.0 |
cert-manager | 1.12.4 |
Oracle OpenSearch | 2.3.0 |
Oracle OpenSearch Dashboard | 2.3.0 |
Fluentd OpenSearch | 1.16.2 |
Velero | 1.12.0 |
2.1.2 Environment Setup Requirements
This section provides information on environment setup requirements for installing OCCM.
Client Machine Requirement
This section describes the requirements for the client machine, that is, the machine used to run deployment commands:
- Helm repository configured.
- Network access to the Helm repository and Docker image repository.
- Network access to the Kubernetes cluster.
- Required environment settings to run the kubectl, docker, and podman commands. The environment must have privileges to create a namespace in the Kubernetes cluster.
- Helm client installed with the push plugin. Configure the environment so that the helm install command deploys the software in the Kubernetes cluster.
Network Access Requirement
The Kubernetes cluster hosts must have network access to the following repositories:
- Local Helm repository, where the OCCM Helm charts are available.
To check if the Kubernetes cluster hosts have network access to the local Helm repository, run the following command:
helm repo update
- Local Docker image repository, which contains the OCCM Docker images.
To check if the Kubernetes cluster hosts can access the local Docker image repository, try to retrieve any image with a tag name by running one of the following commands:
docker pull <docker-repo>/<image-name>:<image-tag>
podman pull <podman-repo>/<image-name>:<image-tag>
where:
<docker-repo> is the IP address or host name of the Docker repository.
<podman-repo> is the IP address or host name of the Podman repository.
<image-name> is the Docker image name.
<image-tag> is the tag of the image used for the OCCM pod.
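For instance, with a hypothetical local registry reachable at registry.example.com:5000 (replace with your repository host), the connectivity check might look as follows:
podman pull registry.example.com:5000/occm/occm:23.4.4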
Note:
Run the kubectl and helm commands on a system based on the deployment infrastructure. For instance, they can be run on a client machine such as a VM, server, or local desktop.
Server or Space Requirement
For information about the server or space requirements, see the Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
CNE Requirement
This section is applicable only if you are installing OCCM on Cloud Native Environment (CNE). To check the CNE version, run the following command:
echo $OCCNE_VERSION
For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
2.1.3 Resource Requirements
This section lists the resource requirements to install and run OCCM.
Resource Profile
Table 2-3 Resource Profile
Microservice Name | Pod Replica | Limits CPU | Limits Memory (Gi) | Limits Ephemeral Storage (Mi) | Requests CPU | Requests Memory (Gi) | Requests Ephemeral Storage (Mi) |
---|---|---|---|---|---|---|---|
OCCM | 1 | 2 | 2 | 1102 | 2 | 2 | 57 |
Helm Test | 1 | 0.5 | 0.5 | 1102 | 0.5 | 0.5 | 57 |
Total | 2 | 2.5 | 2.5 | 2204 | 2.5 | 2.5 | 114 |
Note:
- Helm Test Job: This job runs on demand when the helm test command is executed and stops after completion. These are short-lived jobs that terminate after the work is done. They are not part of the active deployment resources and need to be considered only during helm test procedures.
- Troubleshooting Tool (Debug Tool) Container: If Troubleshooting Tool Container Injection is enabled during OCCM deployment or upgrade, this container is injected into the OCCM pod and stays as long as the pod or deployment exists. Debug tool resources are not considered in the calculation above. Debug tool resource usage is per pod; if the debug tool is enabled, a maximum of 0.5 vCPU and 0.5 Gi memory per OCCM pod is needed.
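After deployment, you can compare this profile against actual consumption with kubectl top, which requires Metrics Server in the cluster; the namespace occm-ns below is illustrative:
kubectl top pods -n occm-ns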
2.2 Installation Sequence
This section describes preinstallation, installation, and postinstallation tasks for OCCM.
Note:
For a fresh installation of OCCM and an NF on a new cluster, you must follow this installation sequence:
- OCCM
- cnDBTier
- CNC Console
- NF
2.2.1 Preinstallation Tasks
Before installing OCCM, perform the tasks described in this section.
2.2.1.1 Downloading the OCCM Package
To download the OCCM package from My Oracle Support (MOS), perform the following steps:
- Log in to My Oracle Support using your login credentials.
- Click the Patches & Updates tab to locate the patch.
- In the Patch Search window, click the Product or Family (Advanced) option.
- Enter Oracle Communications Cloud Native Core - 5G in the Product field and select the product from the Product drop-down list.
- From the Release drop-down list, select "Oracle Communications Cloud
Native Core Certificate Management <release_number>".
Where, <release_number> indicates the required release number of OCCM.
- Click Search.
The Patch Advanced Search Results list appears.
- Select the required patch from the list.
The Patch Details window appears.
- Click Download.
The File Download window appears.
- Click the <p********_<release_number>_Tekelec>.zip file.
- Extract the release package zip file to the system where the network function must be installed.
The package is named as follows:
occm_csar_<marketing-release-number>.zip
Example:
occm_csar_23_4_4_0_0.zip
2.2.1.2 Pushing the Images to Customer Docker Registry
- Untar the OCCM package to a specific directory:
tar -xvf occm_csar_<marketing-release-number>.zip
For Example:
tar -xvf occm_csar_23_4_4_0_0.zip
The package consists of the following files and folders:
- Files: OCCM Docker images and OCCM Helm charts:
  - OCCM Helm chart: occm-23.4.4.tgz
  - OCCM Network Policy Helm chart: occm-network-policy-23.4.4.tgz
  - Images: occm-23.4.4.tar, nf_test-23.4.4.tar, ocdebug-tools-23.4.4.tar
- Scripts: Custom values and alert files:
  - OCCM custom values file: occm_custom_values_23.4.4.yaml
  - Sample Grafana dashboard file: occm_metric_dashboard_promha_23.4.4.json
  - Sample alert file: occm_alerting_rules_promha_23.4.4.yaml
  - MIB files: occm_mib_23.4.4.mib, occm_mib_tc_23.4.4.mib, toplevel.mib
  - Configuration Open API specification: occm_configuration_openapi_23.4.4.json
  - Network Policy custom values file: occm_network_policy_custom_values_23.4.4.yaml
- Definitions: The Definitions folder contains the CNE compatibility and definition files: occm_cne_compatibility.yaml, occm.yaml
- TOSCA-Metadata: TOSCA.meta
- Verify the checksums of the tarballs as mentioned in Readme.txt.
- Run the following command to load the Docker images from the tar files:
docker load --input <image_file_name.tar>
- Run the following commands to tag and push the Docker images to the Docker registry:
docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
docker push <docker-repo>/<image-name>:<image-tag>
- Run the following command to check if all the images are
loaded:
docker images
- Run the following command to push the Helm charts to the Helm repository:
helm cm-push --force <chart name>.tgz <helm repo>
For example:
helm cm-push --force occm-23.4.4.tgz occm-helm-repo
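To confirm that the chart was pushed successfully, refresh the repository index and search for the chart; the repository name occm-helm-repo matches the example above:
helm repo update
helm search repo occm -l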
2.2.1.3 Verifying and Creating Namespace
This section explains how to verify or create a namespace in the system. If the namespace does not exist, the user must create it.
To verify and create OCCM namespace, perform the following steps:
- Run the following command to verify if the required namespace already exists in the
system:
$ kubectl get namespace
- In the output of the above command, check if the required namespace is
available. If not available, create the namespace using the following
command:
$ kubectl create namespace <required namespace>
For example, the following kubectl command creates the namespace occm-ns:
$ kubectl create namespace occm-ns
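If this step is scripted, an idempotent variant (a sketch, assuming the occm-ns namespace) avoids failures when the namespace already exists:
kubectl get namespace occm-ns >/dev/null 2>&1 || kubectl create namespace occm-ns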
Naming Convention for Namespaces
The namespace must meet the following requirements:
- start and end with an alphanumeric character
- contain 63 characters or less
- contain only alphanumeric characters or '-'
Note:
It is recommended to avoid using the prefix kube- when creating a namespace. The prefix is reserved for Kubernetes system namespaces.
2.2.1.4 Creating Service Account, Role, and Rolebinding
2.2.1.4.1 Global Service Account Configuration
This section is optional and it describes how to manually create a service account, role, and rolebinding.
A custom service account can be provided for OCCM deployment in global.serviceAccountName of occm_custom_values_<version>.yaml:
global:
  dockerRegistry: cgbu-occncc-dev-docker.dockerhub-phx.oci.oraclecorp.com
  serviceAccountName: ""
Configuring Global Service Account to Manage NF Certificates with OCCM and NF in the Same Namespace
## Service account yaml file for occm-sa
apiVersion: v1
kind: ServiceAccount
metadata:
  name: occm-sa
  namespace: occm
  annotations: {}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: occm-role
  namespace: occm
rules:
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - watch
  - list
  - create
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: occm-rolebinding
  namespace: occm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: occm-role
subjects:
- kind: ServiceAccount
  name: occm-sa
  namespace: occm
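Assuming the manifest above is saved as occm-sa.yaml (a hypothetical file name), it can be applied and the granted permissions spot-checked as follows:
kubectl apply -f occm-sa.yaml
# Verify that the role allows the service account to create secrets:
kubectl auth can-i create secrets -n occm --as=system:serviceaccount:occm:occm-sa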
Configuring Global Service Account to Manage NF Certificates with OCCM and NF in Separate Namespaces
OCCM provides support for key and certificate management in multiple namespaces.
In this deployment model, OCCM is deployed in a namespace different from the namespaces of the components it manages. It needs privileges to read, write, and delete Kubernetes secrets in the managed namespaces.
- AUTOMATIC Service Account Configuration: Roles and role bindings are created for each namespace specified using the occmAccessedNamespaces field in occm_custom_values.yaml. A service account for OCCM is created automatically, and the created roles are assigned using the corresponding role bindings.
Namespaces managed by the OCCM service account:
occmAccessedNamespaces:
  - ns1
  - ns2
Note: Automatic Service Account Configuration is applicable for Single Namespace Management as well.
- Custom Service Account Configuration: A custom service account can also be configured against the serviceAccountName field in occm_custom_values.yaml. If this is provided, automatic service account creation is not triggered, and the occmManagedNamespaces field does not need to be configured.
A sample OCCM service account yaml file for creating a custom service account is as follows:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: occm-sa
  namespace: occm
  annotations: {}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: ns1
  name: occm-secret-writer-role
rules:
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - secrets
  verbs:
  - get
  - watch
  - list
  - create
  - update
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: ns2
  name: occm-secret-writer-role
rules:
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - secrets
  verbs:
  - get
  - watch
  - list
  - create
  - update
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: occm-secret-writer-rolebinding
  namespace: ns1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: occm-secret-writer-role
subjects:
- kind: ServiceAccount
  name: occm-sa
  namespace: occm
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: occm-secret-writer-rolebinding
  namespace: ns2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: occm-secret-writer-role
subjects:
- kind: ServiceAccount
  name: occm-sa
  namespace: occm
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: occm-role
  namespace: occm
rules:
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - watch
  - list
  - create
  - delete
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: occm-rolebinding
  namespace: occm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: occm-role
subjects:
- kind: ServiceAccount
  name: occm-sa
  namespace: occm
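In the multi-namespace model, the same spot check can be repeated per managed namespace (ns1 and ns2 here, matching the sample above):
for ns in ns1 ns2; do
  kubectl auth can-i update secrets -n "$ns" --as=system:serviceaccount:occm:occm-sa
done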
2.2.1.4.2 Helm Test Service Account Configuration
This section is optional. A custom service account can be provided for Helm test in global.helmTestServiceAccountName of the occm_custom_values_<version>.yaml file:
global:
  helmTestServiceAccountName: occm-helmtest-serviceaccount
A sample Helm test service account yaml file is as follows:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: occm-helmtest-serviceaccount
  namespace: occm
  annotations: {}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: occm-helmtest-role
  namespace: occm
rules:
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  - serviceaccounts
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  - rolebindings
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - monitoring.coreos.com
  resources:
  - prometheusrules
  verbs:
  - get
  - watch
  - list
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: occm-helmtest-rolebinding
  namespace: occm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: occm-helmtest-role
subjects:
- kind: ServiceAccount
  name: occm-helmtest-serviceaccount
  namespace: occm
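Assuming the manifest above is saved as occm-helmtest-sa.yaml (a hypothetical file name), apply it and confirm the role binding before running the Helm test:
kubectl apply -f occm-helmtest-sa.yaml
kubectl describe rolebinding occm-helmtest-rolebinding -n occm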
2.2.1.5 Configuring Network Policies
Kubernetes network policies allow you to define ingress or egress rules based on Kubernetes resources such as Pod, Namespace, IP, and Port. These rules are selected based on Kubernetes labels in the application. These network policies enforce access restrictions for all the applicable data flows except communication from Kubernetes node to pod for invoking container probe.
Note: Configuring network policies is an optional step. Based on the security requirements, network policies may or may not be configured.
For more information on network policies, see https://kubernetes.io/docs/concepts/services-networking/network-policies/.
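For illustration only, the following is a minimal NetworkPolicy of the kind such rules describe; the actual OCCM policies ship in occm_network_policy_custom_values_<version>.yaml, and the namespace label value cncc-ns is a placeholder:
cat <<'EOF' | kubectl apply -n occm -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-example
spec:
  podSelector: {}            # applies to all pods in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: cncc-ns
EOF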
Configuring Network Policies
Following are the various operations that can be performed for network policies:
2.2.1.5.1 Installing Network Policies
Prerequisite
Network Policies are implemented by using the network plug-in. To use network policies, you must be using a networking solution that supports Network Policy.
Note:
For a fresh installation, it is recommended to install Network Policies before installing OCCM. However, if OCCM is already installed, you can still install the Network Policies.
To install the network policies:
- Open the occm_network_policy_custom_values_23.4.4.yaml file provided in the release package zip file. For downloading the file, see Downloading the OCCM Package and Pushing the Images to Customer Docker Registry.
- The file is provided with the default network policies. If required, update the occm_network_policy_custom_values_23.4.4.yaml file. For more information on the parameters, see Table 2-5.
  - To connect with CNC Console, update the following parameter in the allow-ingress-from-console network policy in the occm_network_policy_custom_values_23.4.4.yaml file:
    kubernetes.io/metadata.name: <namespace in which CNCC is deployed>
  - In the allow-ingress-prometheus policy, the kubernetes.io/metadata.name parameter must contain the value of the namespace where Prometheus is deployed, and the app.kubernetes.io/name parameter value should match the label from the Prometheus pod.
- To connect with CNC Console, update the below parameter in the
- Run the following command to install the network policies:
helm install <release_name> -f <occm_network_policy_custom_values_<version>.yaml> --namespace <namespace> <chartpath>./<chart>.tgz
For example:
helm install occm-network-policy -f occm_network_policy_custom_values_23.4.4.yaml --namespace occm occm-network-policy-23.4.4.tgz
Where,
- release_name: occm-network-policy Helm release name.
- custom-value-file: occm-network-policy custom values file.
- namespace: OCCM namespace.
- chartpath: location where the network policy package is stored.
Note:
- Connections that were created before installing the network policies and that still persist are not impacted by the new network policies. Only new connections are impacted.
2.2.1.5.2 Upgrading Network Policies
To add, delete, or update the network policies:
- Modify the occm_network_policy_custom_values_23.4.4.yaml file to update, add, or delete the network policy.
- Run the following command to upgrade the network policies:
helm upgrade <release_name> -f occm_network_policy_custom_values_<version>.yaml --namespace <namespace> <chartpath>./<chart>.tgz
For example:
helm upgrade occm-network-policy -f occm_network_policy_custom_values_23.4.4.yaml --namespace occm occm-network-policy-23.4.4.tgz
Where,
- release_name: occm-network-policy Helm release name.
- custom-value-file: occm-network-policy custom values file.
- namespace: OCCM namespace.
- chartpath: location where the network policy package is stored.
2.2.1.5.3 Verifying Network Policies
Run the following command to verify if the network policies are deployed successfully:
kubectl get networkpolicy -n <namespace>
For Example:
kubectl get networkpolicy -n occm
Where,
- namespace: OCCM namespace.
2.2.1.5.4 Uninstalling Network Policies
Run the following command to uninstall all the network policies:
helm uninstall <release_name> --namespace <namespace>
For Example:
helm uninstall occm-network-policy --namespace occm
2.2.1.5.5 Configuration Parameters for Network Policies
Table 2-4 Supported Kubernetes Resource for Configuring Network Policies
Parameter | Description | Details |
---|---|---|
apiVersion | This is a mandatory parameter. Specifies the Kubernetes API version for access control. Note: This is the supported API version for network policy. This is a read-only parameter. | Data Type: string. Default Value: |
kind | This is a mandatory parameter. Represents the REST resource this object represents. Note: This is a read-only parameter. | Data Type: string. Default Value: NetworkPolicy |
Table 2-5 Supported parameters for Configuring Network Policies
Parameter | Description | Details |
---|---|---|
metadata.name | This is a mandatory parameter. Specifies a unique name for the network policy. | {{ .metadata.name }} |
spec.{} | This is a mandatory parameter. This consists of all the information needed to define a particular network policy in the given namespace. Note: OCCM supports the spec parameters defined in the Kubernetes Resource Category. | Default Value: NA |
For more information about this functionality, see the Network Policies section in Oracle Communications Cloud Native Core, Certificate Management User Guide.
2.2.2 Installation Tasks
Note:
- Before installing OCCM, you must complete the Prerequisites and Preinstallation Tasks.
- In a georedundant deployment, perform the steps explained in this section on all the georedundant sites.
2.2.2.1 Installing OCCM Package
This section includes information about OCCM deployment.
OCCM Deployment on Kubernetes
- Run the following command to check the version of the Helm chart installation:
helm search repo <release_name> -l
Example:
helm search repo occm -l
NAME                     CHART VERSION  APP VERSION  DESCRIPTION
cgbu-occm-dev-helm/occm  23.4.4         23.4.4       A helm chart for OCCM deployment
- Prepare the occm_custom_values_23.4.4.yaml file with the required parameter information. For more information on these parameters, see Configuration Options.
- Deploy OCCM using one of the following options:
- Installation using helm
tar:
To install using Helm tar, run the following command:
helm install <release_name> -f occm_custom_values_<version>.yaml -n <namespace_name> occm-<version>.tgz
For example:
helm install occm -f occm_custom_values_23.4.4.yaml --namespace occm-ns occm-23.4.4.tgz
Sample output:
NAME: occm
LAST DEPLOYED: Wed Nov 15 10:11:17 2023
NAMESPACE: occm-ns
STATUS: deployed
REVISION: 1
TEST SUITE: None
- Installation using helm
repository:
To install using Helm repository, run the following command:
helm install <release_name> <helm-repo> -f occm_custom_values_<version>.yaml --namespace <namespace_name> --version <helm_version>
For example:
helm install occm cgbu-occm-dev-helm/occm -f occm_custom_values_23.4.4.yaml --namespace occm-ns --version 23.4.4
where,
- release_name and namespace_name: depend on customer configuration.
- helm-repo: the repository name where the helm images and charts are stored.
- values: the helm configuration file that needs to be updated based on the Docker registry.
: is the helm configuration file which needs to be updated based on the docker registry - Verify Deployment:
- Run the following command to check the deployment status:
helm status <release_name>
- Run the following command to check if all the services are deployed and running:
kubectl get svc -n occm-ns
For example:
NAME           TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
service/occm   ClusterIP  10.233.26.68   <none>       8989/TCP  21m
- Run the following command to check if the pods are up and running:
kubectl get po -n occm-ns
Example:
NAME                          READY  STATUS   RESTARTS  AGE
pod/occm-occm-5fb5557f75-mjhcbh  1/1    Running  0        21m
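The checks above can also be combined into a single query (illustrative):
kubectl get deploy,svc,po -n occm-ns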
2.2.3 Postinstallation Tasks
This section explains the postinstallation tasks for OCCM.
2.2.3.1 Verifying Installation
To verify if OCCM is installed:
- Run the following command to verify the installation status:
helm status <helm-release> -n <namespace>
For example:
helm status occm -n occm
In the output, if STATUS is showing as deployed, then the installation is successful.
- Run the following command to verify if the pods are up and active:
kubectl get jobs,pods -n <release_namespace>
For example:
kubectl get pods -n occm
- Run the following command to verify if the services are deployed and active:
kubectl get services -n <release_namespace>
For example:
kubectl get services -n occm
Note:
Take a backup of the following files that are required during fault recovery:
- Updated occm_custom_values_23.4.4.yaml file.
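A simple way to take the backup (an illustrative sketch; adjust the destination path to your environment):
backup_dir=/var/backups/occm/$(date +%Y%m%d)
mkdir -p "$backup_dir"
cp occm_custom_values_23.4.4.yaml "$backup_dir/"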
If the installation is not successful or you do not see the status as Running for all the pods, perform the troubleshooting steps provided in the Oracle Communications Cloud Native Core, Certificate Management Troubleshooting Guide.
2.2.3.2 Performing Helm Test
Helm Test is a feature that validates successful installation of OCCM along with the readiness of the pod (the configured readiness probe URL is checked for success). The pods to be checked are selected based on the namespace and label selector configured for the Helm test.
Note:
Helm 3 is mandatory for the Helm Test feature to work. Follow the instructions below to run the Helm test functionality. The configurations mentioned in the following step must be done before running the helm install command.
- Configure the Helm test configurations under the global section in the values.yaml file:
global:
  # Helm test related configurations
  test:
    nfName: occm
    image:
      name: occm/nf_test
      tag: <version>
      imagePullPolicy: IfNotPresent
    config:
      logLevel: WARN
      timeout: 240
- Run the following helm test command:
helm test <helm_release_name> -n <k8s namespace>
For example:
helm test occm -n occmdemo
Sample output:
Pod occm-test pending
Pod occm-test pending
Pod occm-test pending
Pod occm-test pending
Pod occm-test running
Pod occm-test succeeded
NAME: occm
LAST DEPLOYED: Wed Nov 08 02:38:25 2023
NAMESPACE: occmdemo
STATUS: deployed
REVISION: 1
TEST SUITE: occm-test
Last Started: Wed Nov 08 02:38:25 2023
Last Completed: Wed Nov 08 02:38:25 2023
Phase: Succeeded
NOTES:
# Copyright 2020 (C), Oracle and/or its affiliates. All rights reserved.
Thank you for installing occm.
Your release is named occm, Release Revision: 1.
To learn more about the release, try:
$ helm status occm
$ helm get occm
- Wait for the helm test job to complete. Check if the test job is successful in the output.
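If the test fails, the test pod logs usually indicate the cause; Helm 3 can fetch them as part of the test run (illustrative, using the example release and namespace above):
helm test occm -n occmdemo --logs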