2 Installing SCP

This chapter provides information about installing SCP in a cloud native environment, including the prerequisites and downloading the deployment package.

Note:

SCP supports fresh installation, and it can also be upgraded from 25.1.1xx and 25.1.2xx. For more information about how to upgrade SCP, see Upgrading SCP.

SCP installation is supported over the following platforms:

  • Oracle Communications Cloud Native Core, Cloud Native Environment (CNE): For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
  • Oracle Cloud Infrastructure (OCI) using OCI Adaptor: For more information about OCI, see Oracle Communications Cloud Native Core, OCI Deployment Guide.

SCP installation comprises prerequisites, preinstallation, installation, and postinstallation tasks. You must perform the SCP installation tasks in the sequence outlined in Installation Sequence.

2.1 Prerequisites

Before installing and configuring SCP, ensure that the following prerequisites are met.

2.1.1 Software Requirements

This section lists the software that must be installed before installing SCP.

Note:

Table 2-2 and Table 2-3 offer a comprehensive list of software necessary for the proper functioning of SCP during deployment. However, these tables are indicative, and the software used can vary based on the customer's specific requirements and solution.
The Software Requirement column in Table 2-2 and Table 2-3 indicates one of the following:
  • Mandatory: Absolutely essential; the software cannot function without it.
  • Recommended: Suggested for optimal performance or best practices but not strictly necessary.
  • Conditional: Required only under specific conditions or configurations.
  • Optional: Not essential; can be included based on specific use cases or preferences.

The following software must be installed before installing SCP:

Table 2-2 Preinstalled Software Versions

Software | 25.2.1xx | 25.1.2xx | 25.1.1xx | Software Requirement | Usage Description
Helm | 3.18.2 | 3.17.1 | 3.16.2 | Mandatory

Helm, a package manager, simplifies deploying and managing NFs on Kubernetes with reusable, versioned charts for easy automation and scaling.

Impact:

Preinstallation is required. Without this capability, management of NF versions and configurations becomes time-consuming and error-prone, impacting deployment consistency.

Kubernetes | 1.33.1 | 1.32.0 | 1.31.0 | Mandatory

Kubernetes orchestrates scalable, automated NF deployments for high availability and efficient resource utilization.

Impact:

Preinstallation is required. Without orchestration capabilities, deploying and managing network functions (NFs) can become complex, leading to inefficient resource utilization and potential downtime.

Podman | 5.2.2 | 4.9.4 | 4.9.4 | Recommended

Podman is a part of Oracle Linux. It manages and runs containerized NFs without requiring a daemon, offering flexibility and compatibility with Kubernetes.

Impact:

Preinstallation is required. Without efficient container management, the development and deployment of NFs could become cumbersome, impacting agility.

To check the versions of the preinstalled software in the cloud native environment, run the following commands:

kubectl version
helm version
podman version
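
Sample output (illustrative only; the build metadata varies by environment):

helm version
version.BuildInfo{Version:"v3.18.2", GitCommit:"...", GitTreeState:"clean", GoVersion:"..."}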

Note:

This guide covers the installation instructions for SCP when Podman is the container platform and Helm is the package manager. For non-CNE deployments, operators can use commands based on their deployed container runtime environment. For more information, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

The following software is available if SCP is deployed in CNE. If you are deploying SCP in any other cloud native environment, these additional software items must be installed before installing SCP.

To check the installed software, run the following command:

helm ls -A

The list of additional software items, along with the supported versions and usage, is provided in the following table:

Table 2-3 Additional Software Versions

Software | 25.2.1xx | 25.1.2xx | 25.1.1xx | Software Requirement | Usage Description
AlertManager | 0.28.0 | 0.28.0 | 0.27.0 | Recommended

Alertmanager is a component that works in conjunction with Prometheus to manage and dispatch alerts. It handles the routing and notification of alerts to various receivers.

Impact:

Not implementing alerting mechanisms can lead to delayed responses to critical issues, potentially resulting in service outages or degraded performance.

Calico | 3.29.3 | 3.29.1 | 3.28.1 | Recommended

Calico provides networking and security for NFs in Kubernetes, ensuring scalable, policy-driven connectivity.

Impact:

Calico is a popular Container Network Interface (CNI) plugin, and a CNI is mandatory for the functioning of 5G NFs. Without a CNI plugin, the network could be exposed to security vulnerabilities and inadequate traffic management, impacting the reliability of NF communications.

cinder-csi-plugin | 1.32.0 | 1.32.0 | 1.31.1 | Mandatory

Cinder CSI (Container Storage Interface) plugin is used for provisioning and managing block storage in Kubernetes. It is often used in OpenStack environments to provide persistent storage for containerized applications.

Impact:

Without the CSI plugin, provisioning block storage for NFs would be manual and inefficient, complicating storage management.

containerd | 2.0.5 | 1.7.24 | 1.7.22 | Recommended

Containerd manages container lifecycles to run NFs efficiently in Kubernetes.

Impact:

A lack of a reliable container runtime could lead to performance issues and instability in NF operations.

CoreDNS | 1.12.0 | 1.11.13 | 1.11.1 | Recommended

CoreDNS is the DNS server in Kubernetes, which provides DNS resolution services within the cluster.

Impact:

DNS is an essential part of deployment. Without proper service discovery, NFs would struggle to communicate with each other, leading to connectivity issues and operational failures.

Fluentd | 1.17.1 | 1.17.1 | 1.17.1 | Recommended

Fluentd is an open source data collector that streamlines data collection and consumption, ensuring improved data utilization and comprehension.

Impact:

Not utilizing centralized logging can hinder the ability to track NF activity and troubleshoot issues effectively, complicating maintenance and support.

Grafana | 7.5.17 | 9.5.3 | 9.5.3 | Recommended

Grafana is a popular open source platform for monitoring and observability. It provides a user-friendly interface for creating and viewing dashboards based on various data sources.

Impact:

Without visualization tools, interpreting complex metrics and gaining insights into NF performance would be cumbersome, affecting effective management.

Jaeger | 1.69.0 | 1.65.0 | 1.60.0 | Recommended

Jaeger provides distributed tracing for 5G NFs, enabling performance monitoring and troubleshooting across microservices.

Impact:

Not utilizing distributed tracing may hinder the ability to diagnose performance bottlenecks, making it challenging to optimize NF interactions and user experience.

Kyverno | 1.13.4 | 1.13.4 | 1.12.5 | Recommended

Kyverno is a Kubernetes policy engine that allows you to manage and enforce policies for resource configurations within a Kubernetes cluster.

Impact:

Without the policy enforcement, there could be misconfigurations, resulting in security risks and instability in NF operations, affecting reliability.

MetalLB | 0.14.4 | 0.14.4 | 0.14.4 | Recommended

MetalLB is used as the load balancing solution in CNE and is mandatory for the solution to work. MetalLB provides load balancing and external IP management for 5G NFs in Kubernetes environments.

Impact:

Without load balancing, traffic distribution among NFs may be inefficient, leading to potential bottlenecks and service degradation.

metrics-server | 0.7.2 | 0.7.2 | 0.7.2 | Recommended

Metrics server is used in Kubernetes for collecting resource usage data from pods and nodes.

Impact:

Without resource metrics, auto-scaling and resource optimization would be limited, potentially leading to resource contention or underutilization.

Multus | 4.1.3 | 4.1.3 | 3.8.0 | Recommended

Multus enables multiple network interfaces in Kubernetes pods, allowing custom configurations and isolated paths for advanced use cases such as NF deployments, ultimately supporting traffic segregation.

Impact:

Without this capability, connecting NFs to multiple networks could be limited, impacting network performance and isolation.

OpenSearch | 2.19.1 | 2.15.0 | 2.11.0 | Recommended

OpenSearch provides scalable search and analytics for 5G NFs, enabling efficient data exploration and visualization.

Impact:

Without a robust analytics solution, there would be difficulties in identifying performance issues and optimizing NF operations, affecting overall service quality.

OpenSearch Dashboard | 2.19.1 | 2.15.0 | 2.11.0 | Recommended

OpenSearch dashboard visualizes and analyzes data for 5G NFs, offering interactive insights and custom reporting.

Impact:

Without visualization capabilities, understanding NF performance metrics and trends would be difficult, limiting informed decision making.

Prometheus | 3.4.1 | 3.2.0 | 2.52.0 | Mandatory

Prometheus is a popular open source monitoring and alerting toolkit. It collects and stores metrics from various sources and allows for alerting and querying.

Impact:

Not employing this monitoring solution could result in a lack of visibility into NF performance, making it difficult to troubleshoot issues and optimize resource usage.

prometheus-kube-state-metric | 2.16.0 | 2.15.0 | 2.13.0 | Recommended

Kube-state-metrics is a service that generates metrics about the state of various resources in a Kubernetes cluster. It is commonly used for monitoring and alerting purposes.

Impact:

Without these metrics, monitoring the health and performance of NFs could be challenging, making it harder to proactively address issues.

prometheus-node-exporter | 1.9.1 | 1.8.2 | 1.8.2 | Recommended

Prometheus Node Exporter collects hardware and OS-level metrics from Linux hosts.

Impact:

Without node-level metrics, visibility into infrastructure performance would be limited, complicating the identification of resource bottlenecks.

Prometheus Operator | 0.83.0 | 0.80.1 | 0.76.0 | Recommended

The Prometheus Operator is used for managing Prometheus monitoring systems in Kubernetes. Prometheus Operator simplifies the configuration and management of Prometheus instances.

Impact:

Not using this operator could complicate the setup and management of monitoring solutions, increasing the risk of missed performance insights.

rook | 1.16.7 | 1.16.6 | 1.15.2 | Mandatory

Rook is the Ceph orchestrator for Kubernetes that provides storage solutions. It is used in the Bare Metal CNE solution.

Impact:

Not utilizing Rook could increase the complexity of deploying and managing Ceph, making it difficult to scale storage solutions in a Kubernetes environment.

snmp-notifier | 2.0.0 | 1.6.1 | 1.5.0 | Recommended

snmp-notifier sends SNMP alerts for 5G NFs, providing real-time notifications for network events.

Impact:

Without SNMP notifications, proactive monitoring of NF health and performance could be compromised, delaying response to critical issues.

Velero | 1.13.2 | 1.13.2 | 1.13.2 | Recommended

Velero backs up and restores Kubernetes clusters for 5G NFs, ensuring data protection and disaster recovery.

Impact:

Without backup and recovery capabilities, customers would face a risk of data loss and extended downtime, requiring a full cluster reinstall in case of failure or upgrade.

Note:

On OCI, the software mentioned above is not required because the OCI Observability and Management service is used for logging, metrics, alerts, and KPIs. For more information, see Oracle Communications Cloud Native Core, OCI Deployment Guide.

2.1.2 Environment Setup Requirements

This section describes the environment setup requirements for installing SCP.
2.1.2.1 Client Machine Requirement

This section describes the requirements for the client machine, that is, the machine used to run deployment commands.

The client machine should have:

  • Helm repository configured.
  • network access to the Helm repository and Docker image repository.
  • network access to the Kubernetes cluster.
  • required environment settings to run kubectl, docker, and podman commands. The environment should have privileges to create a namespace in the Kubernetes cluster.
  • Helm client installed with the push plugin. Configure the environment so that the helm install command deploys the software in the Kubernetes cluster (see the example after this list).
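
For example, a minimal client setup might resemble the following commands; the repository name and URL are placeholders, not actual values for your environment:

# Add and refresh the Helm repository (name and URL are examples)
helm repo add ocscp-helm-repo http://helm-repo.example.com/ocscp
helm repo update

# Verify access to the Kubernetes cluster
kubectl get nodes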
2.1.2.2 Network Access Requirements

The Kubernetes cluster hosts must have network access to the following repositories:

  • Local Helm repository: It contains SCP Helm charts.
    To check if the Kubernetes cluster hosts can access the local Helm repository, run the following command:
    helm repo update
  • Local Docker image repository: It contains SCP Docker images.

    To check if the Kubernetes cluster hosts can access the local Docker image repository, pull any image with an image-tag using either of the following commands:

    docker pull <docker-repo>/<image-name>:<image-tag>
    podman pull <podman-repo>/<image-name>:<image-tag>

Where,

  • <docker-repo> is the IP address or host name of the Docker repository.
  • <podman-repo> is the IP address or host name of the Podman repository.
  • <image-name> is the Docker image name.
  • <image-tag> is the tag assigned to the Docker image used for the SCP pod.

For example:

docker pull CUSTOMER_REPO/oc-app-info:25.2.100

podman pull occne-repo-host:5000/ocscp/oc-app-info:25.2.100

Note:

Run kubectl and helm commands on a system appropriate to the deployment infrastructure, for example, a client machine such as a VM, server, or local desktop.
2.1.2.3 Server or Space Requirement

For information about server or space requirements, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

2.1.2.4 CNE Requirement

This section is applicable only if you are installing SCP on Cloud Native Environment (CNE).

SCP supports CNE 25.2.1xx, 25.1.2xx, and 25.1.1xx.

To check the CNE version, run the following command:

echo $OCCNE_VERSION

Note:

If Istio or Aspen Service Mesh (ASM) is installed on CNE, run the following command to patch the "disallow-capabilities" clusterpolicy of CNE and exclude the NF namespace before the NF deployment:
kubectl patch clusterpolicy disallow-capabilities --type "json" -p '[{"op":"add","path":"/spec/rules/0/exclude/any/0/resources/namespaces/-","value":"<namespace of NF>"}]'

Where, <namespace of NF> is the namespace of SCP, cnDBTier, or Oracle Communications Cloud Native Configuration Console (CNC Console).
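
For example, if SCP is deployed in the ocscp namespace, the patch command is as follows (the namespace value is illustrative):

kubectl patch clusterpolicy disallow-capabilities --type "json" -p '[{"op":"add","path":"/spec/rules/0/exclude/any/0/resources/namespaces/-","value":"ocscp"}]'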

For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

2.1.2.5 OCI Requirements

SCP can be deployed in OCI. While deploying SCP in OCI, use the Operator instance (VM) instead of the Bastion Host.

For more information about OCI Adaptor, see Oracle Communications Cloud Native Core, OCI Adaptor User Guide.

2.1.2.6 cnDBTier Requirements

Note:

If the parameter values in the new ocscp_dbtier_custom_values.yaml file differ from those in the delivered file, obtain the values of the cnDBTier parameters listed in cnDBTier Customization Parameters from the delivered ocscp_dbtier_custom_values.yaml file and use those values in the new file.

SCP supports cnDBTier 25.2.1xx, 25.1.2xx, and 25.1.1xx. cnDBTier must be configured and running before installing SCP.

Note:

In georedundant deployment, each site should have a dedicated cnDBTier.

To install cnDBTier 25.2.1xx with the resources recommended for SCP, customize the ocscp_dbtier_25.2.100_custom_values_25.2.100.yaml file in the ocscp_csar_25_2_1_0_0_0.zip package with the required deployment parameters. cnDBTier parameters vary depending on whether the deployment spans a single site, two sites, or three sites. For more information, see cnDBTier Customization Parameters.

Note:

If you already have an older version of cnDBTier, upgrade cnDBTier with the resources recommended for SCP by customizing the ocscp_dbtier_25.2.100_custom_values_25.2.100.yaml file in the ocscp_csar_25_2_1_0_0_0.zip package with the required deployment parameters. Use the same PVC size as in the previous release. For more information, see cnDBTier Customization Parameters.

For more information about cnDBTier installation, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
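
A representative installation command is sketched below; the release name, chart reference, and namespace are placeholders, and the guide referenced above remains the authoritative procedure:

# Sketch only: install cnDBTier with the SCP-recommended values file
helm install cndbtier <cndbtier-chart> --namespace <cndbtier-namespace> \
  -f ocscp_dbtier_25.2.100_custom_values_25.2.100.yaml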

2.1.2.7 OCCM Requirements
SCP supports OCCM 25.2.1xx.

To support automated certificate lifecycle management, SCP integrates with Oracle Communications Cloud Native Core, Certificate Management (OCCM) in compliance with 3GPP security recommendations. For more information about OCCM, see the following guides:

  • Oracle Communications Cloud Native Core, Certificate Management Installation, Upgrade, and Fault Recovery Guide
  • Oracle Communications Cloud Native Core, Certificate Management User Guide
2.1.2.8 OSO Requirement

SCP supports Operations Services Overlay (OSO) 25.2.1xx, 25.1.2xx, and 25.1.1xx for common operations services (Prometheus and components such as Alertmanager and Pushgateway) on a Kubernetes cluster that does not have these common services. For more information about OSO installation, see Oracle Communications Cloud Native Core, Operations Services Overlay Installation Guide.

2.1.2.9 CNC Console Requirements

SCP supports CNC Console 25.2.1xx to configure and manage Network Functions. For more information, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.

2.1.3 Resource Requirements

This section lists the resource requirements to install and run SCP.

Note:

The performance and capacity of the SCP system may vary based on the call model, feature or interface configuration, network conditions, and underlying CNE and hardware environment.

2.1.3.1 SCP Services

The following table lists the resource requirements for SCP services:

Table 2-4 SCP Services

Service Name | Pod Replicas (Min/Max) | vCPU per Pod (Min/Max) | Memory per Pod in Gi (Min/Max) | Ephemeral Storage per Pod (Min in Mi/Max in Gi, if enabled)
Helm test | 1/1 | 3/3 | 3/3 | 70/1
Helm Hook | 1/1 | 3/3 | 3/3 | 70/1
<helm-release-name>-scpc-subscription | 1/1 | 2/2 | 2/2 | 70/1
<helm-release-name>-scpc-notification | 1/1 | 8/8 | 8/8 | 70/1
<helm-release-name>-scpc-audit | 1/1 | 3/3 | 4/4 | 70/1
<helm-release-name>-scpc-configuration | 1/1 | 2/2 | 2/2 | 70/1
<helm-release-name>-scpc-alternate-resolution | 1/1 | 2/2 | 2/2 | 70/1
<helm-release-name>-scp-cache | 3/3 | 8/8 | 8/8 | 70/1
<helm-release-name>-scp-nrfproxy | 2/16 | 8/8 | 8/8 | 70/1
<helm-release-name>-scp-load-manager | 2/3 | 8/8 | 8/8 | 70/1
<helm-release-name>-scp-oauth-nrfproxy | 2/16 | 8/8 | 8/8 | 70/-
<helm-release-name>-scp-worker(profile 1) | 2/32 | 4/4 | 12/12 | 70/1
<helm-release-name>-scp-worker(profile 2) | 2/64 | 8/8 | 18/18 | 70/1
<helm-release-name>-scp-mediation | 2/16 | 8/8 | 8/8 | 70/1
<helm-release-name>-scp-mediation test | 1/1 | 8/8 | 8/8 | 70/1
<helm-release-name>-scp-worker(profile 3) | 2/64 | 12/12 | 24/24 | 70/1

Note:

  • To go beyond 60000 Transactions Per Second (TPS), you must deploy SCP with scp-worker configured with Profile 2.
  • <helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is OCSCP, the SCPC-Subscription microservice name is "OCSCP-SCPC-Subscription".
  • Helm Hooks Jobs: These are pre and post jobs invoked during installation, upgrade, rollback, and uninstallation of the deployment. They are short-lived jobs that terminate after the operation completes.
  • Helm Test Job: This job runs on demand when the helm test command is initiated; it runs the Helm test and stops after completion. Such jobs are not part of the active deployment resources and are considered only during Helm test procedures (see the example after this list).
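
For reference, the Helm test job can be triggered on demand as follows (the release name and namespace are placeholders):

helm test <helm-release-name> -n <namespace>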
2.1.3.2 Upgrade

The following table lists the resource requirements for upgrading SCP.

Table 2-5 Upgrade

Service Name | Pod Replicas (Min/Max) | vCPU per Pod (Min/Max) | Memory per Pod in Gi (Min/Max) | Ephemeral Storage per Pod (Min in Mi/Max in Gi, if enabled)
Helm test | 0/0 | 0/0 | 0/0 | 70/1
Helm Hook | 0/0 | 0/0 | 0/0 | 70/1
<helm-release-name>-scpc-subscription | 1/1 | 2/2 | 2/2 | 70/1
<helm-release-name>-scpc-notification | 1/1 | 8/8 | 8/8 | 70/1
<helm-release-name>-scpc-audit | 1/1 | 3/3 | 4/4 | 70/1
<helm-release-name>-scpc-configuration | 1/1 | 2/2 | 2/2 | 70/1
<helm-release-name>-scpc-alternate-resolution | 1/1 | 2/2 | 2/2 | 70/1
<helm-release-name>-scp-cache | 1/1 | 8/8 | 8/8 | 70/1
<helm-release-name>-scp-nrfproxy | 1/4 | 8/8 | 8/8 | 70/1
<helm-release-name>-scp-load-manager | 1/1 | 8/8 | 8/8 | 70/1
<helm-release-name>-scp-oauth-nrfproxy | 1/4 | 8/8 | 8/8 | 70/1
<helm-release-name>-scp-worker(profile 1) | 2/8 | 4/4 | 12/12 | 70/1
<helm-release-name>-scp-worker(profile 2) | 2/16 | 8/8 | 18/18 | 70/1
<helm-release-name>-scp-mediation | 2/4 | 8/8 | 8/8 | 70/1
<helm-release-name>-scp-mediation test | 0/0 | 0/0 | 0/0 | 70/1
<helm-release-name>-scp-worker(profile 3) | 2/16 | 12/12 | 24/24 | 70/1

Note:

<helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is OCSCP, the SCPC-Subscription microservice name is "OCSCP-SCPC-Subscription".
2.1.3.3 ASM Sidecar

SCP leverages the platform service mesh (for example, Aspen Service Mesh) for all internal and external TLS communication. If ASM sidecar injection is enabled during SCP deployment or upgrade, this container is injected into each SCP pod (or selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists. For more information about installing ASM, see Configuring SCP to Support Aspen Service Mesh.

Table 2-6 ASM Sidecar

Service Name | vCPU per Pod (Min/Max) | Memory per Pod in Gi (Min/Max) | Ephemeral Storage per Pod (Min in Mi/Max in Gi, if enabled)
Helm test | 1/1 | 1/1 | 70/1
Helm Hook | 3/3 | 3/3 | 70/1
<helm-release-name>-scpc-subscription | 1/1 | 1/1 | 70/1
<helm-release-name>-scpc-notification | 3/3 | 3/3 | 70/1
<helm-release-name>-scpc-audit | 1/1 | 1/1 | 70/1
<helm-release-name>-scpc-configuration | 1/1 | 1/1 | 70/1
<helm-release-name>-scpc-alternate-resolution | 1/1 | 1/1 | 70/1
<helm-release-name>-scp-cache | 4/4 | 4/4 | 70/1
<helm-release-name>-scp-nrfproxy | 5/5 | 5/5 | 70/1
<helm-release-name>-scp-load-manager | 4/4 | 4/4 | 70/1
<helm-release-name>-scp-oauth-nrfproxy | 5/5 | 5/5 | 70/1
<helm-release-name>-scp-worker (profile 1) | 3/3 | 4/4 | 70/1
<helm-release-name>-scp-worker (profile 2) | 6/6 | 6/6 | 70/1
<helm-release-name>-scp-mediation | 0/0 | 0/0 | 70/1
<helm-release-name>-scp-mediation test | 0/0 | 0/0 | 70/1
<helm-release-name>-scp-worker (profile 3) | 10/10 | 10/10 | 70/1

Note:

<helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is OCSCP, the SCPC-Subscription microservice name is "OCSCP-SCPC-Subscription".
2.1.3.4 Debug Tool Container

The Debug Tool Container provides third-party troubleshooting tools for debugging runtime issues in a lab environment. If Debug Tool Container injection is enabled during SCP deployment or upgrade, this container is injected into each SCP pod (or selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists. For more information about configuring the Debug Tool Container, see Oracle Communications Cloud Native Core, Service Communication Proxy Troubleshooting Guide.

Table 2-7 Debug Tool Container

Service Name | vCPU per Pod (Min/Max) | Memory per Pod in Gi (Min/Max) | Ephemeral Storage per Pod (Min in Mi/Max in Gi, if enabled)
Helm test | 0/0 | 0/0 | 70/1
Helm Hook | 0/0 | 0/0 | 70/1
<helm-release-name>-scpc-subscription | 1/1 | 2/2 | 70/1
<helm-release-name>-scpc-notification | 1/1 | 2/2 | 70/1
<helm-release-name>-scpc-audit | 1/1 | 2/2 | 70/1
<helm-release-name>-scpc-configuration | 1/1 | 2/2 | 70/1
<helm-release-name>-scpc-alternate-resolution | 1/1 | 2/2 | 70/1
<helm-release-name>-scp-cache | 1/1 | 2/2 | 70/1
<helm-release-name>-scp-nrfproxy | 1/1 | 2/2 | 70/1
<helm-release-name>-scp-load-manager | 1/1 | 2/2 | 70/1
<helm-release-name>-scp-oauth-nrfproxy | 1/1 | 2/2 | 70/1
<helm-release-name>-scp-worker(profile 1) | 1/1 | 2/2 | 70/1
<helm-release-name>-scp-worker(profile 2) | 1/1 | 2/2 | 70/1
<helm-release-name>-scp-mediation | 1/1 | 2/2 | 70/1
<helm-release-name>-scp-mediation test | 1/1 | 2/2 | 70/1
<helm-release-name>-scp-worker (profile 3) | 1/1 | 2/2 | 70/1

Note:

<helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is OCSCP, the SCPC-Subscription microservice name is "OCSCP-SCPC-Subscription".
2.1.3.5 CNC Console

Oracle Communications Cloud Native Configuration Console (CNC Console) is a Graphical User Interface (GUI) for NFs and Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) common services. For information about CNC Console resources required by SCP, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.

2.1.3.6 cnDBTier Resources

This section describes the cnDBTier resources required to deploy SCP.

Table 2-8 cnDBTier Services Resource Requirements

Service Name | CPU per Pod (Min/Max) | Memory per Pod in GB (Min/Max) | PVC Size in GB (PVC1/PVC2) | Ephemeral Storage (Min in MB/Max in MB)
MGMT (ndbmgmd) | 2/2 | 4/5 | 14/NA | 90/1000
DB (ndbmtd) | 2/2 | 8/8 | 15/8 | 90/1000
SQL - Replication (ndbmysqld) | 4/4 | 10/10 | 25/NA | 90/1000
SQL - Access (ndbappmysqld) | 4/4 | 8/8 | 20/NA | 90/1000
Monitor Service (db-monitor-svc) | 4/4 | 4/4 | 0/NA | 90/1000
db-connectivity-service | 0/0 | 0/0 | 0/NA | 0/0
Replication Service (db-replication-svc) | 2/2 | 12/12 | 190/NA | 90/1000
Replication Service - Other (db-replication-svc) | 0.6/1 | 1/2 | NA/NA | 90/1000
Backup Manager Service (db-backup-manager-svc) | 0.1/0.1 | 0.128/0.128 | 0/NA | 90/1000

cnDBTier Sidecars with ASM

The following table indicates the sidecars with ASM for cnDBTier services.

Table 2-9 Sidecars with ASM per cnDBTier Service

Service Name | CPU per Pod (Min/Max) | Memory per Pod in GB (Min/Max) | Ephemeral Storage (Min in MB/Max in MB)
MGMT (ndbmgmd) | 1.2/1.2 | 1.256/1.256 | 90/100
DB (ndbmtd) | 2.2/2.2 | 3.256/3.256 | 180/2100
SQL - Replication (ndbmysqld) | 2.3/2.3 | 2.512/2.512 | 180/1100
SQL - Access (ndbappmysqld) | 2.3/2.3 | 2.512/2.512 | 180/1100
Monitor Service (db-monitor-svc) | 1/1 | 1/1 | 0/0
db-connectivity-service | -/- | -/- | -/-
Replication Service - Leader (db-replication-svc) | 1.2/1.2 | 1.5/1.5 | 90/1000
Replication Service - Other (db-replication-svc) | 1.2/1.2 | 1.5/1.5 | -/-
Backup Manager Service (db-backup-manager-svc) | 1/1 | 1/1 | 0/0

cnDBTier Sidecars without ASM

The following table indicates the sidecars without ASM for cnDBTier services.

Table 2-10 Sidecars without ASM per cnDBTier Service

Service Name | CPU per Pod (Min/Max) | Memory per Pod in GB (Min/Max) | Ephemeral Storage (Min in MB/Max in MB)
MGMT (ndbmgmd) | 0.2/0.2 | 0.256/0.256 | 90/100
DB (ndbmtd) | 1.2/1.2 | 2.256/2.256 | 180/2100
SQL - Replication (ndbmysqld) | 0.3/0.3 | 0.512/0.512 | 180/1100
SQL - Access (ndbappmysqld) | 0.3/0.3 | 0.512/0.512 | 180/1100
Monitor Service (db-monitor-svc) | 0/0 | 0/0 | 0/0
db-connectivity-service | -/- | -/- | -/-
Replication Service - Leader (db-replication-svc) | 0.2/0.2 | 0.5/0.5 | 90/1000
Replication Service - Other (db-replication-svc) | 0.2/0.2 | 0.5/0.5 | -/-
Backup Manager Service (db-backup-manager-svc) | 0/0 | 0/0 | 0/0
2.1.3.7 OSO Resources
This section describes the OSO resources required to deploy SCP.

Table 2-11 OSO Resource Requirement

Microservice Name | CPU (Min/Max) | Memory in GB (Min/Max) | Replicas
prom-alertmanager | 0.5/0.5 | 2/2 | 2
prom-server | 16/16 | 64/64 | 1
2.1.3.8 OCCM Resources

OCCM manages certificate creation, recreation, renewal, and so on for SCP. For information about OCCM resources required by SCP, see Oracle Communications Cloud Native Core, Certificate Management Installation, Upgrade, and Fault Recovery Guide.

2.2 Installation Sequence

This section describes preinstallation, installation, and postinstallation tasks for SCP.

You must perform these tasks after completing Prerequisites and in the same sequence as outlined in the following table.

Table 2-12 SCP Installation Sequence

Installation Sequence | Applicable for CNE Deployment | Applicable for OCI Deployment
Preinstallation Tasks | Yes | Yes
Installation Tasks | Yes | Yes
Postinstallation Tasks | Yes | Yes

2.2.1 Preinstallation Tasks

To install SCP, perform the tasks described in this section.

2.2.1.1 Downloading the SCP Package

To download the SCP package from My Oracle Support (MOS), perform the following procedure:

  1. Log in to My Oracle Support (MOS) using your login credentials.
  2. Click the Patches & Updates tab to locate the patch.
  3. In the Patch Search console, click Product or Family (Advanced).
  4. In the Product field, enter Oracle Communications Cloud Native Core - 5G.
  5. From the Release drop-down list, select Oracle Communications Cloud Native Core Service Communication Proxy <release_number>.

    Where, <release_number> indicates the required release number of SCP.

  6. Click Search.

    The Patch Advanced Search Results list appears.

  7. From the Patch Name column, select the required patch number.

    The Patch Details window appears.

  8. Click Download.

    The File Download window appears.

  9. Click the <p********>_<release_number>_Tekelec.zip file to download the release package.

    Where, <p********> is the MOS patch number and <release_number> is the release number of SCP.

2.2.1.2 Pushing the Images to Customer Docker Registry

SCP Images

The SCP deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.

The following table lists the Docker images of SCP:

Table 2-13 Images for SCP

Microservice | Image | Tag
<helm-release-name>-SCP-Worker | ocscp-worker | 25.2.100
<helm-release-name>-SCPC-Configuration | ocscp-configuration | 25.2.100
<helm-release-name>-SCPC-Notification | ocscp-notification | 25.2.100
<helm-release-name>-SCPC-Subscription | ocscp-subscription | 25.2.100
<helm-release-name>-SCPC-Audit | ocscp-audit | 25.2.100
<helm-release-name>-SCPC-Alternate-Resolution | ocscp-alternate-resolution | 25.2.100
<helm-release-name>-SCP-Cache | ocscp-cache | 25.2.100
<helm-release-name>-SCP-nrfproxy | ocscp-nrfproxy | 25.2.100
<helm-release-name>-SCP-nrfProxy-oauth | ocscp-nrfproxy-oauth | 25.2.100
<helm-release-name>-SCP-Mediation | ocmed-nfmediation | 25.2.100
<helm-release-name>-SCP-loadManager | ocscp-load-manager | 25.2.100

Note:

<helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is OCSCP, the SCPC-Subscription microservice name is "OCSCP-SCPC-Subscription".

To push the images to the registry:

  1. Navigate to the location where you want to install SCP, and then unzip the SCP release package (<p********>_<release_number>_Tekelec.zip) to retrieve the following CSAR package.

    The SCP package is as follows: <ReleaseName>_csar_<Releasenumber>.zip.

    Where,

    <ReleaseName> is a name that is used to track this installation instance.

    <Releasenumber> is the release number.

    For example, ocscp_csar_25_2_1_0_0_0.zip.
  2. Unzip the SCP package to retrieve the OCSCP image tar files: unzip <ReleaseName>_csar_<Releasenumber>.zip.

    For example, unzip ocscp_csar_25_2_1_0_0_0.zip

    The zip file consists of the following:

    
    ├── Definitions
    │   ├── ocscp_cne_compatibility.yaml
    │   └── ocscp.yaml
    ├── Files
    │   ├── ChangeLog.txt
    │   ├── Helm
    │   │   ├── ocscp-25.2.100.tgz
    │   │   └── ocscp-network-policy-25.2.100.tgz
    │   ├── Licenses
    │   ├── nf-test-25.2.100.tar
    │   ├── ocdebug-tools-25.2.100.tar
    │   ├── ocmed-nfmediation-25.2.100.tar
    │   ├── ocscp-alternate-resolution-25.2.100.tar
    │   ├── ocscp-audit-25.2.100.tar
    │   ├── ocscp-cache-25.2.100.tar
    │   ├── ocscp-configuration-25.2.100.tar
    │   ├── ocscp-load-manager-25.2.100.tar
    │   ├── ocscp-notification-25.2.100.tar
    │   ├── ocscp-nrfproxy-25.2.100.tar
    │   ├── ocscp-subscription-25.2.100.tar
    │   ├── ocscp-nrfProxy-oauth-25.2.100.tar
    │   ├── ocscp-worker-25.2.100.tar
    │   ├── Oracle.cert
    │   └── Tests
    ├── ocscp.mf
    ├── Scripts
    │   ├── oci
    │   │   ├── ocscp_oci_alertrules_25.2.100.zip
    │   │   └── ocscp_oci_metric_dashboard_25.2.100.zip
    │   ├── ocscp_alerting_rules_promha.yaml
    │   ├── ocscp_alertrules.yaml
    │   ├── ocscp_configuration_openapi_25.2.100.json
    │   ├── ocscp_custom_values_25.2.100.yaml
    │   ├── ocscp_dbtier_25.2.100_custom_values_25.2.100.yaml
    │   ├── ocscp_metric_dashboard_25.2.100.json
    │   ├── ocscp_metric_dashboard_promha_25.2.100.json
    │   ├── ocscp_mib_25.2.100.mib
    │   ├── ocscp_mib_tc_25.2.100.mib
    │   ├── ocscp_network_policies_values_25.2.100.yaml
    │   ├── ocscp_servicemesh_config_values_25.2.100.yaml
    │   └── toplevel.mib
    └── TOSCA-Metadata
        └── TOSCA.meta
  3. Open the Files folder and run one of the following commands to load ocscp-images-25.2.100.tar:
    podman load --input /IMAGE_PATH/ocscp-images-<release_number>.tar
    docker load --input /IMAGE_PATH/ocscp-images-<release_number>.tar

    Example:

    docker load --input /IMAGE_PATH/ocscp-images-25.2.100.tar
  4. Run one of the following commands to verify that the images are loaded:
    podman images
    docker images

    Sample Output:

    docker.io/ocscp/ocscp-cache                           25.2.100   98fc90defb56        2 hours ago         725MB
    docker.io/ocscp/ocscp-nrfproxy-oauth                  25.2.100   0d92bfbf7c14        2 hours ago         720MB
    docker.io/ocscp/ocscp-configuration                   25.2.100   f23cddb3ec83        2 hours ago         725MB
    docker.io/ocscp/ocscp-worker                          25.2.100   16c8f423c3b9        2 hours ago         877MB
    docker.io/ocscp/ocscp-load-manager                    25.2.100   dab875c4179a        2 hours ago         724MB
    docker.io/ocscp/ocscp-nrfproxy                        25.2.100   85029929a670        2 hours ago         690MB
    docker.io/ocscp/ocscp-alternate-resolution            25.2.100   2c38646f8bd7        2 hours ago         695MB
    docker.io/ocscp/ocscp-audit                           25.2.100   039e25297115        2 hours ago         694MB
    docker.io/ocscp/ocscp-notification                    25.2.100   a21e6bed6177        2 hours ago         710MB
    docker.io/ocscp/ocmed-nfmediation                     25.2.100   772e01a41584        2 hours ago         710MB
  5. Verify the list of images shown in the output with the list of images shown in Table 2-13. If the list does not match, reload the image tar file.
  6. Run one of the following commands to tag the images to the registry:
    podman tag <image-name>:<image-tag> <podman-repo>/<image-name>:<image-tag>
    docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    Where,
    • <image-name> is the image name.
    • <image-tag> is the image release number.
    • <docker-repo> is the Docker registry address, including the port number if the registry has one. This is the repository where the images are stored.
    • <podman-repo> is the Podman registry address, including the port number if the registry has one. This is the repository where the images are stored.
  7. Run one of the following commands to push the image to the registry:
    podman push <podman-repo>/<image-name>:<image-tag>
    docker push <docker-repo>/<image-name>:<image-tag>

Note:

It is recommended to configure the Docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the docker push command may fail.
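
As an end-to-end illustration, tagging and pushing a single image may look like the following; CUSTOMER_REPO is a placeholder registry path, as in the earlier example:

# Example only: CUSTOMER_REPO is a placeholder registry path
docker tag ocscp/ocscp-worker:25.2.100 CUSTOMER_REPO/ocscp/ocscp-worker:25.2.100
docker push CUSTOMER_REPO/ocscp/ocscp-worker:25.2.100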
2.2.1.3 Pushing the SCP Images to OCI Docker Registry

SCP Images

The SCP deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.

The following table lists the Docker images of SCP:

Table 2-14 Images for SCP

Microservice | Image | Tag
<helm-release-name>-SCP-Worker | ocscp-worker | 25.2.100
<helm-release-name>-SCPC-Configuration | ocscp-configuration | 25.2.100
<helm-release-name>-SCPC-Notification | ocscp-notification | 25.2.100
<helm-release-name>-SCPC-Subscription | ocscp-subscription | 25.2.100
<helm-release-name>-SCPC-Audit | ocscp-audit | 25.2.100
<helm-release-name>-SCPC-Alternate-Resolution | ocscp-alternate-resolution | 25.2.100
<helm-release-name>-SCP-Cache | ocscp-cache | 25.2.100
<helm-release-name>-SCP-nrfproxy | ocscp-nrfproxy | 25.2.100
<helm-release-name>-SCP-nrfProxy-oauth | ocscp-nrfproxy-oauth | 25.2.100
<helm-release-name>-SCP-Mediation | ocmed-nfmediation | 25.2.100
<helm-release-name>-SCP-loadManager | ocscp-load-manager | 25.2.100

Note:

<helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is OCSCP, the SCPC-Subscription microservice name is "OCSCP-SCPC-Subscription".

To push the images to the registry:

  1. Navigate to the location where you want to install SCP, and then unzip the SCP release package (<p********>_<release_number>_Tekelec.zip) to retrieve the following CSAR package.

    The SCP package is as follows: <ReleaseName>_csar_<Releasenumber>.zip.

    Where,

    <ReleaseName> is a name that is used to track this installation instance.

    <Releasenumber> is the release number.

    For example, ocscp_csar_25_2_1_0_0_0.zip.
  2. Unzip the SCP package to retrieve the OCSCP image tar files: unzip <ReleaseName>_csar_<Releasenumber>.zip.

    For example, unzip ocscp_csar_25_2_1_0_0_0.zip

    The zip file consists of the following:

    
    ├── Definitions
    │   ├── ocscp_cne_compatibility.yaml
    │   └── ocscp.yaml
    ├── Files
    │   ├── ChangeLog.txt
    │   ├── Helm
    │   │   ├── ocscp-25.2.100.tgz
    │   │   └── ocscp-network-policy-25.2.100.tgz
    │   ├── Licenses
    │   ├── nf-test-25.2.100.tar
    │   ├── ocdebug-tools-25.2.100.tar
    │   ├── ocmed-nfmediation-25.2.100.tar
    │   ├── ocscp-alternate-resolution-25.2.100.tar
    │   ├── ocscp-audit-25.2.100.tar
    │   ├── ocscp-cache-25.2.100.tar
    │   ├── ocscp-configuration-25.2.100.tar
    │   ├── ocscp-load-manager-25.2.100.tar
    │   ├── ocscp-notification-25.2.100.tar
    │   ├── ocscp-nrfproxy-25.2.100.tar
    │   ├── ocscp-subscription-25.2.100.tar
    │   ├── ocscp-nrfProxy-oauth-25.2.100.tar
    │   ├── ocscp-worker-25.2.100.tar
    │   ├── Oracle.cert
    │   └── Tests
    ├── ocscp.mf
    ├── Scripts
    │   ├── oci
    │   │   ├── ocscp_oci_alertrules_25.2.100.zip
    │   │   └── ocscp_oci_metric_dashboard_25.2.100.zip
    │   ├── ocscp_alerting_rules_promha.yaml
    │   ├── ocscp_alertrules.yaml
    │   ├── ocscp_configuration_openapi_25.2.100.json
    │   ├── ocscp_custom_values_25.2.100.yaml
    │   ├── ocscp_dbtier_25.2.100_custom_values_25.2.100.yaml
    │   ├── ocscp_metric_dashboard_25.2.100.json
    │   ├── ocscp_metric_dashboard_promha_25.2.100.json
    │   ├── ocscp_mib_25.2.100.mib
    │   ├── ocscp_mib_tc_25.2.100.mib
    │   ├── ocscp_network_policies_values_25.2.100.yaml
    │   ├── ocscp_servicemesh_config_values_25.2.100.yaml
    │   └── toplevel.mib
    └── TOSCA-Metadata
        └── TOSCA.meta
  3. Open the Files folder and run one of the following commands to load ocscp-images-25.2.100.tar:
    podman load --input /IMAGE_PATH/ocscp-images-<release_number>.tar
    docker load --input /IMAGE_PATH/ocscp-images-<release_number>.tar

    Example:

    docker load --input /IMAGE_PATH/ocscp-images-25.2.100.tar
  4. Run one of the following commands to verify that the images are loaded:
    podman images
    docker images

    Sample Output:

    docker.io/ocscp/ocscp-cache                           25.2.100   98fc90defb56        2 hours ago         725MB
    docker.io/ocscp/ocscp-nrfproxy-oauth                  25.2.100   0d92bfbf7c14        2 hours ago         720MB
    docker.io/ocscp/ocscp-configuration                   25.2.100   f23cddb3ec83        2 hours ago         725MB
    docker.io/ocscp/ocscp-worker                          25.2.100   16c8f423c3b9        2 hours ago         877MB
    docker.io/ocscp/ocscp-load-manager                    25.2.100   dab875c4179a        2 hours ago         724MB
    docker.io/ocscp/ocscp-nrfproxy                        25.2.100   85029929a670        2 hours ago         690MB
    docker.io/ocscp/ocscp-alternate-resolution            25.2.100   2c38646f8bd7        2 hours ago         695MB
    docker.io/ocscp/ocscp-audit                           25.2.100   039e25297115        2 hours ago         694MB
    docker.io/ocscp/ocscp-notification                    25.2.100   a21e6bed6177        2 hours ago         710MB
    docker.io/ocscp/ocmed-nfmediation                     25.2.100   772e01a41584        2 hours ago         710MB
  5. Verify the list of images shown in the output with the list of images shown in Table 2-14. If the list does not match, reload the image tar file.
  6. Run the following commands to log in to the OCI registry:
    podman login -u <REGISTRY_USERNAME> -p <REGISTRY_PASSWORD> <REGISTRY_NAME>
    docker login -u <REGISTRY_USERNAME> -p <REGISTRY_PASSWORD> <REGISTRY_NAME>

    Where,

    • <REGISTRY_NAME> is <Region_Key>.ocir.io.
    • <REGISTRY_USERNAME> is <Object Storage Namespace>/<identity_domain>/email_id.
    • <REGISTRY_PASSWORD> is the Auth Token generated by the user.

      For more information about OCIR configuration and creating auth token, see Oracle Communications Cloud Native Core, OCI Deployment Guide.

    • <Object Storage Namespace> can be obtained from the OCI Console by navigating to Governance & Administration > Account Management > Tenancy Details > Object Storage Namespace.
    • <Identity Domain> is the domain of the user.
    • In OCI, each region is associated with a key. For more information, see Regions and Availability Domains.
  7. Run one of the following commands to tag the images to the registry:
    podman tag <image-name>:<image-tag> <podman-repo>/<image-name>:<image-tag>
    docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    Where,
    • <image-name> is the image name.
    • <image-tag> is the image release number.
    • <docker-repo> is the Docker registry address, including the port number if the registry has one. This is the repository where the images are stored.
    • <podman-repo> is the Podman registry address, including the port number if the registry has one. This is the repository where the images are stored.
  8. Run one of the following commands to push the image:
    podman push <oci-repo>/<image-name>:<image-tag>
    docker push <oci-repo>/<image-name>:<image-tag>

    Where, <oci-repo> is the OCI registry path (see the end-to-end example at the end of this procedure).

  9. Make all the image repositories public by performing the following steps:

    Note:

    All the image repositories must be public.
    1. Log in to the OCI Console using your login credentials.
    2. From the left navigation pane, click Developer Services.
    3. On the preview pane, click Container Registry.
    4. From the Compartment drop-down list, select networkfunctions5G (root).
    5. From the Repositories and images drop-down list, select the required image and click Change to Public.

      The image details are displayed under the Repository information tab, and the repository changes to public. For example, the 25.2.100db/occne/cndbtier-mysqlndb-client (Private) changes to 25.2.100db/occne/cndbtier-mysqlndb-client (Public).

    6. Repeat substep 9e to make all image repositories public.
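
As an end-to-end illustration, logging in to OCIR and pushing one image may look like the following; the region key (iad), tenancy namespace (mytenancy), and user are hypothetical values:

# Hypothetical values: iad is the region key, mytenancy is the Object Storage Namespace
docker login -u mytenancy/oracleidentitycloudservice/user@example.com iad.ocir.io
docker tag ocscp/ocscp-worker:25.2.100 iad.ocir.io/mytenancy/ocscp/ocscp-worker:25.2.100
docker push iad.ocir.io/mytenancy/ocscp/ocscp-worker:25.2.100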
2.2.1.4 Verifying and Creating Namespace
To verify and create a namespace:

Note:

This is a mandatory procedure; run it before proceeding further with the installation. The namespace created or verified in this procedure is an input for the subsequent procedures.
  1. Run the following command to verify if the required namespace already exists in the system:
    kubectl get namespaces

    In the output of the above command, if the namespace exists, continue with Manually Creating Service Account, Role, and Rolebinding.

  2. If the required namespace is unavailable, create the namespace by running the following command:
    kubectl create namespace <required namespace>

    Where, <required namespace> is the name of the namespace.

    For example, the following command creates the namespace, ocscp:

    kubectl create namespace ocscp
  3. Update the namespace for the required deployment Helm parameters as described in Configuration Parameters.

    Naming Convention for Namespaces

    The namespace should:

    • start and end with an alphanumeric character.
    • contain 63 characters or less.
    • contain only alphanumeric characters or '-'.

    Note:

    It is recommended to avoid using the prefix kube- when creating a namespace. The prefix is reserved for Kubernetes system namespaces.
2.2.1.5 Manually Creating Service Account, Role, and Rolebinding

This section is optional. It describes how to manually create a service account, role, and rolebinding, and is required only when the customer needs to create these resources manually before installing SCP.

Note:

The secrets must exist in the same namespace where SCP is deployed. This helps bind the Kubernetes role with the given service account.
  1. Run the following command to create an SCP resource file:
    vi <ocscp-resource-file>

    Example:

    vi ocscp-resource-template.yaml

  2. Update the ocscp-resource-template.yaml file with release-specific information:

    A sample template to update the ocscp-resource-template.yaml file is as follows (a skeleton showing how these resources fit together appears after this procedure):

    rules:
    # "" indicates the core API group. These resources are Helm test dependencies:
    # services, configmaps, pods, secrets, endpoints, persistentvolumeclaims, serviceaccounts.
    - apiGroups: [""]
      resources:
      - services
      - configmaps
      - pods
      - secrets
      - endpoints
      - persistentvolumeclaims
      - serviceaccounts
      # delete is added to perform a rolling restart of cache pods.
      verbs: ["get", "list", "watch", "delete"]
    # API groups added for Helm test dependencies: apps, autoscaling,
    # rbac.authorization.k8s.io, and monitoring.coreos.com.
    - apiGroups:
      - apps
      resources:
      - deployments
      verbs:
      - get
      - watch
      - list
    - apiGroups:
      - autoscaling
      resources:
      - horizontalpodautoscalers
      verbs:
      - get
      - watch
      - list
    - apiGroups:
      - rbac.authorization.k8s.io
      resources:
      - roles
      - rolebindings
      verbs:
      - get
      - watch
      - list
    - apiGroups:
      - monitoring.coreos.com
      resources:
      - prometheusrules
      verbs:
      - get
      - watch
      - list
  3. Run the following command to create service account, role, and role binding:
    kubectl -n <ocscp-namespace> create -f ocscp-resource-template.yaml

    Example:

    kubectl -n ocscp create -f ocscp-resource-template.yaml

  4. Update the scpServiceAccountName parameter in the ocscp_values_25.2.100.yaml file with the value set in the name field under kind: ServiceAccount.

    Sample configuration:

    global:
      #Keyname to give custom service account name
      scpServiceAccountName: &scpServiceAccountName "<custom_service_account_name>"
      #Flag for auto-creation of resources, disabled by default
      autoCreateResources:
        enabled: true
        serviceAccounts:
          create: false #internal flag to decide if the following resources should be automated
          accounts:
            - scpServiceAccountName: *scpServiceAccountName
              type: SCP
  5. Set autoCreateResources.enabled to true and serviceAccounts.create to false so that SCP does not create a service account and instead uses the service account created by the user.

    For more information about the scpServiceAccountName, autoCreateResources.enabled, and serviceAccounts.create parameters, see Global Parameters.
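
The rules shown in step 2 form only the rules section of the Role. The following minimal skeleton, in which all names and the namespace are illustrative rather than values mandated by SCP, shows how the ServiceAccount, Role, and RoleBinding fit together:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ocscp-serviceaccount   # illustrative name
  namespace: ocscp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ocscp-role
  namespace: ocscp
rules: []   # replace with the rules shown in step 2
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ocscp-rolebinding
  namespace: ocscp
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ocscp-role
subjects:
- kind: ServiceAccount
  name: ocscp-serviceaccount
  namespace: ocscp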

2.2.1.6 Automatically Creating Service Account, Role, and Rolebinding
This section describes how to automatically create a service account, role, and role binding by enabling the following Helm parameters:
  • Global parameter (autoCreateResources.enabled): Controls the overall automation of resource creation. This parameter is disabled by default and must be set to true to enable automation.
  • Resource-specific Parameter (serviceAccounts.create): Controls service account creation at the resource level. This parameter is enabled by default. Ensure that it is set to true alongside the global parameter to enable ServiceAccount automation. This parameter is conditional on the global parameter and will only take effect if the global parameter is set to true. If this parameter is disabled, you must create the service accounts manually. The role and role binding resources are created along with the service account as part of this automation.
The service account automation is disabled by default in the custom-values.yaml file. Perform the following procedure to enable the service account automation:

Note:

You must perform the following procedure during the upgrade.
  1. Provide the scpServiceAccountName parameter in the custom-values.yaml file to create the service account.
  2. Enable autoCreateResources.enabled and serviceAccounts.create parameters in the global section of the custom-values.yaml file.
  3. Perform Helm installation.

    The service account is created with the name provided in the custom-values.yaml file.
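
    For example, the Helm installation in the previous step might be run as follows; the release name and namespace are placeholders:

    helm install <helm-release-name> ocscp-25.2.100.tgz -n <namespace> -f ocscp_custom_values_25.2.100.yaml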

The existing scpServiceAccountName parameter in the custom-values.yaml file is used for service account automation to minimize the changes in the custom-values.yaml file.
global:
 
  #Keyname to give custom service account name
  scpServiceAccountName: &scpServiceAccountName ""
 
  #Flag for auto-creation of resource, disabled by default
  autoCreateResources:
    enabled: false
 
    serviceAccounts:
      create: true #internal flag to decide if the following resources should be automated
      accounts:
        - scpServiceAccountName: *scpServiceAccountName
          type: SCP

For more information about the scpServiceAccountName, autoCreateResources.enabled, and serviceAccounts.create parameters, see Global Parameters.

The following table describes service account creation using different combinations of Helm parameters:

Table 2-15 Service Account Creation using Different Combinations

Parameter | scpServiceAccountName | Result
autoCreateResources.enabled: true, serviceAccounts.create: true | Provided | The service account is created or updated with the provided scpServiceAccountName.
autoCreateResources.enabled: true, serviceAccounts.create: true | Not provided | The service account is created or updated with .Release.name.
autoCreateResources.enabled: true, serviceAccounts.create: false | Provided | The service account is not created. It must be created manually.
autoCreateResources.enabled: true, serviceAccounts.create: false | Not provided | The deployment fails. The scpServiceAccountName parameter is mandatory.
autoCreateResources.enabled: false, serviceAccounts.create: true or false | Provided | The service account is not created. It must be created manually.
autoCreateResources.enabled: false, serviceAccounts.create: true or false | Not provided | The service account is created or updated with .Release.name.

Note:

  • If the scpServiceAccountName is set during the installation but omitted during the upgrade, a new service account is created using .Release.name. If scpServiceAccountName was missing during the installation but provided during the upgrade, a new service account is created with the specified name. The original service account will not be present in the current release. However, it will be restored if the release is rolled back to its previous version.
  • When upgrading from a version that supports automated service account creation without Helm automation to a version with Helm automation, you must retain the same service names in the custom-values.yaml file.
  • If you are upgrading from an existing version that uses manually created single or multiple service accounts to a version that supports automated service account creation, where one or more service accounts are automatically generated per component, you must specify the new service account names in the custom-values.yaml file to switch to these automated service accounts. This task triggers the creation of the new service accounts, roles, and role bindings through the Helm charts. The old service accounts should be retained for rollback scenarios and should only be removed when SCP is uninstalled as part of the cleanup process.

Sample configuration of service account created with .Release.name:

global:
  #Keyname to give custom service account name
  scpServiceAccountName: &scpServiceAccountName ""
  #Flag for auto-creation of resources, disabled by default
  autoCreateResources:
    enabled: true
    serviceAccounts:
      create: true #internal flag to decide if the following resources should be automated
      accounts:
        - scpServiceAccountName: *scpServiceAccountName
          type: SCP

Sample configuration of service account created with <custom_name>:

Note:

Ensure that any service account with <custom_name> does not exist.
global:
  #Keyname to give custom service account name
  scpServiceAccountName: &scpServiceAccountName "<custom_name>"
  #Flag for auto-creation of resources, disabled by default
  autoCreateResources:
    enabled: true
    serviceAccounts:
      create: true #internal flag to decide if the following resources should be automated
      accounts:
        - scpServiceAccountName: *scpServiceAccountName
          type: SCP

Role and RoleBinding Name

When both autoCreateResources.enabled and serviceAccounts.create are enabled, and scpServiceAccountName is provided or left blank, the Role and RoleBinding names are created as <serviceAccountName>-role and <serviceAccountName>-rolebinding, respectively.

Hook Lifecycle

With service account automation enabled, hook-related service accounts are created separately for each hook phase (Preupgrade and Prerollback). These service accounts are managed by Helm and are automatically removed when the corresponding hook or job is complete. This is achieved by applying the required Helm hook annotations to the service accounts. To avoid conflicts, hook service accounts use a -hook suffix and must not have the same name as the primary pod service account.
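
For illustration, the Helm hook annotations that produce this lifecycle are of the following form; the service account name is hypothetical, and the exact annotations used by the charts may differ:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ocscp-preupgrade-hook   # hypothetical hook service account name
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded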

2.2.1.7 Configuring Database for SCP
This section explains how database administrators can create users and databases in single-site and multisite deployments.

Note:

While performing a fresh installation, if SCP is already deployed, purge the deployment and remove the database and users that were used for the previous deployment. For the uninstallation procedure, see Uninstalling SCP.
  1. Log in to the MySQL server and ensure that there is a privileged user (<privileged user>) with the privileges similar to a root user.
  2. On each SQL node, run the following command to verify that the privileged user has the required permissions to allow connections from remote hosts:
    mysql> select host from mysql.user where User='<privileged username>';
    +------+
    | host |
    +------+
    | %    |
    +------+
    1 row in set (0.00 sec)
  3. If you do not see '%' in the output of the above query, run the following command to modify this field to allow connections from remote hosts:
    mysql> update mysql.user set host='%' where User='<privileged username>';
    Query OK, 0 rows affected (0.00 sec)
    Rows matched: 1  Changed: 0  Warnings: 0

    mysql> flush privileges;
    Query OK, 0 rows affected (0.06 sec)

    Note:

    Perform this step on each SQL node.
  4. To automatically create an application user, backup database, and application database, ensure that the createUser parameter in the ocscp_values.yaml file is set to true. To manually create an application user, application database, and backup database, set the createUser parameter to false in the ocscp_values.yaml file.

    By default, the createUser parameter value is set to true. For more information about this parameter, see Table 3-1.

  5. Run the following commands to create an application and backup database:
    • For application database:
      CREATE DATABASE <scp_dbname>;

      Example:

      CREATE DATABASE ocscpdb;
    • For backup database:
      CREATE DATABASE <scp_backupdbname>;

      Example:

      CREATE DATABASE ocscpbackupdb;
  6. Run the following command to create an application user and assign privileges:
    CREATE USER '<username>'@'%' IDENTIFIED BY '<password>';
    GRANT SELECT, INSERT, DELETE, UPDATE ON <scp_dbname>.* TO <username>@'%';
    Where,
    • <scp_dbname> is the database name.
    • <username> is the database username.

    Example:

    CREATE USER 'scpApplicationUsr'@'%' IDENTIFIED BY 'scpApplicationPasswd';
    GRANT SELECT, INSERT, DELETE, UPDATE ON ocscpdb.* TO scpApplicationUsr@'%';
  7. Run the following command to grant NDB_STORED_USER permission to the application user:
    GRANT NDB_STORED_USER ON *.* TO '<username>'@'%' WITH GRANT OPTION;

    Example:

    GRANT NDB_STORED_USER ON *.* TO 'scpApplicationUsr'@'%' WITH GRANT OPTION;

    Note:

    Before a fresh SCP installation, if the application database and backup database exist from a previous deployment, remove them manually by running the following command:
    drop database <dbname>;
2.2.1.8 Configuring Kubernetes Secret for Accessing Database

This section explains how to configure Kubernetes secrets for accessing the SCP database.

Note:

Do not use the same credentials in different Kubernetes secrets. The passwords stored in the secrets must follow the password policy requirements as recommended in "Changing cnDBTier Passwords" in Oracle Communications Cloud Native Core Security Guide.
2.2.1.8.1 Creating and Updating Secret for Privileged Database User
This section explains how to create and update Kubernetes secret for privileged user to access the database.
  1. Run the following command to create Kubernetes secret:
    kubectl create secret generic <secret name> --from-literal=DB_USERNAME=<privileged user> --from-literal=DB_PASSWORD=<privileged user password> --from-literal=DB_NAME=<scp application db> --from-literal=RELEASE_DB_NAME=<scp backup db> -n <scp namespace>
    Where,
    • <secret name> is the secret name of the Privileged User.
    • <privileged user> is the username of the Privileged User.
    • <privileged user password> is the password of the Privileged User.
    • <scp application db> is the application database name.
    • <scp backup db> is the backup database name.
    • <scp namespace> is the namespace of the SCP deployment.

    Note:

    Note down the command used during the creation of the Kubernetes secret. This command is used for updating the secret in later releases.

    Example:

    kubectl create secret generic privilegeduser-secret --from-literal=DB_USERNAME=scpPrivilegedUsr --from-literal=DB_PASSWORD=scpPrivilegedPasswd --from-literal=DB_NAME=ocscpdb --from-literal=RELEASE_DB_NAME=ocscpbackupdb -n scpsvc
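    If you need to update the secret later, one common approach (a generic kubectl pattern, not an SCP-specific command) is to regenerate the secret manifest with the same create command and apply it over the existing secret:

    kubectl create secret generic privilegeduser-secret \
      --from-literal=DB_USERNAME=scpPrivilegedUsr \
      --from-literal=DB_PASSWORD=<new password> \
      --from-literal=DB_NAME=ocscpdb \
      --from-literal=RELEASE_DB_NAME=ocscpbackupdb \
      -n scpsvc --dry-run=client -o yaml | kubectl apply -f -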

  2. Run the following command to verify the secret created:
    kubectl describe secret <secret name> -n <scp namespace>
    Where,
    • <secret name> is the secret name of the Privileged User.
    • <scp namespace> is the namespace of SCP deployment.

    Example:

    kubectl describe secret privilegeduser-secret -n ocscp

    Sample output:

    Name:         privilegeduser-secret
    Namespace:    ocscp
    Labels:       <none>
    Annotations:  <none>
    
    Type:  Opaque
    
    Data
    ====
    DB_NAME:          7 bytes
    DB_PASSWORD:      19 bytes
    DB_USERNAME:      16 bytes
    RELEASE_DB_NAME:  13 bytes
    
2.2.1.8.2 Creating and Updating Secret for Application Database User
This section explains how to create and update Kubernetes secret for application user to access the database.
  1. Run the following command to create a Kubernetes secret:
    kubectl create secret generic <secret name> --from-literal=DB_USERNAME=<application user> --from-literal=DB_PASSWORD=<application user password> --from-literal=DB_NAME=<scp application db> -n <scp namespace>
    Where,
    • <secret name> is the secret name of the Application User.
    • <application user> is the username of the Application User.
    • <application user password> is the password of the Application User.
    • <scp application db> is the application database name.
    • <scp namespace> is the namespace of SCP deployment.

    Note:

    Note down the command used during the creation of the Kubernetes secret. This command is used for updating the secret in later releases.

    Example:

    kubectl create secret generic appuser-secret --from-literal=DB_USERNAME=scpApplicationUsr --from-literal=DB_PASSWORD=scpApplicationPasswd --from-literal=DB_NAME=ocscpdb -n scpsvc

  2. Run the following command to verify the secret created:
    kubectl describe secret <application user secret name> -n <scp namespace>
    Where,
    • <application user secret name> is the secret name of the Application User.
    • <scp namespace> is the namespace of the SCP deployment.

    Example:

    kubectl describe secret appuser-secret -n ocscp

    Sample output:

    Name:         appuser-secret
    Namespace:    ocscp
    Labels:       <none>
    Annotations:  <none>
    
    Type:  Opaque
    
    Data
    ====
    DB_NAME:      7 bytes
    DB_PASSWORD:  20 bytes
    DB_USERNAME:  17 bytes
    
2.2.1.9 Configuring SSL or TLS Certificates to Enable HTTPS

The Secure Sockets Layer (SSL) and Transport Layer Security (TLS) certificates must be configured in SCP to enable Hypertext Transfer Protocol Secure (HTTPS). These certificates must be stored in a Kubernetes secret, and the secret name must be provided in the sbiProxySslConfigurations section of the custom-values.yaml file.

Perform the following procedure to configure SSL or TLS certificates for enabling HTTPS in SCP. You must perform this procedure before:
  • a fresh installation of SCP.
  • an SCP upgrade.
You must have the following files to create Kubernetes secret for HTTPS:
  • ECDSA private key and CA signed certificate of SCP if initialAlgorithm is ES256
  • RSA private key and CA signed certificate of SCP if initialAlgorithm is RS256
  • TrustStore password file
  • KeyStore password file
  • CA Root file

Note:

  • The process to create the private keys, certificates, and passwords is at the operators' discretion.
  • The passwords for TrustStore and KeyStore must be stored in the respective password files.
  • Perform this procedure before enabling HTTPS in SCP.
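Because the creation process is at the operators' discretion, the following is only a minimal openssl-based sketch of producing an RSA private key, a certificate signing request, and the password files; the subject is a placeholder, the file names match the examples used later in this section, and the CA signing step is environment-specific:

# Minimal sketch; obtain the CA-signed certificate (server_ocscp.cer) and
# CA root (server_caroot.cer) from your CA using the generated CSR.
openssl genrsa -out server_rsa_private_key_pkcs1.pem 2048
openssl req -new -key server_rsa_private_key_pkcs1.pem \
  -subj "/CN=<scp-fqdn>" -out server_ocscp.csr
echo "<keystore password>" > key.txt
echo "<truststore password>" > trust.txt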

You can create Kubernetes secret for enabling HTTPS in SCP using one of the following methods:

  • Managing Kubernetes secret manually
  • Managing Kubernetes secret through OCCM

Managing Kubernetes Secret Manually

  1. To create Kubernetes secret manually, run the following command:
    kubectl create secret generic <ocscp-secret-name> --from-file=<rsa private key file name> --from-file=<ssl truststore file name> --from-file=<ssl keystore file name> --from-file=<CA root bundle> --from-file=<ssl rsa certificate file name> -n <Namespace of OCSCP deployment>

    Note:

    • Note down the command used during the creation of Kubernetes secret. This command is used for the subsequent updates.
    • The secrets must exist in the same namespace where SCP is deployed.

    Example:

    kubectl create secret generic server-primary-ocscp-secret --from-file=server_rsa_private_key_pkcs1.pem --from-file=server_ocscp.cer --from-file=server_caroot.cer --from-file=trust.txt --from-file=key.txt -n $NAMESPACE
    kubectl create secret generic default-primary-ocscp-secret --from-file=client_rsa_private_key_pkcs1.pem --from-file=client_ocscp.cer --from-file=caroot.cer --from-file=trust.txt --from-file=key.txt -n $NAMESPACE

    Note:

    It is recommended to use the same Kubernetes secret name for the primary client and the primary server as mentioned in the example. If you change <ocscp-secret-name>, update the k8SecretName parameter under the sbiProxySslConfigurations section in the custom-values.yaml file. For more information about sbiProxySslConfigurations parameters, see Global Parameters.
  2. Run the following command to verify the Kubernetes secret created:
    kubectl describe secret <ocscp-secret-name> -n <Namespace of OCSCP deployment>

    Example:

    kubectl describe secret ocscp-secret -n ocscp

  3. Optional: Perform the following tasks to add, remove, or modify TLS or SSL certificates in Kubernetes secret:

    Note:

    You must have the certificates and files that you want to add or update in the Kubernetes secret.
    • To add a certificate, run the following command:
      TLS_CRT=$(base64 < "<certificate-name>" | tr -d '\n')
      kubectl patch secret <secret-name> -p "{\"data\":{\"<certificate-name>\":\"${TLS_CRT}\"}}"
      Where,
      • <certificate-name> is the certificate file name.
      • <secret-name> is the name of the Kubernetes secret, for example, ocscp-secret.

      Example:

      If you want to add a Certificate Authority (CA) Root from the caroot.cer file to the ocscp-secret, run the following command:

      TLS_CRT=$(base64 < "caroot.cer" | tr -d '\n')
      kubectl patch secret ocscp-secret -p "{\"data\":{\"caroot.cer\":\"${TLS_CRT}\"}}" -n scpsvc

      Similarly, you can also add other certificates and keys to the ocscp-secret.

    • To update an existing certificate, run the following command:
      TLS_CRT=$(base64 < "<updated-certificate-name>" | tr -d '\n')
      kubectl patch secret <secret-name> -p "{\"data\":{\"<certificate-name>\":\"${TLS_CRT}\"}}"

      Where, <updated-certificate-name> is the certificate file that contains the updated content.

      Example:

      If you want to update the private key present in the rsa_private_key_pkcs1.pem file in the ocscp-secret, run the following command:

      TLS_CRT=$(base64 < "rsa_private_key_pkcs1.pem" | tr -d '\n') 
      kubectl patch secret ocscp-secret -p "{\"data\":{\"rsa_private_key_pkcs1.pem\":\"${TLS_CRT}\"}}" -n scpsvc

      Similarly, you can also update other certificates and keys to the ocscp-secret.

    • To remove an existing certificate, run the following command:
      kubectl patch secret <secret-name> -p "{\"data\":{\"<certificate-name>\":null}}" -n <namespace>

      Where, <certificate-name> is the name of the certificate to be removed.

      The certificate must be removed when it expires or needs to be revoked.

      Example:

      To remove the CA Root from the ocscp-secret, run the following command:
      kubectl patch secret ocscp-secret -p "{\"data\":{\"caroot.cer\":null}}" -n scpsvc
      

      Similarly, you can also remove other certificates and keys from the ocscp-secret.

The certificate update and renewal impacts are as follows:
  • Updating, adding, or deleting a certificate terminates all the existing connections gracefully and reestablishes new connections for new requests.
  • When a certificate expires, no new connections are established for new requests; however, the existing connections remain active. After the certificate is renewed as described in Step 3, all the existing connections are gracefully terminated, and new connections are established with the renewed certificate.
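To plan renewals, you can check a certificate's expiry date with a generic openssl command (shown here for the example certificate file used earlier in this section):

openssl x509 -in server_ocscp.cer -noout -enddate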

Managing Kubernetes Secret Through OCCM

To create the Kubernetes secret using OCCM, see "Managing Certificates" in Oracle Communications Cloud Native Core, Certificate Management User Guide. Then, patch the Kubernetes secret created by OCCM to add the keyStore and trustStore password files by running the following commands:
  1. To patch the Kubernetes secret created with the keyStore password file:
    TLS_CRT=$(base64 < "key.txt" | tr -d '\n')
    kubectl patch secret server-primary-ocscp-secret-occm -n scpsvc -p "{\"data\":{\"key.txt\":\"${TLS_CRT}\"}}"

    Where, key.txt is the KeyStore password file that contains the KeyStore password.

  2. To patch the Kubernetes secret created with the trustStore password file:
    TLS_CRT=$(base64 < "trust.txt" | tr -d '\n')
    kubectl patch secret server-primary-ocscp-secret-occm -n scpsvc -p "{\"data\":{\"trust.txt\":\"${TLS_CRT}\"}}"

    Where, trust.txt is the TrustStore password file that contains the TrustStore password.

Note:

To monitor the lifecycle management of the certificates through OCCM, do not patch the Kubernetes secret manually to update the TLS certificates or keys. Updates must be done through the OCCM GUI.
2.2.1.10 Configuring SSL or TLS Certificates for OCNADD
Perform the following procedure to ensure a successful TLS and SASL handshake with Oracle Communications Network Analytics Data Director (OCNADD) or Kafka:
  1. To create Kubernetes secret for the OCNADD TLS certificate, run the following command:
    kubectl create secret generic primary-ocscpdd-secret --from-file=<CA root bundle> --from-file=<ssl truststore file name> --from-file=<rsa private key file name> --from-file=<ssl rsa certificate file name> --from-file=<ssl keystore file name> -n <Namespace of SCP deployment>
    

    Example:

    kubectl create secret generic primary-ocscpdd-secret --from-file=cacert.pem --from-file=ddtrust.txt --from-file=dd_rsa_private_key_pkcs1.pem --from-file=dd_certificate.cer --from-file=ddkey.txt -n scpsvc
  2. To create the secret for OCNADD SASL credentials, run the following command:
    kubectl create secret generic ocscpddsasl-secret --from-file=<user name file> --from-file=<password file> -n <Namespace of SCP deployment>

    Example:

    kubectl create secret generic ocscpddsasl-secret --from-file=userName.txt --from-file=password.txt -n scpsvc
  3. Run the following command to verify the OCNADD secret created:
    kubectl describe secret ocscpddsasl-secret -n <Namespace of OCSCP deployment>

    Example:

    kubectl describe secret ocscpddsasl-secret -n scpsvc
2.2.1.11 Configuring SCP to Support Aspen Service Mesh

SCP leverages the Platform Service Mesh (for example, Aspen Service Mesh (ASM)) for all internal and external TLS communication by deploying a sidecar proxy in each pod to intercept all network communication between microservices. The service mesh integration provides inter-NF communication and allows the API gateway to work with the service mesh.

Supported ASM versions: 1.11.8, 1.14.6, and 1.21.6.

For ASM installation and configuration details, see the official Aspen Service Mesh website.

Aspen Service Mesh (ASM) configurations are categorized as follows:

  • Control Plane: It involves adding labels or annotations to inject sidecars. The control plane configurations are part of the NF Helm chart.
  • Data Plane: It helps in traffic management, such as handling NF call flows, by adding Service Entries (SE), Destination Rules (DR), Envoy Filters (EF), and other resource changes such as apiVersion changes between different versions. This configuration is done manually by considering each NF requirement and the ASM deployment.

Data Plane Configuration

Data Plane configuration consists of the following Custom Resource Definitions (CRDs):

  • Service Entry (SE)
  • Destination Rule (DR)
  • Envoy Filter (EF)

Note:

Use Helm charts to add or remove the CRDs that may be required due to ASM upgrades to configure features across different releases.

The data plane configuration is applicable in the following scenarios:

  • NF to NF Communication: During NF to NF communication, where sidecars are injected into both NFs, an SE and DR corresponding to the other NF must be configured. Otherwise, the sidecar rejects the communication. All egress communications of NFs must have configured SE and DR entries.

    Note:

    Configure the core DNS with the producer NF endpoint to enable sidecar access for establishing communication between clusters.
  • Kube-api-server: A few NFs require access to the Kubernetes API server, which the ASM proxy (mTLS enabled) may block. As per the F5 recommendation, the NF must add an SE for the Kubernetes API server in its own namespace.
  • Envoy Filters: Sidecars rewrite headers with their own default values; therefore, the headers from back-end services are lost. Envoy Filters are required to pass the headers from back-end services unchanged.

ASM Configuration File

A sample ocscp_servicemesh_config_values_25.2.100.yaml is available in the Scripts folder of ocscp_csar_25_2_1_0_0_0.zip. For downloading the file, see Customizing SCP. To view ASM EnvoyFilter configuration enhancements, see ASM Configuration.

Note:

To connect to vDBTier, create an SE and DR for the MySQL connectivity service if the database is in a different cluster. Otherwise, the sidecar rejects the requests because vDBTier does not support sidecars.
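A sketch of such an SE and DR for the MySQL connectivity service is shown below; the FQDN is a placeholder, and the port and TLS mode are assumptions that must match your cnDBTier deployment:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mysql-se
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  hosts:
  - "<mysql-connectivity-service-FQDN>"
  ports:
  - number: 3306        # default MySQL port; adjust to your deployment
    name: tcp-mysql
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: NONE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: mysql-dr
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  host: <mysql-connectivity-service-FQDN>
  trafficPolicy:
    tls:
      mode: DISABLE     # assumption: no mTLS because vDBTier does not support sidecars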
2.2.1.11.1 Predeployment Configurations
This section explains the predeployment configuration procedure to install SCP with ASM support.

Note:

  • For information about ASM parameters, see ASM Resource. You can log in to ASM using ASPEN credentials.
  • On the ASM setup, create service entries for respective namespace.
  1. Run the following command to create a namespace for SCP deployment if not already created:
    kubectl create ns <scp-namespace-name>
  2. Run the following command to configure access to Kubernetes API Service and create a service entry in pod networking so that pods can access Kubernetes api-server:
    kubectl apply -f kube-api-se.yaml
    Sample kube-api-se.yaml file is as follows:
    # service_entry_kubernetes.yaml
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: kube-api-server
      namespace: <scp-namespace>
    spec:
      hosts:
      - kubernetes.default.svc.<domain>
      exportTo:
      - "."
      addresses:
      - <10.96.0.1> # cluster IP of kubernetes api server
      location: MESH_INTERNAL
      ports:
      - number: 443
        name: https
        protocol: HTTPS
      resolution: NONE
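    To obtain the cluster IP of the Kubernetes API server for the addresses field, you can query the default kubernetes service (a standard kubectl lookup, shown for convenience):
    kubectl get svc kubernetes -n default -o jsonpath='{.spec.clusterIP}'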
  3. Run the following command to set up Network Repository Function (NRF) connectivity by creating a ServiceEntry and DestinationRule to access an external or public NRF service that is not part of the Service Mesh Registry:
    kubectl apply -f nrf-se-dr.yaml
    Sample nrf-se-dr.yaml file is as follows:
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: nrf-dr
      namespace: <scp-namespace>
    spec:
      exportTo:
      - .
      host: ocnrf.3gpp.oracle.com
      trafficPolicy:
        tls:
          mode: MUTUAL
          clientCertificate: /etc/certs/cert-chain.pem
          privateKey: /etc/certs/key.pem
          caCertificates: /etc/certs/root-cert.pem
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: nrf-se
      namespace: <scp-namespace>
    spec:
      exportTo:
      - .
      hosts:
      - "ocnrf.3gpp.oracle.com"
      ports:
      - number: <port number of host in hosts section>
        name: http2
        protocol: HTTP2
      location: MESH_EXTERNAL
      resolution: NONE
  4. Run the following command to enable communication between internal Network Functions (NFs):

    Note:

    If Consumer and Producer NFs are not part of the Service Mesh Registry, create Destination Rules and Service Entries in the SCP namespace for all known call flows to enable inter-NF communication.
    kubectl apply -f known-nf-se-dr.yaml
    Sample known-nf-se-dr.yaml file is as follows:
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: udm1-dr
      namespace: <scp-namespace>
    spec:
      exportTo:
      - .
      host: s24e65f98-bay190-rack38-udm-11.oracle-ocudm.cnc.us-east.oracle.com
      trafficPolicy:
        tls:
          mode: MUTUAL
          clientCertificate: /etc/certs/cert-chain.pem
          privateKey: /etc/certs/key.pem
          caCertificates: /etc/certs/root-cert.pem
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: udm1-se
      namespace: <scp-namespace>
    spec:
      exportTo:
      - .
      hosts:
      - "s24e65f98-bay190-rack38-udm-11.oracle-ocudm.cnc.us-east.oracle.com"
      ports:
      - number: 16016
        name: http2
        protocol: HTTP2
      location: MESH_EXTERNAL
      resolution: NONE

    Note:

    Create DestinationRule and ServiceEntry ASM resources for the following scenarios:
    • When an NF is registered with callback URIs or notification URIs that are not part of the Service Mesh Registry
    • When a callbackReference is used in a known call flow and contains a URI that is not part of the Service Mesh Registry
    Run the following command:
    kubectl apply -f callback-uri-se-dr.yaml
    Sample callback-uri-se-dr.yaml file is as follows:
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: udm-callback-dr
      namespace: <scp-namespace>
    spec:
      exportTo:
      - .
      host: udm-notifications-processor-03.oracle-ocudm.cnc.us-east.oracle.com
      trafficPolicy:
        tls:
          mode: MUTUAL
          clientCertificate: /etc/certs/cert-chain.pem
          privateKey: /etc/certs/key.pem
          caCertificates: /etc/certs/root-cert.pem
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: udm-callback-se
      namespace: <scp-namespace>
    spec:
      exportTo:
      - .
      hosts:
      - "udm-notifications-processor-03.oracle-ocudm.cnc.us-east.oracle.com"
      ports:
      - number: 16016
        name: http2
        protocol: HTTP2
      location: MESH_EXTERNAL
      resolution: NONE
  5. To equally distribute ingress connections among the SCP worker threads, create a new YAML file with an EnvoyFilter for the ASM sidecar and apply it by running the following command:

    You must apply the EnvoyFilter to process inbound connections on the ASM sidecar when SCP is deployed with ASM.

    kubectl apply -f envoy_inbound.yaml

    Sample envoy_inbound.yaml file is as follows:

    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: inbound-envoyfilter
      namespace: <scp-namespace>
    spec:
      workloadSelector:
        labels:
          app: ocscp-scp-worker
      configPatches:
        - applyTo: LISTENER
          match:
            context: SIDECAR_INBOUND
            listener:
              portNumber: 15090
          patch:
            operation: MERGE
            value:
              connection_balance_config:
                exact_balance: {}
    

Note:

  • The ASM sidecar portNumber can be configured depending on the deployment. For example, 15090.
  • Do not configure any virtual service that applies connection or transaction timeout between various SCP services.
2.2.1.11.2 Enabling Dual Stack Networking for ASM
Perform the following procedure before deploying Aspen Service Mesh (ASM) to enable dual stack networking for ASM. With the dual stack functionality, SCP with sidecars can use IPv4, IPv6, or both to establish connections with pods and services.

Note:

  • ASM should be deployed in dual stack mode.
  • To enable Dual Stack, perform a fresh installation of SCP. An upgrade from a single stack to a dual stack is not supported.
  1. Open the aspen-mesh-override-values.yaml file.
    For more information about the aspen-mesh-override-values.yaml file and ASM installation, see https://clouddocs.f5.com/products/aspen-service-mesh/1.11/.
  2. In the global section, do the following:
    1. To enable dual stack functionality in Istio to work in Kubernetes, set the dualStack parameter to true.
    2. To establish communication between gateway and external sources, set the ingressGatewayDualStack parameter to true.
  3. Save the aspen-mesh-override-values.yaml file.
2.2.1.11.3 Deploying SCP with ASM

Deployment Configuration

You must complete the following deployment configuration before performing the Helm install.
  1. Run the following command to label the SCP namespace for automatic sidecar injection so that sidecars are automatically added to all pods spawned in the SCP namespace:
    kubectl label ns <scp-namespace> istio-injection=enabled
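    You can verify that the label is applied by running the following standard kubectl command:
    kubectl get namespace <scp-namespace> --show-labels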
  2. Create a Service Account for SCP and a role with appropriate security policies for sidecar proxies to work by referring to the sa-role-rolebinding.yaml file mentioned in the next step.
  3. Map the role and service accounts by creating a role binding as specified in the sample sa-role-rolebinding.yaml file:
    kubectl apply -f sa-role-rolebinding.yaml
    Sample sa-role-rolebinding.yaml file is as follows:
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: {{ template "noncluster.role.name" . }}
      namespace: {{ .Release.Namespace }}
      labels:
        {{- include "labels.allResources" . }}
      annotations:
        {{- include "annotations.allResources" . }}
    rules:
    - apiGroups: [""]
      resources:
      - pods
      - services
      - configmaps
      verbs: ["get", "list", "watch"]
    - apiGroups:
      - "" # "" indicates the core API group
      resources:
      - secrets
      - endpoints
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: {{ template "noncluster.rolebinding.name" . }}
      namespace: {{ .Release.Namespace }}
      labels:
        {{- include "labels.allResources" . }}
      annotations:
        {{- include "annotations.allResources" . }}
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: {{ template "noncluster.role.name" . }}
    subjects:
    - kind: ServiceAccount
      name: {{ template "noncluster.serviceaccount.name" . }}
      namespace: {{ .Release.Namespace }}
    ---
    apiVersion: v1
    kind: ServiceAccount
    {{- if .Values.imagePullSecrets }}
    imagePullSecrets:
    {{- range .Values.imagePullSecrets }}
      - name: {{ . }}
    {{- end }}
    {{- end }}
    metadata:
      name: {{ template "noncluster.serviceaccount.name" . }}
      namespace: {{ .Release.Namespace }}
      labels:
        {{- include "labels.allResources" . }}
      annotations:
        {{- include "annotations.allResources" . }}
    
  4. Update ocscp_custom_values_25.2.100.yaml with the following annotations:

    Note:

    Update other values such as DB details and service account as created in the previous steps.
    global:
      customExtension:
        allResources:
          annotations:
            sidecar.istio.io/inject: "true"
        lbDeployments:
          annotations:
            sidecar.istio.io/inject: "true"
            oracle.com/cnc: "true"
        nonlbDeployments:
          annotations:
            sidecar.istio.io/inject: "true"
            oracle.com/cnc: "true"
     
      scpServiceAccountName: <"ocscp-release-1-10-2-scp-serviceaccount">
      database:
        dbHost: <"scp-db-connectivity-service"> #DB Service FQDN
     
    scpc-configuration:
      service:
        type: ClusterIP
     
    scp-worker:
      tracingenable: false
      service:   
        type: ClusterIP
    

    Note:

    1. The sidecar.istio.io/inject: "false" annotation on all resources prevents sidecar injection on pods created by Helm jobs or hooks.
    2. Deployment overrides re-enable auto sidecar injection on all deployments.
    3. SCP-Worker override disables automatic sidecar injection for the SCP-Worker microservice because it is done manually in later stages. This override is only required for ASM release 1.4 or 1.5. If integrating with ASM release 1.6 or later, it must be removed.
    4. The oracle.com/cnc annotation is required for integration with OSO services.
    5. Jaeger tracing must be disabled because it may interfere with SM end-to-end traces.
  5. To set sidecar resources for each microservice in the ocscp_custom_values_25.2.100.yaml file under deployment.customExtension.annotations, configure the following ASM annotations with the resource values for the services:

    SCP uses these annotations to assign the resources of the sidecar containers.

    • sidecar.istio.io/proxyMemory: Indicates the memory requested for the sidecar.
    • sidecar.istio.io/proxyMemoryLimit: Indicates the maximum memory limit for the sidecar.
    • sidecar.istio.io/proxyCPU: Indicates the CPU requested for the sidecar.
    • sidecar.istio.io/proxyCPULimit: Indicates the CPU limit for the sidecar.
  6. Define the concurrency setting for the sidecar container. The sidecar container concurrency value must be at least equal to the maximum number of vCPUs allocated to the sidecar container, as follows:
    proxy.istio.io/config: |-
              concurrency: 6
    1. Set the concurrency of SCPC-Notification pods to 18.
    2. Set the concurrency of SCP-Worker pods as follows:
      • 4 for a 4 vCPU profile
      • 8 for an 8 vCPU profile
      • 12 for a 12 vCPU profile
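    For illustration, the sidecar resource annotations from Step 5 and the concurrency setting from this step might be combined as follows; the values shown are placeholders, not recommended settings:

    customExtension:
      annotations:
        sidecar.istio.io/proxyCPU: "4"            # requested sidecar vCPUs (placeholder)
        sidecar.istio.io/proxyCPULimit: "4"       # sidecar vCPU limit (placeholder)
        sidecar.istio.io/proxyMemory: "4Gi"       # requested sidecar memory (placeholder)
        sidecar.istio.io/proxyMemoryLimit: "4Gi"  # sidecar memory limit (placeholder)
        proxy.istio.io/config: |-
          concurrency: 4                          # at least the sidecar vCPU allocation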
2.2.1.11.4 Deployment Configurations

ASM Configuration to Allow XFCC Header

An Envoy Filter must be added to allow the XFCC header on the ASM sidecar.

Sample file:

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: <name>
  namespace: <namespace>
spec:
  workloadSelector:
    labels:
      app.kubernetes.io/instance: <SCP Deployment name>
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          forward_client_cert_details: ALWAYS_FORWARD_ONLY
          use_remote_address: true
          xff_num_trusted_hops: 1

Inter-NF Communication

For every new NF participating in new call flows, a DestinationRule and ServiceEntry must be created in the SCP namespace to enable communication. This is done in the same way as for the known call flows configured earlier.

Run the following command to create DestinationRule and ServiceEntry:

kubectl apply -f new-nf-se-dr.yaml
Sample new-nf-se-dr.yaml file for DestinationRule and ServiceEntry:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: <unique DR name for NF>
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  host: <NF-public-FQDN>
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/cert-chain.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: <unique SE name for NF>
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  hosts:
  - <NF-public-FQDN>
  ports:
  - number: <NF-public-port>
    name: http2
    protocol: HTTP2
  location: MESH_EXTERNAL
  resolution: NONE

Operations Services Overlay Installation

For Operations Services Overlay (OSO) installation instructions, see Oracle Communications Cloud Native Core, Operations Services Overlay Installation Guide.

Note:

If OSO is deployed in the same namespace as SCP, ensure that all deployments of OSO have the annotation to skip sidecar injection as OSO does not support ASM sidecar proxy.

CNE Common Services for Logging

For information about CNE installation instructions, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

Note:

If CNE is deployed in the same namespace as SCP, ensure that all deployments of CNE have the annotation to skip sidecar injection as CNE does not support ASM sidecar proxy.
2.2.1.11.5 Deleting ASM

This section describes the steps to delete ASM.

To delete ASM, run the following command:

helm delete <helm-release-name> -n <namespace>

Where,

  • <helm-release-name> is the release name used by the Helm command. This release name must be the same as the release name used for ServiceMesh.
  • <namespace> is the deployment namespace used by the Helm command.

For example:

helm delete ocscp-servicemesh-config -n ocscp

To disable ASM, run the following command:

kubectl label --overwrite namespace ocscp istio-injection=disabled

To verify if ASM is disabled, run the following command:

kubectl get se,dr,peerauthentication,envoyfilter,vs -n ocscp
2.2.1.12 Configuring Network Policies for SCP
Kubernetes network policies allow you to define ingress or egress rules based on Kubernetes resources such as Pod, Namespace, IP, and Port. These rules are selected based on Kubernetes labels in the application. Network policies enforce access restrictions for all applicable data flows except communication from a Kubernetes node to a pod for invoking container probes.

Note:

Configuring network policies is a recommended step. Based on the security requirements, network policies may or may not be configured.
For more information about this functionality, see https://kubernetes.io/docs/concepts/services-networking/network-policies/.

Note:

  • If traffic is unexpectedly blocked or allowed between pods even after applying network policies, check whether any existing policy applies to the same pod or set of pods and alters the overall cumulative behavior.
  • If the default ports of services such as Prometheus, Database, or Jaeger are changed, or if the Ingress or Egress Gateway names are overridden, update them in the corresponding network policies.
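For orientation, a minimal NetworkPolicy of the kind configured in this section might look as follows; the policy name, labels, and port are illustrative only:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-example      # illustrative name
  namespace: scpsvc
spec:
  podSelector:
    matchLabels:
      app: ocscp-scp-worker        # illustrative pod label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: scpsvc
    ports:
    - protocol: TCP
      port: 8000                   # illustrative port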

Configuring Network Policies

The following operations can be performed for network policies:

2.2.1.12.1 Installing Network Policies

Prerequisite

Network policies are implemented by the network plug-in. To use network policies, you must use a networking solution that supports NetworkPolicy.

Note:

For a fresh installation, it is recommended to install Network Policies before installing SCP. However, if SCP is already installed, you can still install the Network Policies.
To install network policy:
  1. Open the ocscp-network-policy-custom-values-25.2.100.yaml file provided in the release package zip file. For downloading the file, see Downloading the SCP Package and Pushing the Images to Customer Docker Registry.
  2. The file is provided with the default network policies. If required, update the ocscp-network-policy-custom-values-25.2.100.yaml file. For more information about the parameters, see Configuration Parameters for Network Policies.

    Note:

    To run ATS, uncomment the following policies from ocscp-network-policy-custom-values-25.2.100.yaml:
    • allow-ingress-traffic-to-notification
    • allow-egress-for-ats
    • allow-ingress-to-ats
    • To connect with CNC Console, update the following parameter in the allow-ingress-from-console network policy in the ocscp-network-policy-custom-values-25.2.100.yaml file:
      • kubernetes.io/metadata.name: <namespace in which CNCC is deployed>
    • In the allow-ingress-prometheus policy, the kubernetes.io/metadata.name parameter must contain the namespace where Prometheus is deployed, and the app.kubernetes.io/name parameter value must match the label from the Prometheus pod.
  3. Run the following command to install the network policies:
    helm install <helm-release-name> <network-policy>/ -n <namespace> -f <custom-value-file>
    For example:
    helm install ocscp-network-policy ocscp-network-policy/ -n scpsvc -f ocscp-network-policy-custom-values-25.2.100.yaml
    • helm-release-name: ocscp-network-policy Helm release name.
    • custom-value-file: ocscp-network-policy custom value file.
    • namespace: SCP namespace.
    • network-policy: location where the network-policy package is stored.

Note:

  • Connections that were created before installing the network policies and that still persist are not impacted by the new network policies. Only new connections are impacted.
  • If you are using the ATS suite along with network policies, install SCP and ATS in the same namespace.
2.2.1.12.2 Upgrading Network Policies
To add, delete, or update network policy:
  1. Modify the ocscp-network-policy-custom-values-25.2.100.yaml file to update, add, and delete the network policies.
  2. Run the following command to upgrade the network policies:
    helm upgrade <helm-release-name> <network-policy>/ -n <namespace> -f <custom-value-file>
    For example:
    helm upgrade ocscp-network-policy ocscp-network-policy/ -n ocscp -f ocscp-network-policy-custom-values-25.2.100.yaml
    where,
    • helm-release-name: ocscp-network-policy Helm release name.
    • custom-value-file: ocscp-network-policy custom value file.
    • namespace: SCP namespace.
    • network-policy: location where the network-policy package is stored.
2.2.1.12.3 Verifying Network Policies
Run the following command to verify whether the network policies are deployed successfully:
kubectl get networkpolicies -n <namespace>
For example:
kubectl get networkpolicies -n ocscp
where,
  • namespace: SCP namespace.
2.2.1.12.4 Uninstalling Network Policies
Run the following command to uninstall all the network policies:
helm uninstall <release_name> --namespace <namespace>
For example:
helm uninstall ocscp-network-policy --namespace scpsvc

Note:

While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.

2.2.1.12.5 Configuration Parameters for Network Policies

Table 2-16 Supported Kubernetes Resource for Configuring Network Policies

Parameter Description Details
apiVersion

This is a mandatory parameter.

Specifies the Kubernetes version for access control.

Note: This is the supported api version for network policy. This is a read-only parameter.

Data Type: string

Default Value: networking.k8s.io/v1

kind

This is a mandatory parameter.

Specifies the REST resource this object represents.

Note: This is a read-only parameter.

Data Type: string

Default Value: NetworkPolicy

Table 2-17 Configuration Parameters for Network Policy

Parameter Description Details
metadata.name

This is a mandatory parameter.

Specifies a unique name for the network policy.

{{ .metadata.name }}
spec.{}

This is a mandatory parameter.

This consists of all the information needed to define a particular network policy in the given namespace.

Note: SCP supports the spec parameters defined in "Supported Kubernetes Resource for Configuring Network Policies".

Default Value: NA

For more information about this functionality, see "Network Policies" in Oracle Communications Cloud Native Core, Service Communication Proxy User Guide.

2.2.2 Installation Tasks

This section provides installation procedures to install Oracle Communications Cloud Native Core, Service Communication Proxy (SCP).

Before installing SCP, you must complete the Prerequisites and Preinstallation Tasks for both the deployment methods.

2.2.2.1 Installing SCP Package
To install the SCP package:

Note:

For each SCP deployment in the network, use a unique SCP database name during the installation.
  1. Run the following command to access the extracted package:
    cd ocscp-<release_number>

    Example:

    cd ocscp-25.2.100

  2. Customize the ocscp_values_25.2.100.yaml file with the required deployment parameters. See the Customizing SCP chapter to customize the file. For more information about predeployment parameter configurations, see Preinstallation Tasks.

    Note:

    If NRF configuration is required, see Configuring Network Repository Function Details.
  3. (Optional) If you want to install SCP with Aspen Service Mesh (ASM), perform the predeployment tasks as described in Configuring SCP to Support Aspen Service Mesh.
  4. Open the ocscp_values_25.2.100.yaml file and enable Release 16 with Model C Indirect 5G SBI Communication support by manually adding - rel16 under releaseVersion, and then uncomment the scpProfileInfo.servingScope and scpProfileInfo.nfSetIdList parameters.

    Note:

    - rel16 is the default release version. For more information about Release 16, see 3GPP TS 23.501.

    Sample custom-values.yaml file output:

    global:
      domain: svc.cluster.local
      clusterDomain: cluster.local
      # If ingress gateway is available then set ingressGWAvailable flag to true
      # and provide ingress gateway IP and Port in publicSignalingIP and publicSignalingPort respectively.
      # If ingressGWAvailable flag is true then service type for scp-worker will be ClusterIP
      # otherwise it will be LoadBalancer.
      # We can not set ingressGWAvailable flag true and at the same time publicSignalingIPSpecified flag as false.
      # If you want to assign a load balancer IP,set loadbalanceripenbled flag to true and
      # provide value for flag loadbalancerip
      # else a random IP will be assigned if loadbalanceripenbled is false
      # and it will not use loadbalancerip flag
      adminport: 8001
      # enable or disable jaeger tracing
      tracingEnable: &scpworkerTracingEnabled false
      enableTraceBody: &scpworkerJaegerBodyEnabled false
      #otelTracingEnabled: &scpworkerOtelTracingEnabled false
      releaseVersion:
      - rel16
  5. Run the following command to install SCP using charts from the Helm repository:
    helm install <release name> -f <custom_values.yaml> --namespace <namespace> <helm-repo>/<chart_name> --version <helm_version>
    In case the charts are extracted, run:
      helm install <release name> -f <custom_values.yaml> --namespace <namespace> <chartpath>

    Example:

    helm install ocscp -f <custom_values.yaml> ocscp-helm-repo/ocscp --namespace scpsvc --version <helm_version>

    Caution:

    Do not exit from the helm install command manually. After running the helm install command, it takes some time to install all the services. Do not press "Ctrl+C" to exit the helm install command while it is in progress, as it can lead to anomalous behavior.

2.2.3 Postinstallation Tasks

This section explains the postinstallation tasks for SCP.

2.2.3.1 Verifying SCP Installation
To verify the installation:
  1. Run the following command to verify the installation status:
    helm status <helm-release> --namespace <namespace>

    Where,

    • <helm-release> is the Helm release name of SCP.
    • <namespace> is the namespace of SCP deployment.

    Example:

    helm status ocscp --namespace ocscp

    The system displays the status as deployed if the deployment is successful.
  2. Run the following command to check whether all the services are deployed and active:
    kubectl -n <namespace_name> get services

    Example:

    NAME                                                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)
    <helm-release-name>-scp-cache                              LoadBalancer   10.96.65.127    <pending>     8091:31668/TCP,9000:31087/TCP,30001:31028/TCP                 
    <helm-release-name>-scp-cache-headless                     ClusterIP      None            <none>        8010/TCP                                                      
    <helm-release-name>-scp-load-manager                       ClusterIP      10.96.217.195   <none>        8091/TCP,8040/TCP,9000/TCP                                    
    <helm-release-name>-scp-mediation                          ClusterIP      10.96.197.99    <none>        9090/TCP,9091/TCP,8091/TCP                                    
    <helm-release-name>-scp-nrfproxy                           ClusterIP      10.96.139.20    <none>        8091/TCP,8086/TCP                                             
    <helm-release-name>-scp-nrfproxy-oauth                     ClusterIP      10.96.36.166    <none>        8091/TCP,8040/TCP,9000/TCP                                    
    <helm-release-name>-scp-worker                             LoadBalancer   10.96.65.218    <pending>     8091:31259/TCP,8000:31790/TCP,9000:31115/TCP,9443:30113/TCP   
    <helm-release-name>-scp-worker-int                         ClusterIP      10.96.64.254    <none>        8092/TCP                                                      
    <helm-release-name>-scpc-alternate-resolution              ClusterIP      10.96.69.12     <none>        8091/TCP,8084/TCP                                             
    <helm-release-name>-scpc-alternate-resolution-int          ClusterIP      10.96.91.133    <none>        8092/TCP                                                      
    <helm-release-name>-scpc-audit                             ClusterIP      10.96.178.49    <none>        8091/TCP,8083/TCP                                             
    <helm-release-name>-scpc-audit-int                         ClusterIP      10.96.170.31    <none>        8092/TCP                                                      
    <helm-release-name>-scpc-configuration                     LoadBalancer   10.96.247.33    <pending>     8091:32070/TCP,8081:30308/TCP                                 
    <helm-release-name>-scpc-configuration-int                 ClusterIP      10.96.230.133   <none>        8092/TCP                                                      
    <helm-release-name>-scpc-notification                      ClusterIP      10.96.72.44     <none>        8091/TCP,8082/TCP,9000/TCP                                    
    <helm-release-name>-scpc-notification-int                  ClusterIP      10.96.139.117   <none>        8092/TCP                                                      
    <helm-release-name>-scpc-subscription                      ClusterIP      10.96.91.150    <none>        8091/TCP,8080/TCP            
  3. Run the following command to check whether all the pods are up and active:
    kubectl -n <namespace_name> get pods

    Example:

    kubectl get pods -n scpsvc
    NAME                                       READY   STATUS    RESTARTS   AGE
    ocscp-scp-cache-8444cd8f6d-gfsmx                      1/1     Running   0             2d23h
    ocscp-scp-load-manager-5664c7c8b4-rmrd2               1/1     Running   0             2d23h
    ocscp-scp-nrfproxy-5f44ff5f55-84f44                   1/1     Running   0             2d23h
    ocscp-scp-nrfproxy-oauth-5dbc78689d-mkhnt             1/1     Running   0             3m2s
    ocscp-scp-worker-6dc45b7cfc-2tfz5                     1/1     Running   0             28h
    ocscp-scpc-audit-6ff496fcc9-jkwj5                     1/1     Running   0             2d23h
    ocscp-scpc-configuration-5d66df6f4-6hdll              1/1     Running   0             2d23h
    ocscp-scpc-notification-7f49b85c99-c4p9v              1/1     Running   0             2d23h
    ocscp-scpc-subscription-6b785f77b4-9rtn2              1/1     Running   0             2d23h

    Note:

    If the installation is unsuccessful or the STATUS of all the pods is not in the Running state, perform the troubleshooting steps provided in Oracle Communications Cloud Native Core, Service Communication Proxy Troubleshooting Guide.
2.2.3.2 Performing Helm Test

This section describes how to perform a sanity check of the SCP installation through Helm test. The pods to be checked are based on the namespace and label selector configured for the Helm test.

Helm Test is a feature that validates installation of SCP and determines if the NF is ready to accept traffic.

This test also checks that all the PVCs are in the Bound state under the release namespace and configured label selector.

Note:

Helm test can be performed only with Helm 3.
Perform the following Helm test procedure:
  1. Configure the Helm test configurations under the global parameters section of the ocscp_custom_values_25.2.100.yaml file as follows:
    
    nfName: ocscp
    image:
      name: nf_test
      tag: <string>
      pullPolicy: Always
    config:
      logLevel: WARN
      timeout: 180
    resources:
        - horizontalpodautoscalers/v1
        - deployments/v1
        - configmaps/v1
        - serviceaccounts/v1
        - roles/v1
        - services/v1
        - rolebindings/v1
    
    

    For more information, see Customizing SCP.

  2. Run the following Helm test command:
    helm test <release_name> -n <namespace>

    Example:

    helm test ocscp -n ocscp

    Sample Output:
    NAME: ocscp
    LAST DEPLOYED: Fri Sep 18 10:08:03 2020
    NAMESPACE: ocscp
    STATUS: deployed
    REVISION: 1
    TEST SUITE:     ocscp-test
    Last Started:   Fri Sep 18 10:41:25 2020
    Last Completed: Fri Sep 18 10:41:34 2020
    Phase:          Succeeded
    NOTES:
    # Copyright 2020 (C), Oracle and/or its affiliates. All rights reserved.

Note:

  • After running the helm test, the pod moves to a completed state. Hence, to remove the pod, run the following command:
    kubectl delete pod <releaseName>-test -n <namespace>
  • The Helm test only verifies whether all the pods running in the namespace are in the Ready state, such as 1/1 or 2/2. It does not check the deployment.
  • If the Helm test fails, see Oracle Communications Cloud Native Core, Service Communication Proxy Troubleshooting Guide.
2.2.3.3 Taking Backup of Important Files
Take a backup of the following files, which are required during fault recovery:
  1. The updated ocscp_custom_values_25.2.100.yaml file.
  2. The updated Helm charts.
  3. Secrets, certificates, and keys that are used during installation.
2.2.3.4 Alert Configuration

This section describes the alert rules configuration for SCP. The Alert Manager uses the Prometheus measurement values reported by microservices in the conditions defined under alert rules to trigger alerts.

2.2.3.4.1 Applying Alerts Rule to CNE without Prometheus Operator

SCP Helm Chart Release Name: _NAME_

Prometheus NameSpace: _Namespace_

Perform the following procedure to configure Service Communication Proxy alerts in Prometheus.
  1. Run the following command to check the name of the config map used by Prometheus:
    $ kubectl get configmap -n <_Namespace_>
    Example:
    $ kubectl get configmap -n prometheus-alert2
    NAME                                  DATA   AGE
    lisa-prometheus-alert2-alertmanager   1      146d
    lisa-prometheus-alert2-server         4      146d
  2. Take a backup of the current config map of Prometheus. The following command saves the configmap in the provided file; in this example, the configmap is stored in the /tmp/tempConfig.yaml file:
    $ kubectl get configmaps <_NAME_>-server -o yaml -n <_Namespace_> > /tmp/tempConfig.yaml
    
    Example:
    $ kubectl get configmaps lisa-prometheus-alert2-server -o yaml -n prometheus-alert2 > /tmp/tempConfig.yaml
  3. Check and delete the "alertsscp" rule if it is already configured in the Prometheus config map. If configured, this step removes the "alertsscp" rule. This step is optional if you are configuring the alerts for the first time.
    $ sed -i '/etc\/config\/alertsscp/d' /tmp/tempConfig.yaml
  4. Add the "alertsscp" rule in the configmap dump file under the ' rule_files ' tag.
    $ sed -i '/rule_files:/a\    \- /etc/config/alertsscp'  /tmp/tempConfig.yaml
  5. Update the configmap using the following command. Ensure that you use the same configmap name that was used while taking the backup of the Prometheus configmap.
    $ kubectl replace configmap <_NAME_>-server -f /tmp/tempConfig.yaml
    Example:
    $ kubectl replace configmap lisa-prometheus-alert2-server -f /tmp/tempConfig.yaml
  6. Run the following command to patch the configmap with a new "alertsscp" rule:

    Note:

    The patch file, SCPAlertrules.yaml, is provided in the ocscp_csar_23_2_0_0_0.zip package delivered with SCP.
    $ kubectl patch configmap <_NAME_>-server -n <_Namespace_> --type merge --patch "$(cat ~/SCPAlertrules.yaml)"

    Example:
    $ kubectl patch configmap lisa-prometheus-alert2-server -n prometheus-alert2 --type merge --patch "$(cat ~/SCPAlertrules.yaml)"

Note:

Prometheus takes about 20 seconds to apply the updated Config map.
2.2.3.4.2 Applying Alerts Rule to CNE with Prometheus Operator
Perform the following procedure to apply alerts rule to Cloud Native Environment (CNE) with Prometheus Operator (CNE 1.9.0 and later).
  1. Run the following command to apply SCP alerts file to create Prometheus rules Custom Resource Definition (CRD):
    kubectl apply -f <file_name> -n <scp namespace>
    Where,
    • <file_name> is the SCP alerts file.
    • <scp namespace> is the SCP namespace.

    Example:

    kubectl apply -f ocscp_alerting_rules_promha_25.2.100.yaml -n scpsvc

    Sample file delivered with SCP package:

    ocscp_alerting_rules_promha_25.2.100.yaml
2.2.3.4.3 Configuring Service Communication Proxy Alert using the SCPAlertrules.yaml file

Note:

The default NameSpace for Service Communication Proxy is scpsvc. You can update the NameSpace as per the deployment.

To access the scpAlertsrules_<scp release number>.yaml file from the Scripts folder of ocscp_csar_25_1_1_0_0_0.zip, download the SCP package from My Oracle Support as described in "Downloading the SCP Package" in Oracle Communications Cloud Native Core, Service Communication Proxy Installation, Upgrade, and Fault Recovery Guide.

Alerts Details

Description and summary for alerts are added by the Prometheus alert manager.

Alerts are supported for the following three scenarios in which a resource or routing threshold is crossed:
  • SCP Ingress Traffic Rate Above Threshold
    • It has three threshold levels: Minor (9800 mps to 11200 mps), Major (11200 mps to 13300 mps), and Critical (above 13300 mps). These values are configurable.
    • In the description, information is presented similar to: "Ingress Traffic Rate at Locality: <Locality of scp> is above <threshold level (minor/major/critical)> threshold (i.e. <value of threshold>)"
    • In Summary: "Namespace: <Namespace of scp deployment at that Locality>, Pod: <SCP-worker Pod name>: Current Ingress Traffic Rate is <Current rate of Ingress traffic> mps which is above 70 Percent of Max MPS (<upper limit of ingress traffic rate per pod>)"

      Note:

      The ingress traffic rate is per scp-worker pod in a namespace at a particular SCP locality. Currently, 14000 mps is the upper limit per scp-worker pod.
  • SCP Routing Failed For Service
    • It indicates the NF Service Type and NF Type at a particular locality for which routing failed.
    • Description: "Routing failed for service"
    • Summary: "Routing failed for service: NFService Type = <Message NF Service Type>, NFType = <Message NF Type>, Locality = <SCP Locality where Routing Failed> and value = <Accumulated failures of such messages for the NFType and NFService Type till now>"

      Note:

      The value field currently does not provide the number of failures in a particular time interval; instead, it provides the total number of routing failures.
  • SCP Pod Memory Usage: The type of alert is SCPWorkerPodMemoryUsage.
    • Pod memory usage for the SCP pods (Soothsayer and Worker) deployed at a particular node instance is provided.
    • The Soothsayer pod threshold is 8 GB.
    • The Worker pod threshold is 16 Gi.
    • Summary: "Instance: <Node Instance name>, NameSpace: <Namespace of SCP deployment>, Pod: <(Soothsayer/Worker) Pod name>: <Soothsayer/Worker> Pod High Memory usage detected"
    • Summary: "Instance: <Node Instance name>, Namespace: <Namespace of SCP deployment>, Pod: <(Soothsayer/Worker) Pod name>: Memory usage is above <threshold value>G (current value is: <current value of memory usage>)"
2.2.3.4.4 Configuring Alert Manager for SNMP Notifier

Grouping of alerts is based on:

  • podname
  • alertname
  • severity
  • namespace
  • nfServiceType
  • nfServiceInstanceId
Add subroutes for SCP alerts in the AlertManager config map as follows:
  1. Take a backup of the current config map of Alertmanager by running the following command:
    kubectl get configmaps <NAME-alertmanager> -oyaml -n <Namespace> > /tmp/bkupAlertManagerConfig.yaml
    

    Example:

    kubectl get configmaps occne-prometheus-alertmanager -oyaml -n occne-infra > /tmp/bkupAlertManagerConfig.yaml
  2. Edit Configmap to add subroute for SCP Trap OID:
    kubectl edit configmaps <NAME-alertmanager> -n <Namespace>
    Example:
    kubectl edit configmaps occne-prometheus-alertmanager -n occne-infra
  3. Add the subroute under 'route' in configmap:
    routes:
          - receiver: default-receiver
            group_interval: 1m
            group_wait: 10s
            repeat_interval: 9y
            group_by: [podname, alertname, severity, namespace, nfservicetype, nfserviceinstanceid, servingscope, nftype]
            match_re:
              oid: ^1.3.6.1.4.1.323.5.3.35.(.*)

MIB Files for SCP

There are two MIB files that are used to generate the traps. Update these files along with the alert file to fetch the traps in your environment.
  • ocscp_mib_tc_25.2.100.mib: This is the SCP top-level MIB file, where the objects and their data types are defined.
  • ocscp_mib_25.2.100.mib: This file fetches the objects from the top-level MIB file; based on the alert notification, these objects can be selected for display.

Note:

MIB files are packaged with ocscp_csar_23_2_0_0_0.zip. You can download the file from MOS as described in Oracle Communications Cloud Native Core, Service Communication Proxy Installation, Upgrade, and Fault Recovery Guide.
2.2.3.4.5 Configuring SCP Alerts for OCI

OCI supports metric expressions written in MQL (Metric Query Language). Therefore, configuring SCP alerts in the OCI observability platform requires the ocscp_oci_alertrules_25.2.100.zip file. For more information, see Oracle Communications Cloud Native Core, OCI Deployment Guide.

2.2.3.4.6 OSO Alerts Automation

Alerts are automated by using the Helm upgrade command with the Helm chart provided as part of the OSO software package. A new oso-alr-config Helm chart is included in the OSO software package from release 25.1.200 onwards. For information about downloading the OSO software package, see Oracle Communications, Cloud Native Core, Operations Services Overlay Installation and Upgrade Guide.

The alerts automation procedure is as follows:

  1. Deploy the oso-alr-config Helm chart when OSO is installed.
    This separate Helm chart allows the Helm install command to run with an input alert file:
    helm install oso-alr-config oso-alr-config/ -f custom-oso-alr-config-values.yaml -f ocscp_alertrules.yaml
  2. After the oso-alr-config Helm chart is installed, oso-alr-config is ready to use.
  3. If you are enabling this feature after the SCP deployment is complete, run the following Helm upgrade command on the oso-alr-config release to apply the SCP alert file:
    helm upgrade oso-alr-config oso-alr-config/ -f custom-oso-alr-config-values.yaml -f ocscp_alertrules.yaml
  4. When the Helm upgrade is completed, the alert file is applied to the OSO Prometheus ConfigMap, and the resulting alerts can be viewed in the Prometheus Graphical User Interface (GUI).
  5. To modify alerts later, update the same alert file and perform another Helm upgrade; the applied rules are updated with the latest changes. You can verify the applied rules as shown after this procedure.
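To confirm that the rules were applied, you can inspect the OSO Prometheus ConfigMap directly. A minimal check, where the ConfigMap name and namespace are assumptions that depend on your OSO deployment:

    # ConfigMap name and namespace are placeholders; adjust them to your OSO deployment.
    kubectl get configmap <oso-prometheus-configmap> -n <oso-namespace> -o yaml | grep -A 5 "alertsscp"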

Cleaning Up the Alerts

Perform the following procedure to clean up the alerts:
  1. An empty ocscp_alertrules_empty.yaml file is delivered as part of the OSO software package. For information about downloading the OSO software package, see Oracle Communications, Cloud Native Core, Operations Services Overlay Installation and Upgrade Guide. You must provide this file as input during the Helm upgrade.
  2. Providing ocscp_alertrules_empty.yaml as the input file to the Helm upgrade command removes all the alerts from the OSO Prometheus ConfigMap and the Prometheus GUI. The reference under rule_files ("/etc/config/alertsscp") is kept, and the alert rules become empty ("alertsscp: {}").
  3. Run the following Helm upgrade command to clean up alert rules:
    helm upgrade oso-alr-config oso-alr-config/ -f custom-oso-alr-config-values.yaml -f ocscp_alertrules_empty.yaml
A sample empty alert file is as follows:
apiVersion: v1
data:
  alerts: |
    {}
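After the cleanup upgrade, the same ConfigMap check should show only the empty rule set ("alertsscp: {}"); for example, under the same naming assumptions as above:

    kubectl get configmap <oso-prometheus-configmap> -n <oso-namespace> -o yaml | grep "alertsscp"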

2.2.4 Configuring Network Repository Function Details

Network Repository Function (NRF) details must be defined during the SCP installation by updating them in the values.yaml file.

Note:

You can configure a primary NRF and an optional secondary NRF. NRFs must have the back-end DB synchronized.

An IPv4 or IPv6 address of NRF must be configured if NRF is outside the Kubernetes cluster. If NRF is inside the Kubernetes cluster, you can configure the FQDN. If both an IP address (IPv4 or IPv6) and an FQDN are provided, the IP address takes precedence over the FQDN.

Note:

  • You must configure or remove the apiPrefix parameter based on whether APIPrefix is supported by NRF.
  • You must update the FQDN, IP address, and port of NRF to point to NRF's FQDN or IP address and port. The primary NRF profile must always be set to the higher priority, that is, 0. Ensure that the primary and secondary profiles are not set to the same priority value. An illustrative sketch of these settings follows this note.
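As an illustration only, NRF details in the values.yaml file typically take a shape like the following. The parameter names and nesting shown here are assumptions for readability; the authoritative structure is the values.yaml file delivered with your SCP release:

    # Illustrative sketch only: parameter names and nesting are assumptions.
    # Follow the structure in the values.yaml delivered with your SCP release.
    nrfDetails:
      - fqdn: "nrf1.example.com"   # usable when NRF is inside the Kubernetes cluster
        ip: ""                     # IPv4 or IPv6; takes precedence over FQDN if set
        port: 8080
        priority: 0                # primary NRF: higher priority, that is, 0
        apiPrefix: ""              # set or remove based on NRF's APIPrefix support
      - fqdn: "nrf2.example.com"
        ip: ""
        port: 8080
        priority: 1                # secondary NRF: must differ from the primary
        apiPrefix: ""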

2.2.5 Configuring SCP as HTTP Proxy

To route messages towards SCP, consumer NFs must use the scp-worker address, <FQDN or IP Address>:<PORT of SCP-Worker>, in their http_proxy/HTTP_PROXY configuration.

Note:

Run the following commands from a host from which the SCP worker can be accessed and its FQDN resolved.
Perform the following procedure to configure SCP as HTTP proxy:
  1. To test successful deployment of SCP, run the following curl command:
    $ curl -v -X GET --url 'http://<FQDN:PORT of SCP-Worker>/nnrf-nfm/v1/subscriptions/' --header 'Host:<FQDN:PORT of NRF>'
  2. As a client, fetch the current subscription list from NRF by sending the request to NRF through SCP:

    Example:

    $ curl -v -X GET --url 'http://scp-worker.scpsvc:8000/nnrf-nfm/v1/subscriptions/' --header 'Host:ocnrf-ambassador.nrfsvc:80'
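For example, a consumer NF on a Linux host could export the proxy settings before sending requests; a minimal sketch using the same in-cluster SCP worker address as the example above:

    # Route HTTP requests through the SCP worker acting as the proxy.
    export http_proxy=http://scp-worker.scpsvc:8000
    export HTTP_PROXY=http://scp-worker.scpsvc:8000
    # The request now targets NRF directly; curl sends it through the configured proxy.
    curl -v -X GET --url 'http://ocnrf-ambassador.nrfsvc:80/nnrf-nfm/v1/subscriptions/'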

2.2.6 Configuring Multus Container Network Interface

Perform the following procedure to configure Multus Container Network Interface (CNI) after SCP installation is complete.

Note:

To verify whether this feature is enabled, see "Verifying the Availability of Multus Container Network Interface" in Oracle Communications Cloud Native Core, Service Communication Proxy Troubleshooting Guide.
  1. In the Kubernetes cluster, create a NetworkAttachmentDefinition (NAD) file.
    Example of a NAD file name: ipvlan-sig.yaml
    Sample NAD file:
    apiVersion: "k8s.cni.cncf.io/v1"
    
    kind: NetworkAttachmentDefinition
    
    metadata:
    
      name:ipvlan-siga
    
    spec:
    
      config: '{
    
          "cniVersion": "0.3.1",
    
          "type": "ipvlan",
    
          "primary": "eth1",
    
          "mode": "l2",
    
          "ipam": {
    
            "type": "host-local",
    
            "subnet": "<signaling-subnet>",
    
            "rangeStart": "x.x.x.x.",
    
            "rangeEnd": "x.x.x.x",
    
            "routes": [
    
              { "dst": "<nsx_lb_network_address_AMF>"}   ,
    
               { "dst":“<nsx_lb_network_address_SMF>”}  ,
    
                { "dst":“<nsx_lb_network_address_NRF>”} ,
    
                { "dst":“<nsx_lb_network_address_UDR>”} ,  
    
                 { "dst":“<nsx_lb_network_address_CHF>”} ,  
    
                 ],
    
            "gateway": "x.x.x.x"
    
          }
    
        }'
  2. Run the following command to create a NetworkAttachmentDefinition custom resource for defining the Multus CNI network interfaces and their routing details (a verification check follows the example):
    kubectl apply -f <NAD_file_name> -n <namespace>

    Example:

    kubectl apply -f ipvlan-sig.yaml -n scpsvc
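    To confirm that the custom resource was created, list the NetworkAttachmentDefinition resources in the namespace:

    kubectl get network-attachment-definitions -n scpsvc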
  3. Add the following annotation to the deployment for which additional network interfaces need to be added by Multus CNI:
    k8s.v1.cni.cncf.io/networks: <network as defined in NAD>

    Where, <network as defined in NAD> indicates the network as defined in NetworkAttachmentDefinition.

    Sample values.yaml file:

    scp-worker:
      deployment:
        # Labels and annotations that are specific to the deployment are added here.
        customExtension:
          labels: {}
          annotations: {k8s.v1.cni.cncf.io/networks: '[{ "name": "ipvlan-siga"}]'}
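    After the pods restart with this annotation, Multus records the attached interfaces in each pod's k8s.v1.cni.cncf.io/network-status annotation. A minimal check, where <scp-worker-pod> is a placeholder for an actual pod name in your deployment:

    kubectl describe pod <scp-worker-pod> -n scpsvc | grep -A 10 "k8s.v1.cni.cncf.io/network-status"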

2.2.7 Adding and Removing IP-based Signaling Services

The following subsections describe how to add and remove IP-based Signaling Services as part of the Support for Multiple Signaling Service IPs feature.

2.2.7.1 Adding a Signaling Service

Perform the following procedure to add an IP-based signaling service.

  1. Open the ocscp_values.yaml file.
  2. In the serviceSpecifications section, add a new service under the workerServices list similar to the default service as follows:
    name: "<service_name>"
    #type:LoadBalancer
    networkNameEnabled: false
    networkName: "metallb.universe.tf/address-pool: signaling"
    publicSignalingIPSpecified: true
    publicSignalingIP: <IP address>
    publicSignalingIPv6Specified: false
    publicSignalingIPv6: <IP address>
    ipFamilyPolicy: *workerIpFamilyPolicy
    ipFamilies: *workerIpFamilies
    port:
    staticNodePortEnabled: false
    nodePort: <Port number>
    nodePortHttps: <Port number>
    customExtension:
    labels: {}
    annotations: {}
    Where,
    • <service_name> is the name of the service.
    • <IP address> is the signaling IP address of the service.
    • <Port number> is the port number of the service.

    Example:

    name: "scp-worker-net1"
    #type:LoadBalancer
    networkNameEnabled: false
    networkName: "metallb.universe.tf/address-pool: signaling"
    publicSignalingIPSpecified: false
    publicSignalingIP: 10.75.212.100
    publicSignalingIPv6Specified: true
    publicSignalingIPv6: 2001:db8:85a3::8a2e:370:7334
    ipFamilyPolicy: *workerIpFamilyPolicy
    ipFamilies: *workerIpFamilies
    port:
    staticNodePortEnabled: false
    nodePort: 30075
    nodePortHttps: 30076
    customExtension:
    labels: {}
    annotations: {}
  3. Optional: To add a preferred IP address for NRF callback, in the global section, under the scpSubscriptionInfo parameter, add the IP address of the new service to ip.

    You can provide either an IPv4 or an IPv6 address.

    Example:

    scpSubscriptionInfo:
      ip: "10.75.212.100" # metallb or primaryIp; this IP is obtained from the metallb pool. Either an IPv4 or IPv6 address can be provided.
      # Scheme to use in callbackURI, either http or https
      scheme: "http"
    
  4. Save the file.
  5. Run the following Helm upgrade command and wait until the upgrade is complete:

    Note:

    It is recommended to perform the Helm upgrade on the same SCP version that contains the newly added IP-based signaling service configuration.
    helm upgrade <release_name> <helm_repo/helm_chart> --version <chart_version> -f <ocscp_values.yaml> --namespace <namespace-name>

    Where,

    • <release_name> is the release name used by the Helm command.
    • <helm_repo/helm_chart> is the location of the Helm chart extracted from the target ocscp_csar_25_2_1_0_0_0.zip file.
    • <chart_version> is the version of the Helm chart extracted from the ocscp_csar_25_2_1_0_0_0.zip file.
    • <ocscp_values.yaml> is the SCP customized values.yaml file.
    • <namespace-name> is the SCP namespace in which the SCP release is deployed.
    Example:
    helm upgrade ocscp ocscp-helm-repo/ocscp --version 25.2.100 -f ocscp_values.yaml --namespace ocscp
    
  6. Run the following command to check whether the service is available:
    kubectl get svc -n <namespace>
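    For example, to filter for the service added in this procedure, using the names from the example above:

    kubectl get svc -n ocscp | grep scp-worker-net1

    For a LoadBalancer service, the external IP in the output typically reflects the configured signaling IP.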
2.2.7.2 Removing a Signaling Service

Perform the following procedure to remove an IP-based signaling service.

Before removing any IP address, ensure that no traffic is routed to that IP. For more information, see the SCP dashboard metrics described in Oracle Communications Cloud Native Core, Service Communication Proxy User Guide.
  1. Open the ocscp_values.yaml file.
  2. Locate the publicSignalingIP of the signaling service that you want to remove and set the corresponding publicSignalingIPSpecified parameter to false.
    Example:
    publicSignalingIPSpecified: false
    publicSignalingIP: 10.75.212.88
  3. Optional: If the service IP being removed is already part of scpSubscriptionInfo, do one of the following:
    • To update the alternate IP: In the global section, under the scpSubscriptionInfo parameter, update the ip parameter with the preferred service IP address.
    • To remove the alternate IP: In the global section, under the scpSubscriptionInfo parameter, remove the IP address.
  4. Save the file.
  5. Run the following Helm upgrade command and wait until the upgrade is complete:

    Note:

    It is recommended to perform the Helm upgrade on the same SCP version that already contains the IP-based signaling service.
    helm upgrade <release_name> <helm_repo/helm_chart> --version <chart_version> -f <ocscp_values.yaml> --namespace <namespace-name>

    Where,

    • <release_name> is the release name used by the Helm command.
    • <helm_repo/helm_chart> is the location of the Helm chart extracted from the target ocscp_csar_25_2_1_0_0_0.zip file.
    • <chart_version> is the version of the Helm chart extracted from the ocscp_csar_25_2_1_0_0_0.zip file.
    • <ocscp_values.yaml> is the SCP customized values.yaml file.
    • <namespace-name> is the SCP namespace in which the SCP release is deployed.
    Example:
    helm upgrade ocscp ocscp-helm-repo/ocscp --version 25.2.100 -f ocscp_values.yaml --namespace ocscp
    
  6. Perform one of the following steps to clean up the deleted services:
    • To clean up Kubernetes services manually, run the following command:
      kubectl delete svc <svc_name> --namespace <namespace-name>
    • To clean up Kubernetes services through Helm upgrade, remove all the parameters of the removed IP-based service from the serviceSpecifications section of the ocscp_values.yaml file, and then perform the Helm upgrade as described in Step 5.

    Remove the following sample parameters manually from the serviceSpecifications section:

    - name: "<service name>"
      #type: LoadBalancer
      networkNameEnabled: false
      networkName: "metallb.universe.tf/address-pool: signaling"
      publicSignalingIPSpecified: false
      publicSignalingIP: 10.75.212.88
      port:
        staticNodePortEnabled: true
        nodePort: 30075
      customExtension:
        labels: {}
        annotations: {}
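    After the cleanup, you can confirm that the removed service object is gone; for example:

    # The removed service should no longer appear in the output.
    kubectl get svc -n <namespace-name> | grep <service name>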