2 Installing SCP

This chapter provides information about installing SCP in a cloud native environment, including the prerequisites and downloading the deployment package.

Note:

SCP supports fresh installation, and it can also be upgraded from 24.1.x and 24.2.x. For more information about how to upgrade SCP, see Upgrading SCP.

SCP installation is supported over the following platforms:

  • Oracle Communications Cloud Native Core, Cloud Native Environment (CNE): For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
  • Oracle Cloud Infrastructure (OCI) using OCI Adaptor: For more information about OCI, see Oracle Communications Cloud Native Core, OCI Deployment Guide.

SCP installation comprises prerequisites, preinstallation, installation, and postinstallation tasks. You must perform the SCP installation tasks in the same sequence as outlined in Table 2-11 in Installation Sequence.

2.1 Prerequisites

Before installing and configuring SCP, ensure that the following prerequisites are met.

2.1.1 Software Requirements

This section lists the software that must be installed before installing SCP.

The following software must be installed before installing SCP:

Table 2-2 Preinstalled Software

Software | Tested Software Version (for SCP 25.1.1xx / SCP 24.3.x / SCP 24.2.x) | Software Requirement | Usage Description
Kubernetes 1.31.x 1.30.x 1.29.x Mandatory

Kubernetes orchestrates scalable, automated network function (NF) deployments for high availability and efficient resource utilization.

Impact:

Without orchestration capabilities, deploying and managing network functions (NFs) can become complex, leading to inefficient resource utilization and potential downtime.

Helm 3.16.2 3.15.2 3.13.2 Mandatory

Helm, a package manager, simplifies deploying and managing network functions (NFs) on Kubernetes with reusable, versioned charts for easy automation and scaling.

Impact:

Pre-installation is required. Not using this capability may result in error-prone and time-consuming management of NF versions and configurations, impacting deployment consistency.

Podman 4.9.4 4.6.1 4.6.1 Mandatory

Podman manages and runs containerized network functions (NFs) without requiring a daemon, offering flexibility and compatibility with Kubernetes.

Impact:

Pre-installation is required, as Podman is part of Oracle Linux. Without efficient container management, the development and deployment of NFs could become cumbersome, impacting agility.

To check the versions of the preinstalled software in the cloud native environment, run the following commands:

kubectl version
helm version
podman version
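
To print only the version numbers, the following sketch may help (assuming Helm 3 and a recent Podman; exact flags and output formats vary by release):

helm version --short                            # for example, v3.16.2
podman version --format '{{.Client.Version}}'   # for example, 4.9.4
kubectl version | grep -i 'server version'      # server-side Kubernetes version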

The following software is available if SCP is deployed in CNE. If you are deploying SCP in any other cloud native environment, this additional software must be installed before installing SCP.

To check the installed software, run the following command:

helm ls -A

The list of additional software items, along with the supported versions and usage, is provided in the following table:

Table 2-3 Additional Software

Software | Tested Software Version (for SCP 25.1.1xx / SCP 24.3.x / SCP 24.2.x) | Software Requirement | Usage Description
Oracle OpenSearch 2.11.0 2.11.0 2.11.0 Recommended

OpenSearch provides scalable search and analytics for 5G network functions (NFs), enabling efficient data exploration and visualization.

Impact:

A lack of a robust analytics solution could lead to challenges in identifying performance issues and optimizing NF operations, ultimately affecting overall service quality.

OpenSearch Dashboard 2.11.0 2.11.0 2.11.0 Recommended

OpenSearch Dashboard visualizes and analyzes data for 5G network functions (NFs), offering interactive insights and custom reporting.

Impact:

Without visualization capabilities, understanding NF performance metrics and trends would be difficult, limiting informed decision-making.

Fluentd OpenSearch 1.17.1 1.16.2 1.16.2 Recommended

Fluentd is an open-source data collector that streamlines data collection and consumption, allowing for improved data utilization and comprehension.

Impact:

Not utilizing centralized logging can hinder the ability to track network function (NF) activity and troubleshoot issues effectively, complicating maintenance and support.

Kyverno 1.12.5 1.12.5 1.9.0 Recommended

Kyverno is a Kubernetes policy engine that helps manage and enforce policies for resource configurations within a Kubernetes cluster.

Impact:

Failing to implement policy enforcement could lead to misconfigurations, resulting in security risks and instability in network function (NF) operations, affecting reliability.

Grafana 9.5.3 9.5.3 9.5.3 Recommended

Grafana is a popular open-source platform for monitoring and observability. It provides a user-friendly interface for creating and viewing dashboards based on various data sources.

Impact:

Without visualization tools, interpreting complex metrics and gaining insights into network function (NF) performance would be cumbersome, hindering effective management.

Prometheus 2.52.0 2.52.0 2.51.1 Recommended

Prometheus is a popular open-source monitoring and alerting toolkit. It collects and stores metrics from various sources and allows for alerting and querying.

Impact:

Not employing this monitoring solution could result in a lack of visibility into network function (NF) performance, making it difficult to troubleshoot issues and optimize resource usage.

Jaeger 1.60.0 1.60.0 1.52.0 Recommended

Jaeger provides distributed tracing for 5G network functions (NFs), enabling performance monitoring and troubleshooting across microservices.

Impact:

Not utilizing distributed tracing may hinder the ability to diagnose performance bottlenecks, making it challenging to optimize NF interactions and improve the user experience.

MetalLB 0.14.4 0.14.4 0.14.4 Recommended

MetalLB provides load balancing and external IP management for 5G network functions (NFs) in Kubernetes environments and is used as the load balancing solution in CNE.

Impact:

Load balancing is mandatory for the solution to work. Without load balancing, traffic distribution among NFs may be inefficient, leading to potential bottlenecks and service degradation.

Note:

On OCI, the software mentioned above is not required because the OCI Observability and Management service is used for logging, metrics, alerts, and KPIs. For more information, see Oracle Communications Cloud Native Core, OCI Deployment Guide.

2.1.2 Environment Setup Requirements

This section describes the environment setup requirements for installing SCP.
2.1.2.1 Client Machine Requirement

This section describes the requirements for the client machine, that is, the machine used to run deployment commands.

The client machine should have:

  • Helm repository configured.
  • network access to the Helm repository and Docker image repository.
  • network access to the Kubernetes cluster.
  • required environment settings to run kubectl, docker, and podman commands. The environment should have privileges to create a namespace in the Kubernetes cluster.
  • Helm client installed with the push plugin. Configure the environment in such a manner that the helm install command deploys the software in the Kubernetes cluster.
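
The following sketch verifies these settings, assuming a hypothetical Helm repository name and URL; adjust them to your environment:

helm repo add ocscp-helm-repo https://helm-repo.example.com/ocscp   # configure the Helm repository
helm plugin install https://github.com/chartmuseum/helm-push        # install the Helm push plugin
kubectl auth can-i create namespaces                                # verify namespace-creation privileges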
2.1.2.2 Network Access Requirements

The Kubernetes cluster hosts must have network access to the following repositories:

  • Local Helm repository: It contains SCP Helm charts.
    To check if the Kubernetes cluster hosts can access the local Helm repository, run the following command:
    helm repo update
  • Local Docker image repository: It contains SCP Docker images.

    To check if the Kubernetes cluster hosts can access the local Docker image repository, pull any image with an image-tag using either of the following commands:

    docker pull <docker-repo>/<image-name>:<image-tag>
    podman pull <podman-repo>/<image-name>:<image-tag>

Where,

  • <docker-repo> is the IP address or host name of the Docker repository.
  • <podman-repo> is the IP address or host name of the Podman repository.
  • <image-name> is the Docker image name.
  • <image-tag> is the tag assigned to the Docker image used for the SCP pod.

For example:

docker pull CUSTOMER_REPO/oc-app-info:24.3.0

podman pull occne-repo-host:5000/ocscp/oc-app-info:24.3.0

Note:

Run kubectl and helm commands on a system based on the deployment infrastructure. For example, they can be run on a client machine such as a VM, a server, or a local desktop.
2.1.2.3 Server or Space Requirement

For information about server or space requirements, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

2.1.2.4 CNE Requirement

This section is applicable only if you are installing SCP on Cloud Native Environment (CNE).

SCP supports CNE 24.3.x, 24.2.x, and 24.1.x.

To check the CNE version, run the following command:

echo $OCCNE_VERSION

Note:

If Istio or Aspen Service Mesh (ASM) is installed on CNE, run the following command to patch the "disallow-capabilities" clusterpolicy of CNE and exclude the NF namespace before the NF deployment:
kubectl patch clusterpolicy disallow-capabilities --type "json" -p '[{"op":"add","path":"/spec/rules/0/exclude/any/0/resources/namespaces/-","value":"<namespace of NF>"}]'

Where, <namespace of NF> is the namespace of SCP, cnDBTier, or Oracle Communications Cloud Native Configuration Console (CNC Console).
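
For example, if SCP is deployed in the ocscp namespace:

kubectl patch clusterpolicy disallow-capabilities --type "json" -p '[{"op":"add","path":"/spec/rules/0/exclude/any/0/resources/namespaces/-","value":"ocscp"}]'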

For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

2.1.2.5 OCI Requirements

SCP can be deployed in OCI. While deploying SCP in OCI, you must use the Operator instance/VM instead of the Bastion Host.

For more information about OCI Adaptor, see Oracle Communications Cloud Native Core, OCI Adaptor User Guide.

2.1.2.6 cnDBTier Requirements

Note:

If the parameter values in the new ocscp_dbtier_custom_values.yaml file differ from those in the delivered ocscp_dbtier_custom_values.yaml file, obtain the values of the cnDBTier parameters listed in cnDBTier Customization Parameters from the delivered file and use them in the new file.

SCP supports cnDBTier 24.3.x, 24.2.x, and 24.1.x. cnDBTier must be configured and running before installing SCP.

Note:

In georedundant deployment, each site should have a dedicated cnDBTier.

To install cnDBTier 24.3.x with the resources recommended for SCP, customize the ocscp_dbtier_24.3.0_custom_values_24.3.0.yaml file in the ocscp_csar_24_3_0_0_0.zip package with the required deployment parameters. cnDBTier parameters vary depending on whether the deployment is a single-site, two-site, or three-site deployment. For more information, see cnDBTier Customization Parameters.

Note:

If you already have an older version of cnDBTier, upgrade it with the resources recommended for SCP by customizing the ocscp_dbtier_24.3.0_custom_values_24.3.0.yaml file in the ocscp_csar_24_3_0_0_0.zip package with the required deployment parameters. Use the same PVC size as in the previous release. For more information, see cnDBTier Customization Parameters.

For more information about cnDBTier installation, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

2.1.2.7 OCCM Requirements
SCP supports OCCM 24.3.x.

To support automated certificate lifecycle management, SCP integrates with Oracle Communications Cloud Native Core, Certificate Management (OCCM) in compliance with 3GPP security recommendations. For more information about OCCM, see the following guides:

  • Oracle Communications Cloud Native Core, Certificate Management Installation, Upgrade, and Fault Recovery Guide
  • Oracle Communications Cloud Native Core, Certificate Management User Guide
2.1.2.8 OSO Requirement

SCP supports Operations Services Overlay (OSO) 24.3.x, 24.2.x, and 24.1.x for common operations services (Prometheus and components such as Alertmanager and Pushgateway) on Kubernetes clusters that do not have these common services. For more information about OSO installation, see Oracle Communications Cloud Native Core, Operations Services Overlay Installation Guide.

2.1.2.9 CNC Console Requirements

SCP supports CNC Console 24.3.x to configure and manage Network Functions. For more information, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.

2.1.3 Resource Requirements

This section lists the resource requirements to install and run SCP.

Note:

The performance and capacity of the SCP system may vary based on the call model, feature or interface configuration, network conditions, and underlying CNE and hardware environment.

2.1.3.1 SCP Services

The following table lists the resource requirements for SCP services:

Table 2-4 SCP Services

Service Name | Pod Replicas (Min/Max) | vCPU per Pod (Min/Max) | Memory per Pod in Gi (Min/Max) | Ephemeral Storage per Pod: Min in Mi / Max in Gi (If Enabled)
Helm test 1 1 3 3 3 3 70 1
Helm Hook 1 1 3 3 3 3 70 1
<helm-release-name>-scpc-subscription 1 1 2 2 2 2 70 1
<helm-release-name>-scpc-notification 1 1 4 4 4 4 70 1
<helm-release-name>-scpc-audit 1 1 3 3 4 4 70 1
<helm-release-name>-scpc-configuration 1 1 2 2 2 2 70 1
<helm-release-name>-scpc-alternate-resolution 1 1 2 2 2 2 70 1
<helm-release-name>-scp-cache 3 3 8 8 8 8 70 1
<helm-release-name>-scp-nrfproxy 2 16 8 8 8 8 70 1
<helm-release-name>-scp-load-manager 2 3 8 8 8 8 70 1
<helm-release-name>-scp-oauth-nrfproxy 2 16 8 8 8 8 70  
<helm-release-name>-scp-worker(profile 1) 2 32 4 4 8 8 70 1
<helm-release-name>-scp-worker(profile 2) 2 64 8 8 12 12 70 1
<helm-release-name>-scp-mediation 2 16 8 8 8 8 70 1
<helm-release-name>-scp-mediation test 1 1 8 8 8 8 70 1
<helm-release-name>-scp-worker(profile 3) 2 64 12 12 16 16 70 1

Note:

  • To go beyond 60000 Transactions Per Second (TPS), you must deploy SCP with scp-worker configured with Profile 2.
  • <helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is OCSCP, the SCPC-Subscription microservice name is "OCSCP-SCPC-Subscription".
  • Helm Hook Jobs: These are pre and post jobs invoked during installation, upgrade, rollback, and uninstallation of the deployment. They are short-lived jobs that terminate after the deployment operation completes.
  • Helm Test Job: This job runs on demand when the Helm test command is initiated. It runs the Helm test and stops after completion. These short-lived jobs are not part of the active deployment resources and are considered only during Helm test procedures.
2.1.3.2 Upgrade

The following table lists the resource requirements for upgrading SCP.

Table 2-5 Upgrade

Service Name | Pod Replicas (Min/Max) | vCPU per Pod (Min/Max) | Memory per Pod in Gi (Min/Max) | Ephemeral Storage per Pod: Min in Mi / Max in Gi (If Enabled)
Helm test 0 0 0 0 0 0 70 1
Helm Hook 0 0 0 0 0 0 70 1
<helm-release-name>-scpc-subscription 1 1 1 1 1 1 70 1
<helm-release-name>-scpc-notification 1 1 4 4 4 4 70 1
<helm-release-name>-scpc-audit 1 1 3 3 4 4 70 1
<helm-release-name>-scpc-configuration 1 1 2 2 2 2 70 1
<helm-release-name>-scpc-alternate-resolution 1 1 2 2 2 2 70 1
<helm-release-name>-scp-cache 1 1 8 8 8 8 70 1
<helm-release-name>-scp-nrfproxy 1 4 8 8 8 8 70 1
<helm-release-name>-scp-load-manager 1 1 8 8 8 8 70 1
<helm-release-name>-scp-oauth-nrfproxy 1 4 8 8 8 8 70 1
<helm-release-name>-scp-worker(profile 1) 2 8 4 4 8 8 70 1
<helm-release-name>-scp-worker(profile 2) 2 16 8 8 12 12 70 1
<helm-release-name>-scp-mediation 2 4 8 8 8 8 70 1
<helm-release-name>-scp-mediation test 0 0 0 0 0 0 70 1
<helm-release-name>-scp-worker(profile 3) 2 16 12 12 16 16 70 1

Note:

<helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is OCSCP, the SCPC-Subscription microservice name is "OCSCP-SCPC-Subscription".
2.1.3.3 ASM Sidecar

SCP leverages the Platform Service Mesh (for example, Aspen Service Mesh) for all internal and external TLS communication. If ASM sidecar injection is enabled during SCP deployment or upgrade, this container is injected into each SCP pod (or into selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists. For more information about installing ASM, see Configuring SCP to Support Aspen Service Mesh.
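
As an illustration, sidecar injection is often enabled by labeling the SCP namespace before deployment (a sketch assuming standard Istio-style injection labels; the exact label or annotation may differ in your ASM deployment):

kubectl label namespace ocscp istio-injection=enabled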

Table 2-6 ASM Sidecar

Service Name | vCPU per Pod (Min/Max) | Memory per Pod in Gi (Min/Max) | Ephemeral Storage per Pod: Min in Mi / Max in Gi (If Enabled)
Helm test 2 2 1 1 70 1
Helm Hook 0 0 0 0 70 1
<helm-release-name>-scpc-subscription 2 2 1 1 70 1
<helm-release-name>-scpc-notification 2 2 1 1 70 1
<helm-release-name>-scpc-audit 2 2 1 1 70 1
<helm-release-name>-scpc-configuration 2 2 1 1 70 1
<helm-release-name>-scpc-alternate-resolution 2 2 1 1 70 1
<helm-release-name>-scp-cache 4 4 4 4 70 1
<helm-release-name>-scp-nrfproxy 5 5 5 5 70 1
<helm-release-name>-scp-load-manager 4 4 4 4 70 1
<helm-release-name>-scp-oauth-nrfproxy 5 5 5 5 70 1
<helm-release-name>-scp-worker (profile 1) 3 3 4 4 70 1
<helm-release-name>-scp-worker (profile 2) 5 5 5 5 70 1
<helm-release-name>-scp-mediation 0 0 0 0 70 1
<helm-release-name>-scp-mediation test 0 0 0 0 70 1
<helm-release-name>-scp-worker (profile 3) 8 8 8 8 70 1

Note:

<helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is OCSCP, the SCPC-Subscription microservice name is "OCSCP-SCPC-Subscription".
2.1.3.4 Debug Tool Container

The Debug Tool Container provides third-party troubleshooting tools for debugging runtime issues in a lab environment. If Debug Tool Container injection is enabled during SCP deployment or upgrade, this container is injected into each SCP pod (or into selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists. For more information about configuring the Debug Tool Container, see Oracle Communications Cloud Native Core, Service Communication Proxy Troubleshooting Guide.

Table 2-7 Debug Tool Container

Service Name | vCPU per Pod (Min/Max) | Memory per Pod in Gi (Min/Max) | Ephemeral Storage per Pod: Min in Mi / Max in Gi (If Enabled)
Helm test 0 0 0 0 70 1
Helm Hook 0 0 0 0 70 1
<helm-release-name>-scpc-subscription 1 1 2 2 70 1
<helm-release-name>-scpc-notification 1 1 2 2 70 1
<helm-release-name>-scpc-audit 1 1 2 2 70 1
<helm-release-name>-scpc-configuration 1 1 2 2 70 1
<helm-release-name>-scpc-alternate-resolution 1 1 2 2 70 1
<helm-release-name>-scp-cache 1 1 2 2 70 1
<helm-release-name>-scp-nrfproxy 1 1 2 2 70 1
<helm-release-name>-scp-load-manager 1 1 2 2 70 1
<helm-release-name>-scp-oauth-nrfproxy 1 1 2 2 70 1
<helm-release-name>-scp-worker(profile 1) 1 1 2 2 70 1
<helm-release-name>-scp-worker(profile 2) 1 1 2 2 70 1
<helm-release-name>-scp-mediation 1 1 2 2 70 1
<helm-release-name>-scp-mediation test 1 1 2 2 70 1
<helm-release-name>-scp-worker (profile 3) 1 1 2 2 70 1

Note:

<helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is OCSCP, the SCPC-Subscription microservice name is "OCSCP-SCPC-Subscription".
2.1.3.5 CNC Console

Oracle Communications Cloud Native Configuration Console (CNC Console) is a Graphical User Interface (GUI) for NFs and Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) common services. For information about CNC Console resources required by SCP, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.

2.1.3.6 cnDBTier Resources

This section describes the cnDBTier resources required to deploy SCP.

Table 2-8 cnDBTier Services Resource Requirements

Service Name | CPU per Pod (Min/Max) | Memory per Pod in GB (Min/Max) | PVC Size in GB (PVC1/PVC2) | Ephemeral Storage (Min/Max in MB)
MGMT (ndbmgmd) 2 2 4 5 14 NA 90 100
DB (ndbmtd) 3 3 8 8 12 27 90 100
SQL - Replication (ndbmysqld) 4 4 10 10 25 NA 90 100
SQL - Access (ndbappmysqld) 4 4 8 8 20 NA 90 100
Monitor Service (db-monitor-svc) 0.2 0.2 0.5 0.5 0 NA 90 100
db-connectivity-service 0 0 0 0 0 NA 0 0
Replication Service(db-replication-svc) 2 2 12 12 11 0.01 90 1000
Backup Manager Service (db-backup-manager-svc) 0.1 0.1 0.128 0.128 0 NA 90 100

cnDBTier Sidecars

The following table lists the sidecar resources for each cnDBTier service.

Table 2-9 Sidecars per cnDBTier Service

Service Name | CPU per Pod (Min/Max) | Memory per Pod in GB (Min/Max) | PVC Size in GB (PVC1/PVC2) | Ephemeral Storage (Min/Max in MB)
MGMT (ndbmgmd) 0 0 0 0 NA NA 0 0
DB (ndbmtd) 1 1 2 2 NA NA 90 2000
SQL - Replication (ndbmysqld) 0.1 0.1 0.256 0.256 NA NA 90 100
SQL - Access (ndbappmysqld) 0.1 0.1 0.256 0.256 NA NA 90 100
Monitor Service (db-monitor-svc) 0 0 0 0 NA NA 0 0
db-connectivity-service NA NA NA NA NA NA NA NA
Replication Service(db-replication-svc) 0.2 0.2 0.5 0.5 NA NA 90 100
Backup Manager Service (db-backup-manager-svc) 0 0 0 0 NA NA 0 0
2.1.3.7 OSO Resources
This section describes the OSO resources required to deploy SCP.

Table 2-10 OSO Resource Requirement

Microservice Name | CPU (Min/Max) | Memory in GB (Min/Max) | Replica
prom-alertmanager 0.5 0.5 1 1 2
prom-server 16 16 32 32 1
2.1.3.8 OCCM Resources

OCCM manages certificate creation, recreation, renewal, and so on for SCP. For information about OCCM resources required by SCP, see Oracle Communications Cloud Native Core, Certificate Management Installation, Upgrade, and Fault Recovery Guide.

2.2 Installation Sequence

This section describes preinstallation, installation, and postinstallation tasks for SCP.

You must perform these tasks after completing Prerequisites and in the same sequence as outlined in the following table.

Table 2-11 SCP Installation Sequence

Installation Sequence Applicable for CNE Deployment Applicable for OCI Deployment
Preinstallation Tasks Yes Yes
Installation Tasks Yes Yes
Postinstallation Tasks Yes Yes

2.2.1 Preinstallation Tasks

To install SCP, perform the tasks described in this section.

2.2.1.1 Downloading the SCP Package

To download the SCP package from My Oracle Support (MOS), perform the following procedure:

  1. Log in to My Oracle Support (MOS) using your login credentials.
  2. Click the Patches & Updates tab to locate the patch.
  3. In the Patch Search console, click Product or Family (Advanced).
  4. In the Product field, enter Oracle Communications Cloud Native Core - 5G.
  5. From the Release drop-down list, select Oracle Communications Cloud Native Core Service Communication Proxy <release_number>.

    Where, <release_number> indicates the required release number of SCP.

  6. Click Search.

    The Patch Advanced Search Results list appears.

  7. From the Patch Name column, select the required patch number.

    The Patch Details window appears.

  8. Click Download.

    The File Download window appears.

  9. Click the <p********>_<release_number>_Tekelec.zip file to download the release package.

    Where, <p********> is the MOS patch number and <release_number> is the release number of SCP.

2.2.1.2 Pushing the Images to Customer Docker Registry

SCP Images

The SCP deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.

The following table lists the Docker images of SCP:

Table 2-12 Images for SCP

Microservice | Image | Tag
<helm-release-name>-SCP-Worker ocscp-worker 24.3.0
<helm-release-name>-SCPC-Configuration ocscp-configuration 24.3.0
<helm-release-name>-SCPC-Notification ocscp-notification 24.3.0
<helm-release-name>-SCPC-Subscription ocscp-subscription 24.3.0
<helm-release-name>-SCPC-Audit ocscp-audit 24.3.0
<helm-release-name>-SCPC-Alternate-Resolution ocscp-alternate-resolution 24.3.0
<helm-release-name>-SCP-Cache ocscp-cache 24.3.0
<helm-release-name>-SCP-nrfproxy ocscp-nrfproxy 24.3.0
<helm-release-name>-SCP-nrfProxy-oauth ocscp-nrfproxy-oauth 24.3.0
<helm-release-name>-SCP-Mediation ocmed-nfmediation 24.3.0
<helm-release-name>-SCP-loadManager ocscp-load-manager 24.3.0

Note:

<helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is OCSCP, the SCPC-Subscription microservice name is "OCSCP-SCPC-Subscription".

To push the images to the registry:

  1. Navigate to the location where you want to install SCP, and then unzip the SCP release package (<p********>_<release_number>_Tekelec.zip) to retrieve the following CSAR package.

    The SCP package is as follows: <ReleaseName>_csar_<Releasenumber>.zip.

    Where,

    <ReleaseName> is a name that is used to track this installation instance.

    <Releasenumber> is the release number.

    For example, ocscp_csar_24_3_0_0_0.zip.
  2. Unzip the SCP package to retrieve the OCSCP image tar file: unzip <ReleaseName>_csar_<Releasenumber>.zip.

    For example, unzip ocscp_csar_24_3_0_0_0.zip

    The zip file consists of the following:

    
    ├── Definitions
    │   ├── ocscp_cne_compatibility.yaml
    │   └── ocscp.yaml
    ├── Files
    │   ├── ChangeLog.txt
    │   ├── Helm
    │   │   ├── ocscp-24.3.0.tgz 
    │   │   └── ocscp-network-policy-24.3.0.tgz
    │   ├── Licenses
    │   ├── nf-test-24.3.0.tar
    │   ├── ocdebug-tools-24.3.0.tar
    │   ├── ocmed-nfmediation-24.3.0.tar
    │   ├── ocscp-alternate-resolution-24.3.0.tar
    │   ├── ocscp-audit-24.3.0.tar
    │   ├── ocscp-cache-24.3.0.tar
    │   ├── ocscp-configuration-24.3.0.tar
    │   ├── ocscp-load-manager-24.3.0.tar
    │   ├── ocscp-notification-24.3.0.tar
    │   ├── ocscp-nrfproxy-24.3.0.tar
    │   ├── ocscp-subscription-24.3.0.tar
    │   ├── ocscp-nrfProxy-oauth-24.3.0.tar
    │   ├── ocscp-worker-24.3.0.tar
    │   ├── Oracle.cert
    │   └── Tests
    ├── ocscp.mf
    ├── Scripts
    │   ├── oci
    │   │   ├── ocscp_oci_alertrules_24.3.0.zip
    │   │   └── ocscp_oci_metric_dashboard_24.3.0.zip
    │   ├── ocscp_alerting_rules_promha.yaml
    │   ├── ocscp_alertrules.yaml
    │   ├── ocscp_configuration_openapi_24.3.0.json
    │   ├── ocscp_custom_values_24.3.0.yaml
    │   ├── ocscp_dbtier_24.3.0_custom_values_24.3.0.yaml
    │   ├── ocscp_metric_dashboard_24.3.0.json
    │   ├── ocscp_metric_dashboard_promha_24.3.0.json
    │   ├── ocscp_mib_24.3.0.mib
    │   ├── ocscp_mib_tc_24.3.0.mib
    │   ├── ocscp_network_policies_values_24.3.0.yaml
    │   ├── ocscp_servicemesh_config_values_24.3.0.yaml
    │   └── toplevel.mib
    └── TOSCA-Metadata
        └── TOSCA.meta
  3. Open the Files folder and run one of the following commands to load ocscp-images-24.3.0.tar:
    podman load --input /IMAGE_PATH/ocscp-images-<release_number>.tar
    docker load --input /IMAGE_PATH/ocscp-images-<release_number>.tar

    Example:

    docker load --input /IMAGE_PATH/ocscp-images-24.3.0.tar
  4. Run one of the following commands to verify that the images are loaded:
    podman images
    docker images

    Sample Output:

    docker.io/ocscp/ocscp-cache                           24.3.0   98fc90defb56        2 hours ago         725MB
    docker.io/ocscp/ocscp-nrfproxy-oauth                  24.3.0   0d92bfbf7c14        2 hours ago         720MB
    docker.io/ocscp/ocscp-configuration                   24.3.0   f23cddb3ec83        2 hours ago         725MB
    docker.io/ocscp/ocscp-worker                          24.3.0   16c8f423c3b9        2 hours ago         877MB
    docker.io/ocscp/ocscp-load-manager                    24.3.0   dab875c4179a        2 hours ago         724MB
    docker.io/ocscp/ocscp-nrfproxy                        24.3.0   85029929a670        2 hours ago         690MB
    docker.io/ocscp/ocscp-alternate-resolution            24.3.0   2c38646f8bd7        2 hours ago         695MB
    docker.io/ocscp/ocscp-audit                           24.3.0   039e25297115        2 hours ago         694MB
    docker.io/ocscp/ocscp-notification                    24.3.0   a21e6bed6177        2 hours ago         710MB
    docker.io/ocscp/ocmed-nfmediation                     24.3.0   772e01a41584        2 hours ago         710MB
  5. Compare the list of images shown in the output with the list of images in Table 2-12. If the lists do not match, reload the image tar file.
  6. Run one of the following commands to tag the images to the registry:
    podman tag <image-name>:<image-tag> <podman-repo>/<image-name>:<image-tag>
    docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    Where,
    • <image-name> is the image name.
    • <image-tag> is the image release number.
    • <docker-repo> is the Docker registry address, including the port number if the registry has one attached. This is the repository where the images are stored.
    • <podman-repo> is the Podman registry address, including the port number if the registry has one attached. This is the repository where the images are stored.
  7. Run one of the following commands to push the image to the registry:
    podman push <podman-repo>/<image-name>:<image-tag>
    docker push <docker-repo>/<image-name>:<image-tag>

Note:

It is recommended to configure the Docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the docker push command may fail.
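
Taken together, steps 6 and 7 can be scripted. The following sketch tags and pushes every SCP image listed in Table 2-12, assuming Docker and the CUSTOMER_REPO registry address from the earlier example:

for img in ocscp-worker ocscp-configuration ocscp-notification ocscp-subscription \
           ocscp-audit ocscp-alternate-resolution ocscp-cache ocscp-nrfproxy \
           ocscp-nrfproxy-oauth ocmed-nfmediation ocscp-load-manager; do
  docker tag "ocscp/${img}:24.3.0" "CUSTOMER_REPO/ocscp/${img}:24.3.0"   # tag for the customer registry
  docker push "CUSTOMER_REPO/ocscp/${img}:24.3.0"                        # push to the customer registry
done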
2.2.1.3 Pushing the SCP Images to OCI Docker Registry

SCP Images

The SCP deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.

The following table lists the Docker images of SCP:

Table 2-13 Images for SCP

Microservice | Image | Tag
<helm-release-name>-SCP-Worker ocscp-worker 24.3.0
<helm-release-name>-SCPC-Configuration ocscp-configuration 24.3.0
<helm-release-name>-SCPC-Notification ocscp-notification 24.3.0
<helm-release-name>-SCPC-Subscription ocscp-subscription 24.3.0
<helm-release-name>-SCPC-Audit ocscp-audit 24.3.0
<helm-release-name>-SCPC-Alternate-Resolution ocscp-alternate-resolution 24.3.0
<helm-release-name>-SCP-Cache ocscp-cache 24.3.0
<helm-release-name>-SCP-nrfproxy ocscp-nrfproxy 24.3.0
<helm-release-name>-SCP-nrfProxy-oauth ocscp-nrfproxy-oauth 24.3.0
<helm-release-name>-SCP-Mediation ocmed-nfmediation 24.3.0
<helm-release-name>-SCP-loadManager ocscp-load-manager 24.3.0

Note:

<helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is OCSCP, the SCPC-Subscription microservice name is "OCSCP-SCPC-Subscription".

To push the images to the registry:

  1. Navigate to the location where you want to install SCP, and then unzip the SCP release package (<p********>_<release_number>_Tekelec.zip) to retrieve the following CSAR package.

    The SCP package is as follows: <ReleaseName>_csar_<Releasenumber>.zip.

    Where,

    <ReleaseName> is a name that is used to track this installation instance.

    <Releasenumber> is the release number.

    For example, ocscp_csar_24_3_0_0_0.zip.
  2. Unzip the SCP package to retrieve the OCSCP image tar file: unzip <ReleaseName>_csar_<Releasenumber>.zip.

    For example, unzip ocscp_csar_24_3_0_0_0.zip

    The zip file consists of the following:

    
    ├── Definitions
    │   ├── ocscp_cne_compatibility.yaml
    │   └── ocscp.yaml
    ├── Files
    │   ├── ChangeLog.txt
    │   ├── Helm
    │   │   ├── ocscp-24.3.0.tgz 
    │   │   └── ocscp-network-policy-24.3.0.tgz
    │   ├── Licenses
    │   ├── nf-test-24.3.0.tar
    │   ├── ocdebug-tools-24.3.0.tar
    │   ├── ocmed-nfmediation-24.3.0.tar
    │   ├── ocscp-alternate-resolution-24.3.0.tar
    │   ├── ocscp-audit-24.3.0.tar
    │   ├── ocscp-cache-24.3.0.tar
    │   ├── ocscp-configuration-24.3.0.tar
    │   ├── ocscp-load-manager-24.3.0.tar
    │   ├── ocscp-notification-24.3.0.tar
    │   ├── ocscp-nrfproxy-24.3.0.tar
    │   ├── ocscp-subscription-24.3.0.tar
    │   ├── ocscp-nrfProxy-oauth-24.3.0.tar
    │   ├── ocscp-worker-24.3.0.tar
    │   ├── Oracle.cert
    │   └── Tests
    ├── ocscp.mf
    ├── Scripts
    │   ├── oci
    │   │   ├── ocscp_oci_alertrules_24.3.0.zip
    │   │   └── ocscp_oci_metric_dashboard_24.3.0.zip
    │   ├── ocscp_alerting_rules_promha.yaml
    │   ├── ocscp_alertrules.yaml
    │   ├── ocscp_configuration_openapi_24.3.0.json
    │   ├── ocscp_custom_values_24.3.0.yaml
    │   ├── ocscp_dbtier_24.3.0_custom_values_24.3.0.yaml
    │   ├── ocscp_metric_dashboard_24.3.0.json
    │   ├── ocscp_metric_dashboard_promha_24.3.0.json
    │   ├── ocscp_mib_24.3.0.mib
    │   ├── ocscp_mib_tc_24.3.0.mib
    │   ├── ocscp_network_policies_values_24.3.0.yaml
    │   ├── ocscp_servicemesh_config_values_24.3.0.yaml
    │   └── toplevel.mib
    └── TOSCA-Metadata
        └── TOSCA.meta
  3. Open the Files folder and run one of the following commands to load ocscp-images-24.3.0.tar:
    podman load --input /IMAGE_PATH/ocscp-images-<release_number>.tar
    docker load --input /IMAGE_PATH/ocscp-images-<release_number>.tar

    Example:

    docker load --input /IMAGE_PATH/ocscp-images-24.3.0.tar
  4. Run one of the following commands to verify that the images are loaded:
    podman images
    docker images

    Sample Output:

    docker.io/ocscp/ocscp-cache                           24.3.0   98fc90defb56        2 hours ago         725MB
    docker.io/ocscp/ocscp-nrfproxy-oauth                  24.3.0   0d92bfbf7c14        2 hours ago         720MB
    docker.io/ocscp/ocscp-configuration                   24.3.0   f23cddb3ec83        2 hours ago         725MB
    docker.io/ocscp/ocscp-worker                          24.3.0   16c8f423c3b9        2 hours ago         877MB
    docker.io/ocscp/ocscp-load-manager                    24.3.0   dab875c4179a        2 hours ago         724MB
    docker.io/ocscp/ocscp-nrfproxy                        24.3.0   85029929a670        2 hours ago         690MB
    docker.io/ocscp/ocscp-alternate-resolution            24.3.0   2c38646f8bd7        2 hours ago         695MB
    docker.io/ocscp/ocscp-audit                           24.3.0   039e25297115        2 hours ago         694MB
    docker.io/ocscp/ocscp-notification                    24.3.0   a21e6bed6177        2 hours ago         710MB
    docker.io/ocscp/ocmed-nfmediation                     24.3.0   772e01a41584        2 hours ago         710MB
  5. Compare the list of images shown in the output with the list of images in Table 2-13. If the lists do not match, reload the image tar file.
  6. Run the following commands to log in to the OCI registry:
    podman login -u <REGISTRY_USERNAME> -p <REGISTRY_PASSWORD> <REGISTRY_NAME>
    docker login -u <REGISTRY_USERNAME> -p <REGISTRY_PASSWORD> <REGISTRY_NAME>

    Where,

    • <REGISTRY_NAME> is <Region_Key>.ocir.io.
    • <REGISTRY_USERNAME> is <Object Storage Namespace>/<identity_domain>/email_id.
    • <REGISTRY_PASSWORD> is the Auth Token generated by the user.

      For more information about OCIR configuration and creating auth token, see Oracle Communications Cloud Native Core, OCI Deployment Guide.

    • <Object Storage Namespace> can be obtained from the OCI Console by navigating to Governance & Administration > Account Management > Tenancy Details > Object Storage Namespace.
    • <Identity Domain> is the domain of the user.
    • In OCI, each region is associated with a key. For more information, see Regions and Availability Domains.
  7. Run one of the following commands to tag the images to the registry:
    podman tag <image-name>:<image-tag> <podman-repo>/<image-name>:<image-tag>
    docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    Where,
    • <image-name> is the image name.
    • <image-tag> is the image release number.
    • <docker-repo> is the Docker registry address, including the port number if the registry has one attached. This is the repository where the images are stored.
    • <podman-repo> is the Podman registry address, including the port number if the registry has one attached. This is the repository where the images are stored.
  8. Run one of the following commands to push the image:
    podman push <oci-repo>/<image-name>:<image-tag>
    docker push <oci-repo>/<image-name>:<image-tag>

    Where, <oci-repo> is the OCI registry path.

  9. Make all the image repositories public by performing the following steps:

    Note:

    All the image repositories must be public.
    1. Log in to the OCI Console using your login credentials.
    2. From the left navigation pane, click Developer Services.
    3. On the preview pane, click Container Registry.
    4. From the Compartment drop-down list, select networkfunctions5G (root).
    5. From the Repositories and images drop-down list, select the required image and click Change to Public.

      The image details are displayed under the Repository information tab, and the image changes to public. For example, 24.3.0db/occne/cndbtier-mysqlndb-client (Private) changes to 24.3.0db/occne/cndbtier-mysqlndb-client (Public).

    6. Repeat substep 9e to make all image repositories public.
2.2.1.4 Verifying and Creating Namespace
To verify and create a namespace:

Note:

This is a mandatory procedure. Run it before proceeding with the installation. The namespace created or verified in this procedure is an input for the subsequent procedures.
  1. Run the following command to verify if the required namespace already exists in the system:
    kubectl get namespaces

    In the output of the above command, if the namespace exists, continue with Creating Service Account, Role, and Rolebinding.

  2. If the required namespace is unavailable, create the namespace by running the following command:
    kubectl create namespace <required namespace>

    Where, <required namespace> is the name of the namespace.

    For example, the following command creates the namespace, ocscp:

    kubectl create namespace ocscp
  3. Update the namespace for the required deployment Helm parameters as described in Configuration Parameters.

    Naming Convention for Namespaces

    The namespace should:

    • start and end with an alphanumeric character.
    • contain 63 characters or less.
    • contain only alphanumeric characters or '-'.

    Note:

    It is recommended to avoid using the prefix kube- when creating a namespace. The prefix is reserved for Kubernetes system namespaces.
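
    As a quick sanity check, the following sketch validates a proposed namespace name against these rules (Kubernetes enforces lowercase RFC 1123 labels of at most 63 characters):

    ns="ocscp"
    echo "$ns" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$' && echo "valid" || echo "invalid"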
2.2.1.5 Creating Service Account, Role, and Rolebinding

This section is optional. It describes how to manually create a service account, role, and rolebinding, and is required only when the customer needs to create a role, rolebinding, and service account manually before installing SCP.

Note:

The secrets should exist in the same namespace where SCP is deployed. This helps to bind the Kubernetes role with the given service account.
  1. Run the following command to create an SCP resource file:
    vi <ocscp-resource-file>

    Example:

    vi ocscp-resource-template.yaml

  2. Update the ocscp-resource-template.yaml file with release specific information:
    A sample template to update the ocscp-resource-template.yaml file is as follows:
    rules:
    - apiGroups: [""]
      resources: #resources under api group to be tested. Added for helm test. Helm test dependency are services,configmaps,pods,pvc,serviceaccounts
      - services
      - configmaps
      - pods
      - secrets
      - endpoints
      - persistentvolumeclaims
      - serviceaccounts
    
      verbs: ["get", "list", "watch", "delete"] # permissions of resources under api group, delete added to perform rolling restart of cache pods.
    - apiGroups:
      - "" # "" indicates the core API group
      resources: # Added for helm test. Helm test dependency 
      - services
      - configmaps
      - pods
      - secrets
      - endpoints
      - persistentvolumeclaims
      - serviceaccounts
    
      verbs: ["get", "list", "watch", "delete"] # permissions of resources under api group, delete added to perform rolling restart of cache pods. 
    #APIGroups that are added due to helm test dependency are apps, autoscaling, rbac.authorization and monitoring.coreos
    - apiGroups:
      - apps
      resources:
      - deployments
      verbs: # permissions so that resources under api group has
      - get
      - watch
      - list
    - apiGroups:
      - autoscaling
      resources: # Added for helm test. Helm test dependency
      - horizontalpodautoscalers
      verbs: # permissions so that resources under api group has
      - get
      - watch
      - list
    
    - apiGroups:
      - rbac.authorization.k8s.io
      resources: # Added for helm test. Helm test dependency
      - roles
      - rolebindings
      verbs:
      - get
      - watch
      - list
    - apiGroups:
      - monitoring.coreos.com
      resources: # Added for helm test. Helm test dependency
      - prometheusrules
      verbs:
      - get
      - watch
      - list
  3. Run the following command to create service account, role, and role binding:
    kubectl -n <ocscp-namespace> create -f ocscp-resource-template.yaml

    Example:

    kubectl -n ocscp create -f ocscp-resource-template.yaml

  4. Update the serviceAccountName parameter in the ocscp_values_24.3.0.yaml file with the value updated in the name field under kind: ServiceAccount. For more information about the serviceAccountName parameter, see Global Parameters.
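
For reference, the following sketch shows how the rules block from step 2 typically fits into a complete resource template, with hypothetical resource names (ocscp-serviceaccount, ocscp-role, ocscp-rolebinding) and the ocscp namespace; adjust the names and namespace to your deployment:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ocscp-serviceaccount   # value to use for serviceAccountName in step 4
  namespace: ocscp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ocscp-role
  namespace: ocscp
rules: []                      # paste the rules block from step 2 here
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ocscp-rolebinding
  namespace: ocscp
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ocscp-role
subjects:
- kind: ServiceAccount
  name: ocscp-serviceaccount
  namespace: ocscp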
2.2.1.6 Configuring Database for SCP
This section explains how database administrators can create users and database in a single and multisite deployment.

Note:

While performing a fresh installation, if SCP is already deployed, purge the deployment and remove the database and users that were used for the previous deployment. For uninstallation procedure, see Uninstalling SCP.
  1. Log in to the MySQL server and ensure that there is a privileged user (<privileged user>) with privileges similar to those of a root user.
  2. On each SQL node, run the following command to verify that the privileged user has the required permissions to allow connections from remote hosts:
    mysql> select host from mysql.user where User='<privileged username>';
    +------+
    | host |
    +------+
    | %    |
    +------+
    1 row in set (0.00 sec)
  3. If you do not see '%' in the output of the above query, run the following command to modify this field to allow connections from remote hosts:
    mysql> update mysql.user set host='%' where User='<privileged username>';
    Query OK, 0 rows affected (0.00 sec)
    Rows matched: 1  Changed: 0  Warnings: 0
    mysql> flush privileges;
    Query OK, 0 rows affected (0.06 sec)

    Note:

    Perform this step on each SQL node.
  4. To automatically create an application user, backup database, and application database, ensure that the createUser parameter in the ocscp_values.yaml file is set to true. To manually create an application user, application database, and backup database, set the createUser parameter to false in the ocscp_values.yaml file.

    By default, the createUser parameter value is set to true. For more information about this parameter, see Table 3-1.

  5. Run the following commands to create the application and backup databases:
    • For application database:
      CREATE DATABASE <scp_dbname>;

      Example:

      CREATE DATABASE ocscpdb;
    • For backup database:
      CREATE DATABASE <scp_backupdbname>;

      Example:

      CREATE DATABASE ocscpbackupdb;
  6. Run the following command to create an application user and assign privileges:
    CREATE USER '<username>'@'%' IDENTIFIED BY '<password>';
    GRANT SELECT, INSERT, DELETE, UPDATE ON <scp_dbname>.* TO <username>@'%';
    Where,
    • <scp_dbname> is the database name.
    • <username> is the database username.
    • <password> is the password of the database user.

    Example:

    CREATE USER 'scpApplicationUsr'@'%' IDENTIFIED BY 'scpApplicationPasswd'; GRANT SELECT, INSERT, DELETE, UPDATE ON ocscpdb.* TO scpApplicationUsr@'%';
  7. Run the following command to grant NDB_STORED_USER permission to the application user:
    GRANT NDB_STORED_USER ON *.* TO '<username>'@'%' WITH GRANT OPTION;

    Example:

    GRANT NDB_STORED_USER ON *.* TO 'scpApplicationUsr'@'%' WITH GRANT OPTION;

    Note:

    When performing a fresh SCP installation, any existing application database and backup database must be removed manually by running the following command:
    drop database <dbname>;
2.2.1.7 Configuring Kubernetes Secret for Accessing Database

This section explains how to configure Kubernetes secrets for accessing SCP database.

Note:

Do not use the same credentials in different Kubernetes secrets, and the passwords stored in the secrets must follow the password policy requirements as recommended in "Changing cnDBTier Passwords" in Oracle Communications Cloud Native Core Security Guide.
2.2.1.7.1 Creating and Updating Secret for Privileged Database User
This section explains how to create and update Kubernetes secret for privileged user to access the database.
  1. Run the following command to create Kubernetes secret:
    kubectl create secret generic <secret name> --from-literal=DB_USERNAME=<privileged user> --from-literal=DB_PASSWORD=<privileged user password> --from-literal=DB_NAME=<scp application db> --from-literal=RELEASE_DB_NAME=<scp backup db> -n <scp namespace>
    Where,
    • <secret name> is the secret name of the Privileged User.
    • <privileged user> is the username of the Privileged User.
    • <privileged user password> is the password of the Privileged User.
    • <scp application db> is the application database name.
    • <scp backup db> is the backup database name.
    • <scp namespace> is the namespace of SCP deployment.

    Note:

    Note down the command used during the creation of Kubernetes secret. This command is used for updating the secrets in the later releases.

    Example:

    kubectl create secret generic privilegeduser-secret --from-literal=DB_USERNAME=scpPrivilegedUsr --from-literal=DB_PASSWORD=scpPrivilegedPasswd --from-literal=DB_NAME=ocscpdb --from-literal=RELEASE_DB_NAME=ocscpbackupdb -n scpsvc

  2. Run the following command to verify the secret created:
    kubectl describe secret <secret name> -n <scp namespace>
    Where,
    • <secret name> is the secret name of the Privileged User.
    • <scp namespace> is the namespace of SCP deployment.

    Example:

    kubectl describe secret privilegeduser-secret -n scpsvc

    Sample output:

    Name:         privilegeduser-secret
    Namespace:    scpsvc
    Labels:       <none>
    Annotations:  <none>
    
    Type:  Opaque
    
    Data
    ====
    DB_NAME:          7 bytes
    DB_PASSWORD:      19 bytes
    DB_USERNAME:      16 bytes
    RELEASE_DB_NAME:  13 bytes
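
    To additionally confirm that a stored value decodes correctly, you can read back a single key (a sketch using the DB_USERNAME key from the create command above):

    kubectl get secret privilegeduser-secret -n scpsvc -o jsonpath='{.data.DB_USERNAME}' | base64 -d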
    
2.2.1.7.2 Creating and Updating Secret for Application Database User
This section explains how to create and update Kubernetes secret for application user to access the database.
  1. Run the following command to create a Kubernetes secret:
    kubectl create secret generic <secret name> --from-literal=DB_USERNAME=<application user> --from-literal=DB_PASSWORD=<application user password> --from-literal=DB_NAME=<scp application db> -n <scp namespace>
    Where,
    • <secret name> is the secret name of the Application User.
    • <application user> is the username of the Application User.
    • <application user password> is the password of the Application User.
    • <scp application db> is the application database name.
    • <scp namespace> is the namespace of SCP deployment.

    Note:

    Note down the command used during the creation of Kubernetes secret. This command is used for updating the secrets in the later releases.

    Example:

    kubectl create secret generic appuser-secret --from-literal=DB_USERNAME=scpApplicationUsr --from-literal=DB_PASSWORD=scpApplicationPasswd --from-literal=DB_NAME=ocscpdb -n scpsvc

  2. Run the following command to verify the secret created:
    kubectl describe secret <application user secret name> -n <scp namespace>
    Where,
    • <application user secret name> is the secret name of the application user.
    • <scp namespace> is the namespace of SCP deployment.

    Example:

    kubectl describe secret appuser-secret -n scpsvc

    Sample output:

    Name:         appuser-secret
    Namespace:    scpsvc
    Labels:       <none>
    Annotations:  <none>
    
    Type:  Opaque
    
    Data
    ====
    DB_NAME:      7 bytes
    DB_PASSWORD:  20 bytes
    DB_USERNAME:  17 bytes
    
2.2.1.8 Configuring SSL or TLS Certificates to Enable HTTPS

The Secure Sockets Layer (SSL) and Transport Layer Security (TLS) certificates must be configured in SCP to enable Hypertext Transfer Protocol Secure (HTTPS). These certificates must be stored in a Kubernetes secret, and the secret name must be provided in the sbiProxySslConfigurations section of the custom-values.yaml file.

Perform the following procedure to configure SSL or TLS certificates for enabling HTTPS in SCP. You must perform this procedure before:
  • a fresh installation of SCP.
  • an SCP upgrade.
You must have the following files to create Kubernetes secret for HTTPS:
  • ECDSA private key and CA signed certificate of SCP if initialAlgorithm is ES256
  • RSA private key and CA signed certificate of SCP if initialAlgorithm is RS256
  • TrustStore password file
  • KeyStore password file
  • CA Root file

Note:

  • The process to create the private keys, certificates, and passwords is at the operator's discretion.
  • The passwords for TrustStore and KeyStore must be stored in the respective password files.
  • Perform this procedure before enabling HTTPS in SCP.
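
One possible way to create the RSA private key and the password files is sketched below, assuming hypothetical file names that match the examples in this section; the CA signing step is specific to your environment:

openssl genrsa -out rsa_private_key_pkcs1.pem 2048                                           # RSA private key
openssl req -new -key rsa_private_key_pkcs1.pem -out ocscp.csr -subj "/CN=scp.example.com"   # CSR to be signed by your CA
echo 'myKeyStorePassword' > key.txt                                                          # KeyStore password file
echo 'myTrustStorePassword' > trust.txt                                                      # TrustStore password file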

You can create Kubernetes secret for enabling HTTPS in SCP using one of the following methods:

  • Managing Kubernetes secret manually
  • Managing Kubernetes secret through OCCM

Managing Kubernetes Secret Manually

  1. To create Kubernetes secret manually, run the following command:
    kubectl create secret generic <ocscp-secret-name> --from-file=<rsa private key file name> --from-file=<ssl truststore file name> --from-file=<ssl keystore file name> --from-file=<CA root bundle> --from-file=<ssl rsa certificate file name> -n <Namespace of OCSCP deployment>

    Note:

    Note down the command used during the creation of Kubernetes secret. This command is used for the subsequent updates.

    Example:

    kubectl create secret generic server-primary-ocscp-secret --from-file=server_rsa_private_key_pkcs1.pem --from-file=server_ocscp.cer --from-file=server_caroot.cer --from-file=trust.txt --from-file=key.txt -n $NAMESPACE
    kubectl create secret generic default-primary-ocscp-secret --from-file=client_rsa_private_key_pkcs1.pem --from-file=client_ocscp.cer --from-file=caroot.cer --from-file=trust.txt --from-file=key.txt -n $NAMESPACE

    Note:

    It is recommended to use the same Kubernetes secret name for the primary client and the primary server as mentioned in the example. If you change <ocscp-secret-name>, update the k8SecretName parameter under the sbiProxySslConfigurations section in the custom-values.yaml file. For more information about sbiProxySslConfigurations parameters, see Global Parameters.

  2. Run the following command to verify the Kubernetes secret created:
    kubectl describe secret <ocscp-secret-name> -n <Namespace of OCSCP deployment>

    Example:

    kubectl describe secret ocscp-secret -n ocscp

  3. Optional: Perform the following tasks to add, remove, or modify TLS or SSL certificates in Kubernetes secret:

    Note:

    You must have the certificates and files that you want to add or update in the Kubernetes secret.
    • To add a certificate, run the following command:
      TLS_CRT=$(base64 < "<certificate-name>" | tr -d '\n')
      kubectl patch secret <secret-name> -p "{\"data\":{\"<certificate-name>\":\"${TLS_CRT}\"}}"
      Where,
      • <certificate-name> is the certificate file name.
      • <secret-name> is the name of the Kubernetes secret, for example, ocscp-secret.

      Example:

      If you want to add a Certificate Authority (CA) Root from the caroot.cer file to the ocscp-secret, run the following command:

      TLS_CRT=$(base64 < "caroot.cer" | tr -d '\n')
      kubectl patch secret ocscp-secret  -p "{\"data\":{\"caroot.cer\":\"${TLS_CRT}\"}}" -n scpsvc

      Similarly, you can also add other certificates and keys to the ocscp-secret.

    • To update an existing certificate, run the following command:
      TLS_CRT=$(base64 < "<updated-certificate-name>" | tr -d '\n')
      kubectl patch secret <secret-name> -p "{\"data\":{\"<certificate-name>\":\"${TLS_CRT}\"}}"

      Where, <updated-certificate-name> is the certificate file that contains the updated content.

      Example:

      If you want to update the privatekey present in the rsa_private_key_pkcs1.pem file to the ocscp-secret, run the following command:

      TLS_CRT=$(base64 < "rsa_private_key_pkcs1.pem" | tr -d '\n') 
      kubectl patch secret ocscp-secret -p "{\"data\":{\"rsa_private_key_pkcs1.pem\":\"${TLS_CRT}\"}}" -n scpsvc

      Similarly, you can also update other certificates and keys to the ocscp-secret.

    • To remove an existing certificate, run the following command:
      kubectl patch secret <secret-name> -p "{\"data\":{\"<certificate-name>\":null}}"

      Where, <certificate-name> is the name of the certificate to be removed.

      The certificate must be removed when it expires or needs to be revoked.

      Example:

      To remove the CA Root from the ocscp-secret, run the following command:
      kubectl patch secret ocscp-secret  -p "{\"data\":{\"caroot.cer\":null}}" -n scpsvc
      

      Similarly, you can also remove other certificates and keys from the ocscp-secret.

The certificate update and renewal impacts are as follows:
  • Updating, adding, or deleting a certificate gracefully terminates all existing connections and establishes new connections for new requests.
  • When a certificate expires, no new connections are established for new requests; however, the existing connections remain active. After the certificate is renewed as described in Step 3, all existing connections are gracefully terminated, and new connections are established with the renewed certificate.

Managing Kubernetes Secret Through OCCM

To create the Kubernetes secret using OCCM, see "Managing Certificates" in Oracle Communications Cloud Native Core, Certificate Management User Guide, and then patch the Kubernetes secret created by OCCM to add keyStore password and trustStore password files by running the following commands:
  1. To patch the Kubernetes secret created with the keyStore password file:
    TLS_CRT=$(base64 < "key.txt" | tr -d '\n')
    kubectl patch secret server-primary-ocscp-secret-occm -n scpsvc -p "{\"data\":{\"key.txt\":\"${TLS_CRT}\"}}"

    Where, key.txt is the KeyStore password file that contains KeyStore password.

  2. To patch the Kubernetes secret created with the trustStore password file:
    TLS_CRT=$(base64 < "trust.txt" | tr -d '\n')
    kubectl patch secret server-primary-ocscp-secret-occm -n scpsvc -p "{\"data\":{\"trust.txt\":\"${TLS_CRT}\"}}"

    Where, trust.txt is the TrustStore password file that contains TrustStore password.

Note:

To monitor the lifecycle management of the certificates through OCCM, do not patch the Kubernetes secret manually to update the TLS certificate or keys. It must be done through the OCCM GUI.
2.2.1.9 Configuring SCP to Support Aspen Service Mesh

SCP leverages the Platform Service Mesh (for example, Aspen Service Mesh (ASM)) for all internal and external TLS communication by deploying a special sidecar proxy in each pod to intercept all network communications between microservices. The service mesh integration provides inter-NF communication and allows the API gateway to work with the service mesh.

Supported ASM versions: 1.14.6 and 1.11.8

For ASM installation and configuration details, see the official Aspen Service Mesh website.

Aspen Service Mesh (ASM) configurations are categorized as follows:

  • Control Plane: Involves adding labels or annotations to inject the sidecar. The control plane configurations are part of the NF Helm chart.
  • Data Plane: Helps in traffic management, such as handling NF call flows, by adding Service Entries (SE), Destination Rules (DR), Envoy Filters (EF), and other resource changes such as apiVersion changes between versions. This configuration is done manually, considering each NF requirement and the ASM deployment.

Data Plane Configuration

Data Plane configuration consists of following Custom Resource Definitions (CRDs):

  • Service Entry (SE)
  • Destination Rule (DR)
  • Envoy Filter (EF)

Note:

Use Helm charts to add or remove the CRDs that may be required due to ASM upgrades in order to configure features across different releases.

The data plane configuration is applicable in the following scenarios:

  • NF to NF Communication: During NF to NF communication where the sidecar is injected into both NFs, each NF must have the SE and DR corresponding to the other NF; otherwise, the sidecar rejects the communication. All egress communications of NFs must have a configured entry for SE and DR.

    Note:

    Configure the core DNS with the producer NF endpoint to enable sidecar access for establishing communication between clusters; see the sketch after this list.
  • Kube-api-server: A few NFs require access to the Kubernetes API server, and the ASM proxy (with mTLS enabled) may block this access. As per the F5 recommendation, the NF must add an SE for the Kubernetes API server in its own namespace.
  • Envoy Filters: Sidecars rewrite headers with their own default values, so the headers from back-end services are lost. Envoy Filters are required to pass the headers from back-end services unchanged.
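For example, on a cluster that uses CoreDNS, one way to make a producer NF endpoint resolvable is a hosts block in the CoreDNS Corefile. This is a sketch only; the FQDN and IP address below are placeholders:

kubectl -n kube-system edit configmap coredns

# Add inside the Corefile, before the forward plugin (placeholder values):
hosts {
    10.0.0.10 producer-nf.example.com
    fallthrough
}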

ASM Configuration File

A sample ocscp_servicemesh_config_values_24.3.0.yaml is available in the Scripts folder of ocscp_csar_24_3_0_0_0.zip. For downloading the file, see Customizing SCP. To view ASM EnvoyFilter configuration enhancements, see ASM Configuration.

Note:

To connect to vDBTier, create an SE and DR for the MySQL connectivity service if the database is in a different cluster. Otherwise, the sidecar rejects the request because vDBTier does not support sidecars.
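A minimal sketch of such an SE and DR follows, assuming a hypothetical MySQL connectivity FQDN and the default MySQL port 3306; adjust the TLS mode to match your vDBTier deployment:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mysql-se
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  hosts:
  - "mysql-connectivity-service.vdbtier.example.com"  # placeholder FQDN
  ports:
  - number: 3306
    name: tcp-mysql
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: NONE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: mysql-dr
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  host: mysql-connectivity-service.vdbtier.example.com
  trafficPolicy:
    tls:
      mode: DISABLE   # assumption: plain MySQL; use MUTUAL with certificate paths if mTLS is required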
2.2.1.9.1 Predeployment Configurations
This section explains the predeployment configuration procedure to install SCP with ASM support.

Note:

  • For information about ASM parameters, see ASM Resource. You can log in to ASM using ASPEN credentials.
  • On the ASM setup, create service entries for the respective namespaces.
  1. Run the following command to create a namespace for SCP deployment if not already created:
    kubectl create ns <scp-namespace-name>
  2. Run the following command to configure access to Kubernetes API Service and create a service entry in pod networking so that pods can access Kubernetes api-server:
    kubectl apply -f kube-api-se.yaml
    Sample kube-api-se.yaml file is as follows:
    # service_entry_kubernetes.yaml
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: kube-api-server
      namespace: <scp-namespace>
    spec:
      hosts:
      - kubernetes.default.svc.<domain>
      exportTo:
      - "."
      addresses:
      - <10.96.0.1> # cluster IP of kubernetes api server
      location: MESH_INTERNAL
      ports:
      - number: 443
        name: https
        protocol: HTTPS
      resolution: NONE
  3. Run the following command to set Network Repository Function (NRF) connectivity by creating ServiceEntry and DestinationRule and access external or public NRF service that is not part of Service Mesh Registry:
    kubectl apply -f nrf-se-dr.yaml
    Sample nrf-se-dr.yaml file is as follows:
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: nrf-dr
      namespace: <scp-namespace>
    spec:
      exportTo:
      - .
      host: ocnrf.3gpp.oracle.com
      trafficPolicy:
        tls:
          mode: MUTUAL
          clientCertificate: /etc/certs/cert-chain.pem
          privateKey: /etc/certs/key.pem
          caCertificates: /etc/certs/root-cert.pem
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: nrf-se
      namespace: <scp-namespace>
    spec:
      exportTo:
      - .
      hosts:
      - "ocnrf.3gpp.oracle.com"
      ports:
      - number: <port number of host in hosts section>
        name: http2
        protocol: HTTP2
      location: MESH_EXTERNAL
      resolution: NONE
  4. Run the following command to enable communication between internal Network Functions (NFs):

    Note:

    If Consumer and Producer NFs are not part of Service Mesh Registry, create Destination Rules and Service Entries in SCP namespace for all known call flows to enable inter NF communication.
    kubectl apply -f known-nf-se-dr.yaml
    Sample known-nf-se-dr.yaml file is as follows:
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: udm1-dr
      namespace: <scp-namespace>
    spec:
      exportTo:
      - .
      host: s24e65f98-bay190-rack38-udm-11.oracle-ocudm.cnc.us-east.oracle.com
      trafficPolicy:
        tls:
          mode: MUTUAL
          clientCertificate: /etc/certs/cert-chain.pem
          privateKey: /etc/certs/key.pem
          caCertificates: /etc/certs/root-cert.pem
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: udm1-se
      namespace: <scp-namespace>
    spec:
      exportTo:
      - .
      hosts:
      - "s24e65f98-bay190-rack38-udm-11.oracle-ocudm.cnc.us-east.oracle.com"
      ports:
      - number: 16016
        name: http2
        protocol: HTTP2
      location: MESH_EXTERNAL
      resolution: NONE

    Note:

    Create DestinationRule and ServiceEntry ASM resources for the following scenarios:
    • When an NF is registered with callback URIs or notification URIs which is not part of Service Mesh Registry
    • When a callbackReference is used in a known call flow and contains URI which is not part of Service Mesh Registry
    Run the following command:
    kubectl apply -f callback-uri-se-dr.yaml
    Sample callback-uri-se-dr.yaml file is as follows:
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: udm-callback-dr
      namespace: <scp-namespace>
    spec:
      exportTo:
      - .
      host: udm-notifications-processor-03.oracle-ocudm.cnc.us-east.oracle.com
      trafficPolicy:
        tls:
          mode: MUTUAL
          clientCertificate: /etc/certs/cert-chain.pem
          privateKey: /etc/certs/key.pem
          caCertificates: /etc/certs/root-cert.pem
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: udm-callback-se
      namespace: <scp-namespace>
    spec:
      exportTo:
      - .
      hosts:
      - "udm-notifications-processor-03.oracle-ocudm.cnc.us-east.oracle.com"
      ports:
      - number: 16016
        name: http2
        protocol: HTTP2
      location: MESH_EXTERNAL
      resolution: NONE
  5. To equally distribute ingress connections among the SCP worker threads, run the following command to apply an EnvoyFilter on the ASM sidecar:

    You must apply the EnvoyFilter to process inbound connections on the ASM sidecar when SCP is deployed with ASM.

    kubectl apply -f envoy_inbound.yaml

    Sample envoy_inbound.yaml file is as follows:

    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: inbound-envoyfilter
      namespace: <scp-namespace>
    spec:
      workloadSelector:
        labels:
          app: ocscp-scp-worker
      configPatches:
        - applyTo: LISTENER
          match:
            context: SIDECAR_INBOUND
            listener:
              portNumber: 15090
          patch:
            operation: MERGE
            value:
              connection_balance_config:
                exact_balance: {}
    

Note:

  • The ASM sidecar portNumber can be configured depending on the deployment. For example, 15090.
  • Do not configure any virtual service that applies connection or transaction timeout between various SCP services.
2.2.1.9.2 Deploying SCP with ASM

Deployment Configuration

You must complete the following deployment configuration before performing the Helm install.
  1. Run the following command to create namespace label for auto sidecar injection and to automatically add the sidecars in all pods spawned in SCP namespace:
    kubectl label ns <scp-namespace> istio-injection=enabled
  2. Create a Service Account for SCP and a role with appropriate security policies for sidecar proxies to work by referring to the sa-role-rolebinding.yaml file mentioned in the next step.
  3. Map the role and service accounts by creating a role binding as specified in the sample sa-role-rolebinding.yaml file:
    kubectl apply -f sa-role-rolebinding.yaml
    Sample sa-role-rolebinding.yaml file is as follows:
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: {{ template "noncluster.role.name" . }}
      namespace: {{ .Release.Namespace }}
      labels:
        {{- include "labels.allResources" . }}
      annotations:
        {{- include "annotations.allResources" . }}
    rules:
    - apiGroups: [""]
      resources:
      - pods
      - services
      - configmaps
      verbs: ["get", "list", "watch"]
    - apiGroups:
      - "" # "" indicates the core API group
      resources:
      - secrets
      - endpoints
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: {{ template "noncluster.rolebinding.name" . }}
      namespace: {{ .Release.Namespace }}
      labels:
        {{- include "labels.allResources" . }}
      annotations:
        {{- include "annotations.allResources" . }}
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: {{ template "noncluster.role.name" . }}
    subjects:
    - kind: ServiceAccount
      name: {{ template "noncluster.serviceaccount.name" . }}
      namespace: {{ .Release.Namespace }}
    ---
    apiVersion: v1
    kind: ServiceAccount
    {{- if .Values.imagePullSecrets }}
    imagePullSecrets:
    {{- range .Values.imagePullSecrets }}
      - name: {{ . }}
    {{- end }}
    {{- end }}
    metadata:
      name: {{ template "noncluster.serviceaccount.name" . }}
      namespace: {{ .Release.Namespace }}
      labels:
        {{- include "labels.allResources" . }}
      annotations:
        {{- include "annotations.allResources" . }}
    
  4. Update ocscp_custom_values_24.3.0.yaml with the following annotations:

    Note:

    Update other values such as DB details and service account as created in the previous steps.
    global:
      customExtension:
        allResources:
          annotations:
            sidecar.istio.io/inject: "false"
        lbDeployments:
          annotations:
            sidecar.istio.io/inject: "true"
            oracle.com/cnc: "true"
        nonlbDeployments:
          annotations:
            sidecar.istio.io/inject: "true"
            oracle.com/cnc: "true"
     
      scpServiceAccountName: <"ocscp-release-1-10-2-scp-serviceaccount">
      database:
        dbHost: <"scp-db-connectivity-service"> #DB Service FQDN
     
    scpc-configuration:
      service:
        type: ClusterIP
     
    scp-worker:
      tracingenable: false
      service:   
        type: ClusterIP
    

    Note:

    1. The sidecar.istio.io/inject: "false" annotation on all resources prevents sidecar injection on pods created by Helm jobs or hooks.
    2. The deployment overrides re-enable automatic sidecar injection on all deployments.
    3. The SCP-Worker override disables automatic sidecar injection for the SCP-Worker microservice because it is done manually in later stages. This override is required only for ASM release 1.4 or 1.5. If you are integrating with ASM release 1.6 or later, remove it.
    4. The oracle.com/cnc annotation is required for integration with OSO services.
    5. Jaeger tracing must be disabled because it may interfere with service mesh end-to-end traces.
  5. To set sidecar resources for each microservice in the ocscp_custom_values_24.3.0.yaml file under deployment.customExtension.annotations, configure the following ASM annotations with the resource values for the services (see the sketch after this procedure):

    SCP uses these annotations to assign the resources of the sidecar containers.

    • sidecar.istio.io/proxyMemory: Indicates the memory requested for the sidecar.
    • sidecar.istio.io/proxyMemoryLimit: Indicates the maximum memory limit for the sidecar.
    • sidecar.istio.io/proxyCPU: Indicates the CPU requested for the sidecar.
    • sidecar.istio.io/proxyCPULimit: Indicates the CPU limit for the sidecar.
  6. Define the concurrency setting for the sidecar container. The sidecar container concurrency value must be at least equal to the maximum number of vCPUs allocated to the sidecar container, as follows:
    proxy.istio.io/config: |-
              concurrency: 6
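Combining steps 5 and 6, the following is a hedged sketch of these annotations in the ocscp_custom_values_24.3.0.yaml file; the resource values are illustrative assumptions, not sizing recommendations:

deployment:
  customExtension:
    annotations:
      sidecar.istio.io/proxyCPU: "2"            # illustrative CPU request
      sidecar.istio.io/proxyCPULimit: "6"       # illustrative CPU limit
      sidecar.istio.io/proxyMemory: "4Gi"       # illustrative memory request
      sidecar.istio.io/proxyMemoryLimit: "8Gi"  # illustrative memory limit
      proxy.istio.io/config: |-
        concurrency: 6   # at least the maximum vCPUs allocated to the sidecar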
2.2.1.9.3 Deployment Configurations

ASM Configuration to Allow XFCC Header

An Envoy Filter must be added to allow the XFCC header on the ASM sidecar.

Sample file:

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: <name>
  namespace: <namespace>
spec:
  workloadSelector:
    labels:
      app.kubernetes.io/instance: <SCP Deployment name>
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          '@type': type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          forward_client_cert_details: ALWAYS_FORWARD_ONLY
          use_remote_address: true
          xff_num_trusted_hops: 1

Inter-NF Communication

For every new NF participating in new call flows, a DestinationRule and ServiceEntry must be created in the SCP namespace to enable communication, in the same way as done earlier for known call flows.

Run the following command to create DestinationRule and ServiceEntry:

kubectl apply -f new-nf-se-dr.yaml
Sample new-nf-se-dr.yaml file for DestinationRule and ServiceEntry:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: <unique DR name for NF>
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  host: <NF-public-FQDN>
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/cert-chain.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: <unique SE name for NF>
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  hosts:
  - <NF-public-FQDN>
  ports:
  - number: <NF-public-port>
    name: http2
    protocol: HTTP2
  location: MESH_EXTERNAL
  resolution: NONE

Operations Services Overlay Installation

For Operations Services Overlay (OSO) installation instructions, see Oracle Communications Cloud Native Core, Operations Services Overlay Installation Guide.

Note:

If OSO is deployed in the same namespace as SCP, ensure that all OSO deployments have the annotation to skip sidecar injection because OSO does not support the ASM sidecar proxy.
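For reference, the standard Istio annotation that disables sidecar injection on a pod template is shown below; the same annotation applies to the CNE common services note later in this section:

sidecar.istio.io/inject: "false"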

CNE Common Services for Logging

For information about CNE installation instructions, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

Note:

If CNE common services are deployed in the same namespace as SCP, ensure that all such deployments have the annotation to skip sidecar injection because CNE does not support the ASM sidecar proxy.
2.2.1.9.4 Deleting ASM

This section describes the steps to delete ASM.

To delete ASM, run the following command:

helm delete <helm-release-name> -n <namespace>

Where,

  • <helm-release-name> is the release name used by the Helm command. This release name must be the same as the release name used for ServiceMesh.
  • <namespace> is the deployment namespace used by the Helm command.

For example:

helm delete ocscp-servicemesh-config -n ocscp

To disable ASM, run the following command:

kubectl label --overwrite namespace ocscp istio-injection=disabled

To verify if ASM is disabled, run the following command:

kubectl get se,dr,peerauthentication,envoyfilter,vs -n ocscp
2.2.1.10 Configuring Network Policies for SCP
Kubernetes network policies allow you to define ingress or egress rules based on Kubernetes resources such as Pod, Namespace, IP, and Port. These rules are selected based on Kubernetes labels in the application. These network policies enforce access restrictions for all the applicable data flows except communication from Kubernetes node to pod for invoking container probe.

Note:

Configuring network policies is a recommended step. Based on the security requirements, network policies may or may not be configured.
For more information about this functionality, see https://kubernetes.io/docs/concepts/services-networking/network-policies/.

Note:

  • If traffic between pods is still blocked or allowed contrary to the applied network policies, check whether any existing policy affects the same pod or set of pods, because policies are cumulative and may alter the overall behavior.
  • If the default ports of services such as Prometheus, the database, or Jaeger are changed, or if the Ingress or Egress Gateway names are overridden, update them in the corresponding network policies.

Configuring Network Policies

Following are the various operations that can be performed for network policies:

2.2.1.10.1 Installing Network Policies

Prerequisite

Network policies are implemented by the network plug-in. To use network policies, you must use a networking solution that supports NetworkPolicy.

Note:

For a fresh installation, it is recommended to install Network Policies before installing SCP. However, if SCP is already installed, you can still install the Network Policies.
To install network policy:
  1. Open the ocscp-network-policy-custom-values-24.3.0.yaml file provided in the release package zip file. For downloading the file, see Downloading the SCP Package and Pushing the Images to Customer Docker Registry.
  2. The file is provided with the default network policies. If required, update the ocscp-network-policy-custom-values-24.3.0.yaml file. For more information on the parameters, see the Configuration Parameters for network policy parameter table.

    Note:

    To run ATS, uncomment the following policies from ocscp-network-policy-custom-values-24.3.0.yaml:
    • allow-ingress-traffic-to-notification
    • allow-egress-for-ats
    • allow-ingress-to-ats
    • To connect with CNC Console, update the following parameter in the allow-ingress-from-console network policy in the ocscp-network-policy-custom-values-24.3.0.yaml file:
      • kubernetes.io/metadata.name: <namespace in which CNCC is deployed>
    • In the allow-ingress-prometheus policy, the kubernetes.io/metadata.name parameter must contain the namespace in which Prometheus is deployed, and the app.kubernetes.io/name parameter value must match the label from the Prometheus pod.
  3. Run the following command to install the network policies:
    helm install <helm-release-name> <network-policy>/ -n <namespace> -f <custom-value-file>
    For example:
    helm install ocscp-network-policy ocscp-network-policy/ -n scpsvc -f ocscp-network-policy-custom-values-24.3.0.yaml
    • helm-release-name: ocscp-network-policy Helm release name.
    • custom-value-file: ocscp-network-policy custom value file.
    • namespace: SCP namespace.
    • network-policy: location where the network-policy package is stored.

Note:

  • Connections that were created before installing network policy and still persist are not impacted by the new network policy. Only the new connections would be impacted.
  • If you are using the ATS suite along with network policies, you must install SCP and ATS in the same namespace.
2.2.1.10.2 Upgrading Network Policies
To add, delete, or update network policy:
  1. Modify the ocscp-network-policy-custom-values-24.3.0.yaml file to update, add, and delete the network policies.
  2. Run the following command to upgrade the network policies:
    helm upgrade <helm-release-name> <network-policy>/ -n <namespace> -f <custom-value-file>
    For example:
    helm upgrade ocscp-network-policy ocscp-network-policy/ -n ocscp -f ocscp-network-policy-custom-values-24.3.0.yaml
    where,
    • helm-release-name: ocscp-network-policy Helm release name.
    • custom-value-file: ocscp-network-policy custom value file.
    • namespace: SCP namespace.
    • network-policy: location where the network-policy package is stored.
2.2.1.10.3 Verifying Network Policies
Run the following command to verify that the network policies are deployed successfully:
kubectl get networkpolicy -n <namespace>
For example:
kubectl get networkpolicy -n ocscp
where,
  • namespace: SCP namespace.
2.2.1.10.4 Uninstalling Network Policies
Run the following command to uninstall all the network policies:
helm uninstall <release_name> --namespace <namespace>
For example:
helm uninstall ocscp-network-policy --namespace scpsvc

Note:

While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.

2.2.1.10.5 Configuration Parameters for Network Policies

Table 2-14 Supported Kubernetes Resource for Configuring Network Policies

  • apiVersion: This is a mandatory parameter. Specifies the Kubernetes API version for access control. Note: This is the supported API version for network policy and is a read-only parameter. Data Type: string. Default Value: networking.k8s.io/v1
  • kind: This is a mandatory parameter. Specifies the REST resource that this object represents. Note: This is a read-only parameter. Data Type: string. Default Value: NetworkPolicy

Table 2-15 Configuration Parameters for Network Policy

  • metadata.name: This is a mandatory parameter. Specifies a unique name for the network policy. Details: {{ .metadata.name }}
  • spec.{}: This is a mandatory parameter. Consists of all the information needed to define a particular network policy in the given namespace. Note: SCP supports the spec parameters defined in "Supported Kubernetes Resource for Configuring Network Policies". Default Value: NA
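For orientation, a minimal NetworkPolicy that uses these fields is shown below. This is a generic sketch with placeholder names, not one of the packaged SCP policies:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-example        # hypothetical policy name
  namespace: scpsvc
spec:
  podSelector:
    matchLabels:
      app: ocscp-scp-worker          # pods the policy applies to
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: scpsvc   # allow traffic from this namespace
    ports:
    - protocol: TCP
      port: 8000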

For more information about this functionality, see "Network Policies" in Oracle Communications Cloud Native Core, Service Communication Proxy User Guide.

2.2.2 Installation Tasks

This section provides installation procedures to install Oracle Communications Cloud Native Core, Service Communication Proxy (SCP).

Before installing SCP, you must complete the Prerequisites and Preinstallation Tasks for both deployment methods.

2.2.2.1 Installing SCP Package
To install the SCP package:

Note:

For each SCP deployment in the network, use a unique SCP database name during the installation.
  1. Run the following command to access the extracted package:
    cd ocscp-<release_number>

    Example:

    cd ocscp-24.3.0

  2. Customize the ocscp_values_24.3.0.yaml file with the required deployment parameters. See the Customizing SCP chapter to customize the file. For more information about predeployment parameter configurations, see Preinstallation Tasks.

    Note:

    In case NRF configuration is required, see Configuring Network Repository Function Details.
  3. (Optional) If you want to install SCP with Aspen Service Mesh (ASM), perform the predeployment tasks as described in Configuring SCP to Support Aspen Service Mesh.
  4. Open the ocscp_values_24.3.0.yaml file and enable Release 16 with Model C Indirect 5G SBI Communication support by adding - rel16 manually under releaseVersion, and then uncomment scpProfileInfo.servingScope and scpProfileInfo.nfSetIdList parameters.

    Note:

    - rel16 is the default release version. For more information about Release 16, see 3GPP TS 23.501.

    Sample custom-values.yaml file output:

    global:
      domain: svc.cluster.local
      clusterDomain: cluster.local
      # If ingress gateway is available then set ingressGWAvailable flag to true
      # and provide ingress gateway IP and Port in publicSignalingIP and publicSignalingPort respectively.
      # If ingressGWAvailable flag is true then service type for scp-worker will be ClusterIP
      # otherwise it will be LoadBalancer.
      # We cannot set the ingressGWAvailable flag to true and at the same time the publicSignalingIPSpecified flag to false.
      # If you want to assign a load balancer IP, set the loadbalanceripenbled flag to true and
      # provide a value for the loadbalancerip flag;
      # otherwise, a random IP is assigned if loadbalanceripenbled is false
      # and the loadbalancerip flag is not used.
      adminport: 8001
      # enable or disable jaeger tracing
      tracingenable: &scpworkerTracingEnabled true
      enablejaegerbody: &scpworkerJaegerBodyEnabled false
      #Support for Release15 and Release16
      #At least one parameter should be present
      #values can be rel15 or rel16
      #Default is rel15
      releaseVersion:
      # When running R16, SCP should be deployed with rel16 enabled and rel15 commented. For R15 features, SCP should be deployed with rel15 enabled and rel16 commented. Both rel15 and rel16 cannot be enabled together.
      #- rel15
      - rel16

    Note:

    Release 15 deployment model is not supported from SCP 23.4.0.

  5. Run the following command to install SCP using charts from the Helm repository:
    helm install <release name> -f <custom_values.yaml> --namespace <namespace> <helm-repo>/chart_name --version <helm_version>
    If the charts are extracted locally, run the following command instead:
      helm install <release name> -f <custom_values.yaml> --namespace <namespace> <chartpath>

    Example:

    helm install ocscp -f <custom_values.yaml> --namespace scpsvc ocscp-helm-repo/ocscp --version <helm_version>

    Caution:

Do not exit from the helm install command manually. After you run the helm install command, it takes some time to install all the services. Do not press Ctrl+C to exit the command in the meantime; doing so leads to anomalous behavior.

2.2.3 Postinstallation Tasks

This section explains the postinstallation tasks for SCP.

2.2.3.1 Verifying SCP Installation
To verify the installation:
  1. Run the following command to verify the installation status:
    helm status <helm-release> --namespace <namespace>

    Where,

    • <helm-release> is the Helm release name of SCP.
    • <namespace> is the namespace of SCP deployment.

    Example:

    helm status ocscp --namespace ocscp

    The system displays the status as deployed if the deployment is successful.
  2. Run the following command to check whether all the services are deployed and active:
    kubectl -n <namespace_name> get services
  3. Run the following command to check whether all the pods are up and active:
    kubectl -n <namespace_name> get pods

    Example:

    kubectl get pods -n scpsvc
    NAME                                       READY   STATUS    RESTARTS   AGE
    ocscp-scp-cache-8444cd8f6d-gfsmx                      1/1     Running   0             2d23h
    ocscp-scp-load-manager-5664c7c8b4-rmrd2               1/1     Running   0             2d23h
    ocscp-scp-nrfproxy-5f44ff5f55-84f44                   1/1     Running   0             2d23h
    ocscp-scp-nrfproxy-oauth-5dbc78689d-mkhnt             1/1     Running   0             3m2s
    ocscp-scp-worker-6dc45b7cfc-2tfz5                     1/1     Running   0             28h
    ocscp-scpc-audit-6ff496fcc9-jkwj5                     1/1     Running   0             2d23h
    ocscp-scpc-configuration-5d66df6f4-6hdll              1/1     Running   0             2d23h
    ocscp-scpc-notification-7f49b85c99-c4p9v              1/1     Running   0             2d23h
    ocscp-scpc-subscription-6b785f77b4-9rtn2              1/1     Running   0             2d23h

    Note:

    If the installation is unsuccessful or the STATUS of all the pods is not in the Running state, perform the troubleshooting steps provided in Oracle Communications Cloud Native Core, Service Communication Proxy Troubleshooting Guide.
2.2.3.2 Performing Helm Test

This section describes how to perform a sanity check of the SCP installation through the Helm test. The pods to be checked are selected based on the namespace and label selector configured for the Helm test.

Helm test is a feature that validates the SCP installation and determines whether the NF is ready to accept traffic.

This test also checks that all the PVCs are in the Bound state under the release namespace and the configured label selector.

Note:

The Helm test can be performed only with Helm 3.
Perform the following Helm test procedure:
  1. Configure the Helm test configurations under the global parameters section of the ocscp_custom_values_24.3.0.yaml file as follows:
    
    nfName: ocscp
    image:
      name: nf_test
      tag: <string>
      pullPolicy: Always
    config:
      logLevel: WARN
      timeout: 180
    resources:
        - horizontalpodautoscalers/v1
        - deployments/v1
        - configmaps/v1
        - serviceaccounts/v1
        - roles/v1
        - services/v1
        - rolebindings/v1
    
    

    For more information, see Customizing SCP.

  2. Run the following Helm test command:
    helm test <release_name> -n <namespace>

    Example:

    helm test ocscp -n ocscp

    Sample Output:
    NAME: ocscp
    LAST DEPLOYED: Fri Sep 18 10:08:03 2020
    NAMESPACE: ocscp
    STATUS: deployed
    REVISION: 1
    TEST SUITE:     ocscp-test
    Last Started:   Fri Sep 18 10:41:25 2020
    Last Completed: Fri Sep 18 10:41:34 2020
    Phase:          Succeeded
    NOTES:
    # Copyright 2020 (C), Oracle and/or its affiliates. All rights reserved.

Note:

  • After running the helm test, the pod moves to a completed state. Hence, to remove the pod, run the following command:
    kubectl delete pod <releaseName>-test -n <namespace>
  • The Helm test only verifies whether all pods running in the namespace are in the Ready state, for example, 1/1 or 2/2. It does not check the deployment.
  • If the Helm test fails, see Oracle Communications Cloud Native Core, Service Communication Proxy Troubleshooting Guide.
2.2.3.3 Taking Backup of Important Files
Take a backup of the following files, which are required during fault recovery:
  1. The updated ocscp_custom_values_24.3.0.yaml file.
  2. The updated Helm charts.
  3. The secrets, certificates, and keys used during installation.

2.2.4 Configuring Network Repository Function Details

Network Repository Function (NRF) details must be defined in the values.yaml file during the SCP installation.

Note:

You can configure a primary NRF and an optional secondary NRF. NRFs must have the back-end DB synchronized.

An IPv4 or IPv6 address of NRF must be configured if NRF is outside the Kubernetes cluster. If NRF is inside the Kubernetes cluster, you can configure the FQDN. If both the IP address (IPv4 or IPv6) and the FQDN are provided, the IP address takes precedence over the FQDN.

Note:

  • Configure or remove the apiPrefix parameter based on whether NRF supports an API prefix.
  • Update the FQDN, IP address, and port of NRF to point to the NRF's FQDN or IP address and port. The primary NRF profile must always be set to the higher priority, that is, 0. Ensure that the primary and secondary profiles are not set to the same priority.
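As an illustration only, a primary and secondary NRF definition might look like the following sketch. The key names used here (nrfDetails, priority, and so on) are hypothetical placeholders; the authoritative parameter names are in the packaged values.yaml:

# Hypothetical keys for illustration; see the packaged values.yaml for the real structure
nrfDetails:
  - fqdn: ocnrf-primary.3gpp.oracle.com    # used when no IP address is set
    ipv4: 10.0.0.20                        # IP address takes precedence over FQDN
    port: 80
    priority: 0                            # primary profile: higher priority (0)
    apiPrefix: ""                          # remove if NRF does not support an API prefix
  - fqdn: ocnrf-secondary.3gpp.oracle.com
    port: 80
    priority: 1                            # secondary profile: lower priority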

2.2.5 Configuring SCP as HTTP Proxy

To route messages toward SCP, consumer NFs must use the <FQDN or IP Address>:<PORT of SCP-Worker> of scp-worker in their http_proxy/HTTP_PROXY configuration.
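For example, on a consumer NF host, the proxy environment variables can be set as follows. This is a sketch; the service FQDN and port shown are taken from the example in step 2 and are deployment-specific:

export http_proxy=http://scp-worker.scpsvc:8000
export HTTP_PROXY=http://scp-worker.scpsvc:8000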

Note:

Run the following commands from a host where the SCP worker and its FQDN can be accessed.
Perform the following procedure to configure SCP as HTTP proxy:
  1. To test successful deployment of SCP, run the following curl command:
    $ curl -v -X GET --url 'http://<FQDN:PORT of SCP-Worker>/nnrf-nfm/v1/subscriptions/' --header 'Host:<FQDN:PORT of NRF>'
  2. Fetch the current subscription list as a client from NRF by sending the request to NRF through SCP:

    Example:

    $ curl -v -X GET --url 'http://scp-worker.scpsvc:8000/nnrf-nfm/v1/subscriptions/' --header 'Host:ocnrf-ambassador.nrfsvc:80'

2.2.6 Configuring Multus Container Network Interface

Perform the following procedure to configure Multus Container Network Interface (CNI) after SCP installation is complete.

Note:

To verify whether this feature is enabled, see "Verifying the Availability of Multus Container Network Interface" in Oracle Communications Cloud Native Core, Service Communication Proxy Troubleshooting Guide.
  1. In the Kubernetes cluster, create a NetworkAttachmentDefinition (NAD) file.
    Example of a NAD file name: ipvlan-sig.yaml
    Sample NAD file:
    apiVersion: "k8s.cni.cncf.io/v1"
    
    kind: NetworkAttachmentDefinition
    
    metadata:
    
      name:ipvlan-siga
    
    spec:
    
      config: '{
    
          "cniVersion": "0.3.1",
    
          "type": "ipvlan",
    
          "master": "eth1",
    
          "mode": "l2",
    
          "ipam": {
    
            "type": "host-local",
    
            "subnet": "<signaling-subnet>",
    
            "rangeStart": "x.x.x.x.",
    
            "rangeEnd": "x.x.x.x",
    
            "routes": [
    
              { "dst": "<nsx_lb_network_address_AMF>"}   ,
    
               { "dst":“<nsx_lb_network_address_SMF>”}  ,
    
                { "dst":“<nsx_lb_network_address_NRF>”} ,
    
                { "dst":“<nsx_lb_network_address_UDR>”} ,  
    
                 { "dst":“<nsx_lb_network_address_CHF>”} ,  
    
                 ],
    
            "gateway": "x.x.x.x"
    
          }
    
        }'
  2. Run the following command to create a NetworkAttachmentDefinition custom resource for defining the Multus CNI network interfaces and their routing details:
    kubectl apply -f <NAD_file_name> -n <namespace>

    Example:

    kubectl apply -f ipvlan-sig.yaml -n scpsvc
  3. Add the following annotation to the deployment for which additional network interfaces need to be added by Multus CNI:
    k8s.v1.cni.cncf.io/networks: <network as defined in NAD>

    Where, <network as defined in NAD> indicates the network as defined in NetworkAttachmentDefinition.

    Sample values.yaml file:

    scp-worker:
      deployment:
        # Labels and Annotations that are specific to deployment are added here.
        customExtension:
          labels: {}
          annotations: {k8s.v1.cni.cncf.io/networks: '[{ "name": "ipvlan-siga"}]'}
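    To confirm that Multus attached the additional interface, you can inspect the pod. This check assumes that the pod image provides the ip utility and that the secondary interface uses the conventional Multus name net1:

    kubectl exec -n scpsvc <scp-worker-pod-name> -- ip addr show net1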

2.2.7 Adding and Removing IP-based Signaling Services

The following subsections describe how to add and remove IP-based Signaling Services as part of the Support for Multiple Signaling Service IPs feature.

2.2.7.1 Adding a Signaling Service

Perform the following procedure to add an IP-based signaling service.

  1. Open the ocscp_values.yaml file.
  2. In the serviceSpecifications section, add a new service under the workerServices list similar to the default service as follows:
    name: "<service_name>"
    #type:LoadBalancer
    networkNameEnabled: false
    networkName: "metallb.universe.tf/address-pool: signaling"
    publicSignalingIPSpecified: true
    publicSignalingIP: <IP address>
    publicSignalingIPv6Specified: false
    publicSignalingIPv6: <IP address>
    ipFamilyPolicy: *workerIpFamilyPolicy
    ipFamilies: *workerIpFamilies
    port:
    staticNodePortEnabled: false
    nodePort: <Port number>
    nodePortHttps: <Port number>
    customExtension:
    labels: {}
    annotations: {}
    Where,
    • <service_name> is the name of the service.
    • <IP address> is the signaling IP address of the service.
    • <Port number> is the port number of the service.

    Example:

    name: "scp-worker-net1"
    #type:LoadBalancer
    networkNameEnabled: false
    networkName: "metallb.universe.tf/address-pool: signaling"
    publicSignalingIPSpecified: false
    publicSignalingIP: 10.75.212.100
    publicSignalingIPv6Specified: true
    publicSignalingIPv6: 2001:db8:85a3::8a2e:370:7334
    ipFamilyPolicy: *workerIpFamilyPolicy
    ipFamilies: *workerIpFamilies
    port:
    staticNodePortEnabled: false
    nodePort: 30075
    nodePortHttps: 30076
    customExtension:
    labels: {}
    annotations: {}
  3. Optional: To add a preferred IP address for the NRF callback, in the global section, under the scpSubscriptionInfo parameter, add the IP address of the new service to ip.

    You can provide either IPv4 or IPv6 address.

    Example:

    scpSubscriptionInfo:
      ip: "10.75.212.100" # metallb or masterIp; this IP is obtained from the metallb pool. Either an IPv4 or IPv6 address can be provided.
      # Scheme to use in callbackURI, either http or https
      scheme: "http"
    
  4. Save the file.
  5. Run the following Helm upgrade command and wait until the upgrade is complete:

    Note:

    It is recommended to perform the Helm upgrade on the same version of SCP that contains the newly added IP-based signaling service.
    helm upgrade <release_name> <helm_repo/helm_chart> --version <chart_version> -f <ocscp_values.yaml> --namespace <namespace-name>

    Where,

    • <release_name> is the release name used by the Helm command.
    • <helm_repo/helm_chart> is the location of the Helm chart extracted from the target ocscp_csar_24_3_0_0_0.zip file.
    • <chart_version> is the version of the Helm chart extracted from the ocscp_csar_24_3_0_0_0.zip file.
    • <ocscp_values.yaml> is the SCP customized values.yaml file.
    • <namespace-name> is the SCP namespace in which the SCP release is deployed.
    Example:
    helm upgrade ocscp ocscp-helm-repo/ocscp --version 24.3.0 -f ocscp_values.yaml --namespace ocscp
    
  6. Run the following command to check whether the service is available:
    kubectl get svc -n <namespace>
2.2.7.2 Removing a Signaling Service

Perform the following procedure to remove an IP-based signaling service.

Before removing any IP address, ensure that no traffic is routed to that IP. For more information, you can refer to SCP dashboard metrics in the Oracle Communications Cloud Native Core, Service Communication Proxy User Guide.
  1. Open the ocscp_values.yaml file.
  2. Locate the publicSignalingIP IP of the signaling service that you want to remove and set the corresponding publicSignalingIPSpecified parameter to false.
    Example:
    publicSignalingIPSpecified: false
    publicSignalingIP: 10.75.212.88
  3. Optional: If the service IP being removed is already part of scpSubscriptionInfo, then do one of the following:
    • To update the alternate IP: In the global section, under the scpSubscriptionInfo parameter, update the ip parameter with the preferred service IP address.
    • To remove the alternate IP: In the global section, under the scpSubscriptionInfo parameter, remove the IP address.
  4. Save the file.
  5. Run the following Helm upgrade command and wait until the upgrade is complete:

    Note:

    It is recommended to perform the Helm upgrade on the same version of SCP that already contains IP-based signaling service.
    helm upgrade <release_name> <helm_repo/helm_chart> --version <chart_version> -f <ocscp_values.yaml> --namespace <namespace-name>

    Where,

    • <release_name> is the release name used by the Helm command.
    • <helm_repo/helm_chart> is the location of the Helm chart extracted from the target ocscp_csar_24_3_0_0_0.zip file.
    • <chart_version> is the version of the Helm chart extracted from the ocscp_csar_24_3_0_0_0.zip file.
    • <ocscp_values.yaml> is the SCP customized values.yaml file.
    • <namespace-name> is the SCP namespace in which the SCP release is deployed.
    Example:
    helm upgrade ocscp ocscp-helm-repo/ocscp --version 24.3.0 -f ocscp_values.yaml --namespace ocscp
    
  6. Perform one of the following steps to clean up the deleted services:
    • To clean up Kubernetes services manually, run the following command:
      kubectl delete svc <svc_name> --namespace <namespace-name>
    • To clean up Kubernetes services through a Helm upgrade, remove all the parameters of the removed IP-based service from the serviceSpecifications section of the ocscp_values.yaml file, and then perform the Helm upgrade as described in Step 5.

    Remove the following sample parameters manually from serviceSpecifications:

    name: "<service name>"
    #type:LoadBalancer
    networkNameEnabled: false
    networkName: "metallb.universe.tf/address-pool: signaling"
    publicSignalingIPSpecified: false
    publicSignalingIP: 10.75.212.88
    port:
    staticNodePortEnabled: true
    nodePort: 30075
    customExtension:
    labels: {}
    annotations: {}