2 Installing SCP
This chapter provides information about installing SCP in a cloud native environment, including the prerequisites and downloading the deployment package.
Note:
SCP supports fresh installation, and it can also be upgraded from 24.1.x and 24.2.x. For more information about how to upgrade SCP, see Upgrading SCP.
SCP installation is supported on the following platforms:
- Oracle Communications Cloud Native Core, Cloud Native Environment (CNE): For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
- Oracle Cloud Infrastructure (OCI) using OCI Adaptor: For more information about OCI, see Oracle Communications Cloud Native Core, OCI Deployment Guide.
SCP installation comprises prerequisites, preinstallation, installation, and postinstallation tasks. You must perform the SCP installation tasks in the same sequence as outlined in the following table:
Table 2-1 SCP Installation Tasks
Installation Sequence | Applicable for CNE Deployment | Applicable for OCI Deployment |
---|---|---|
Prerequisites | Yes | Yes |
Software Requirements | Yes | Yes |
Environment Setup Requirements | Yes | Yes |
Resource Requirements | Yes | Yes |
Preinstallation Tasks | Yes | Yes |
Downloading the SCP Package | Yes | Yes |
Pushing the Images to Customer Docker Registry | Yes | No |
Pushing the SCP Images to OCI Docker Registry | No | Yes |
Verifying and Creating Namespace | Yes | Yes |
Creating Service Account, Role, and Rolebinding | Yes | Yes |
Configuring Database for SCP | Yes | Yes |
Configuring Kubernetes Secret for Accessing Database | Yes | Yes |
Configuring SSL or TLS Certificates to Enable HTTPS | Yes | Yes |
Configuring SCP to Support Aspen Service Mesh | Yes | Yes |
Configuring Network Policies for SCP | Yes | Yes |
Installation Tasks | Yes | Yes |
Installing SCP Package | Yes | Yes |
Postinstallation Tasks | Yes | Yes |
2.1 Prerequisites
Before installing and configuring SCP, ensure that the following prerequisites are met.
2.1.1 Software Requirements
This section lists the software that must be installed before installing SCP:
Table 2-2 Preinstalled Software
Software | Tested Version (SCP 25.1.1xx) | Tested Version (SCP 24.3.x) | Tested Version (SCP 24.2.x) | Software Requirement | Usage Description |
---|---|---|---|---|---|
Kubernetes | 1.31.x | 1.30.x | 1.29.x | Mandatory | Kubernetes orchestrates scalable, automated network function (NF) deployments for high availability and efficient resource utilization. Impact: Without orchestration capabilities, deploying and managing network functions (NFs) can become complex, leading to inefficient resource utilization and potential downtime. |
Helm | 3.16.2 | 3.15.2 | 3.13.2 | Mandatory | Helm, a package manager, simplifies deploying and managing network functions (NFs) on Kubernetes with reusable, versioned charts for easy automation and scaling. Impact: Pre-installation is required. Not using this capability may result in error-prone and time-consuming management of NF versions and configurations, impacting deployment consistency. |
Podman | 4.9.4 | 4.6.1 | 4.6.1 | Mandatory | Podman manages and runs containerized network functions (NFs) without requiring a daemon, offering flexibility and compatibility with Kubernetes. Impact: Pre-installation is required, as Podman is part of Oracle Linux. Without efficient container management, the development and deployment of NFs could become cumbersome, impacting agility. |
To check the versions of the preinstalled software in the cloud native environment, run the following commands:
kubectl version
helm version
podman version
The following software is available if SCP is deployed in CNE. If you are deploying SCP in any other cloud native environment, install this additional software before installing SCP.
To check the installed software, run the following command:
helm ls -A
The list of additional software items, along with the supported versions and usage, is provided in the following table:
Table 2-3 Additional Software
Software | Tested Version (SCP 25.1.1xx) | Tested Version (SCP 24.3.x) | Tested Version (SCP 24.2.x) | Software Requirement | Usage Description |
---|---|---|---|---|---|
Oracle OpenSearch | 2.11.0 | 2.11.0 | 2.11.0 | Recommended | OpenSearch provides scalable search and analytics for 5G network functions (NFs), enabling efficient data exploration and visualization. Impact: A lack of a robust analytics solution could lead to challenges in identifying performance issues and optimizing NF operations, ultimately affecting overall service quality. |
OpenSearch Dashboard | 2.11.0 | 2.11.0 | 2.11.0 | Recommended | OpenSearch Dashboard visualizes and analyzes data for 5G network functions (NFs), offering interactive insights and custom reporting. Impact: Without visualization capabilities, understanding NF performance metrics and trends would be difficult, limiting informed decision-making. |
Fluentd OpenSearch | 1.17.1 | 1.16.2 | 1.16.2 | Recommended | Fluentd is an open-source data collector that streamlines data collection and consumption, allowing for improved data utilization and comprehension. Impact: Not utilizing centralized logging can hinder the ability to track network function (NF) activity and troubleshoot issues effectively, complicating maintenance and support. |
Kyverno | 1.12.5 | 1.12.5 | 1.9.0 | Recommended | Kyverno is a Kubernetes policy engine that helps manage and enforce policies for resource configurations within a Kubernetes cluster. Impact: Failing to implement policy enforcement could lead to misconfigurations, resulting in security risks and instability in network function (NF) operations, affecting reliability. |
Grafana | 9.5.3 | 9.5.3 | 9.5.3 | Recommended | Grafana is a popular open-source platform for monitoring and observability. It provides a user-friendly interface for creating and viewing dashboards based on various data sources. Impact: Without visualization tools, interpreting complex metrics and gaining insights into network function (NF) performance would be cumbersome, hindering effective management. |
Prometheus | 2.52.0 | 2.52.0 | 2.51.1 | Recommended | Prometheus is a popular open-source monitoring and alerting toolkit. It collects and stores metrics from various sources and allows for alerting and querying. Impact: Not employing this monitoring solution could result in a lack of visibility into network function (NF) performance, making it difficult to troubleshoot issues and optimize resource usage. |
Jaeger | 1.60.0 | 1.60.0 | 1.52.0 | Recommended | Jaeger provides distributed tracing for 5G network functions (NFs), enabling performance monitoring and troubleshooting across microservices. Impact: Not utilizing distributed tracing may hinder the ability to diagnose performance bottlenecks, making it challenging to optimize NF interactions and improve the user experience. |
MetalLB | 0.14.4 | 0.14.4 | 0.14.4 | Recommended | MetalLB provides load balancing and external IP management for 5G network functions (NFs) in Kubernetes environments and is used as the load balancing solution in CNE. Impact: Load balancing is mandatory for the solution to work. Without load balancing, traffic distribution among NFs may be inefficient, leading to potential bottlenecks and service degradation. |
Note:
On OCI, the software mentioned above is not required because the OCI Observability and Management service is used for logging, metrics, alerts, and KPIs. For more information, see Oracle Communications Cloud Native Core, OCI Deployment Guide.
2.1.2 Environment Setup Requirements
2.1.2.1 Client Machine Requirement
This section describes the requirements for the client machine, that is, the machine used by the user to run deployment commands.
The client machine should have:
- Helm repository configured.
- network access to the Helm repository and Docker image repository.
- network access to the Kubernetes cluster.
- required environment settings to run kubectl, docker, and podman commands. The environment should have privileges to create a namespace in the Kubernetes cluster.
- Helm client installed with the push plugin. Configure the environment so that the helm install command deploys the software in the Kubernetes cluster.
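To confirm the client machine meets these requirements, a few standard checks can be run; this is a suggested sanity check, not a required step:
helm repo list                   # confirms a Helm repository is configured
helm plugin list                 # confirms the push plugin is installed
kubectl config current-context   # confirms access to the Kubernetes cluster
kubectl auth can-i create namespaces   # confirms privileges to create a namespace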
2.1.2.2 Network Access Requirements
The Kubernetes cluster hosts must have network access to the following repositories:
- Local Helm repository: It contains SCP Helm charts.
To check if the Kubernetes cluster hosts can access the local Helm repository, run the following command:
helm repo update
- Local Docker image repository: It contains SCP Docker images.
To check if the Kubernetes cluster hosts can access the local Docker image repository, pull any image with an image-tag using either of the following commands:
docker pull <docker-repo>/<image-name>:<image-tag>
podman pull <podman-repo>/<image-name>:<image-tag>
Where,
- <docker-repo> is the IP address or host name of the Docker repository.
- <podman-repo> is the IP address or host name of the Podman repository.
- <image-name> is the Docker image name.
- <image-tag> is the tag assigned to the Docker image used for the SCP pod.
For example:
docker pull CUSTOMER_REPO/oc-app-info:24.3.0
podman pull occne-repo-host:5000/ocscp/oc-app-info:24.3.0
Note:
Run kubectl and helm commands on a system based on the deployment infrastructure. For example, they can be run on a client machine such as a VM, server, or local desktop.
2.1.2.3 Server or Space Requirement
For information about server or space requirements, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
2.1.2.4 CNE Requirement
This section is applicable only if you are installing SCP on Cloud Native Environment (CNE).
SCP supports CNE 24.3.x, 24.2.x, and 24.1.x.
To check the CNE version, run the following command:
echo $OCCNE_VERSION
Note:
If Istio or Aspen Service Mesh (ASM) is installed on CNE, run the following command to patch the "disallow-capabilities" clusterpolicy of CNE and exclude the NF namespace before the NF deployment:
kubectl patch clusterpolicy disallow-capabilities --type "json" -p '[{"op":"add","path":"/spec/rules/0/exclude/any/0/resources/namespaces/-","value":"<namespace of NF>"}]'
Where, <namespace of NF> is the namespace of SCP, cnDBTier, or Oracle Communications Cloud Native Configuration Console (CNC Console).
For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
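After applying the patch, you can optionally confirm that the namespace was appended to the exclusion list. This verification command is a suggestion and assumes the Kyverno cluster policy shown above:
kubectl get clusterpolicy disallow-capabilities -o jsonpath='{.spec.rules[0].exclude.any[0].resources.namespaces}'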
2.1.2.5 OCI Requirements
SCP can be deployed in OCI. While deploying SCP in OCI, the user must use the Operator instance/VM instead of Bastion Host.
For more information about OCI Adaptor, see Oracle Communications Cloud Native Core, OCI Adaptor User Guide.
2.1.2.6 cnDBTier Requirements
Note:
Obtain the values of the cnDBTier parameters listed in cnDBTier Customization Parameters from the delivered ocscp_dbtier_custom_values.yaml file, and use those values in the new ocscp_dbtier_custom_values.yaml file if the parameter values in the new file differ from the delivered file.
SCP supports cnDBTier 24.3.x, 24.2.x, and 24.1.x. cnDBTier must be configured and running before installing SCP.
Note:
In a georedundant deployment, each site should have a dedicated cnDBTier.
To install cnDBTier 24.3.x with the resources recommended for SCP, customize the ocscp_dbtier_24.3.0_custom_values_24.3.0.yaml file in the ocscp_csar_24_3_0_0_0.zip package with the required deployment parameters. cnDBTier parameters vary depending on whether the deployment spans a single site, two sites, or three sites. For more information, see cnDBTier Customization Parameters.
Note:
If you already have an older version of cnDBTier, upgrade cnDBTier with the resources recommended for SCP by customizing the ocscp_dbtier_24.3.0_custom_values_24.3.0.yaml file in the ocscp_csar_24_3_0_0_0.zip package with the required deployment parameters. Use the same PVC size as in the previous release. For more information, see cnDBTier Customization Parameters.
For more information about cnDBTier installation, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
2.1.2.7 OCCM Requirements
To support automated certificate lifecycle management, SCP integrates with Oracle Communications Cloud Native Core, Certificate Management (OCCM) in compliance with 3GPP security recommendations. For more information about OCCM, see the following guides:
- Oracle Communications Cloud Native Core, Certificate Management Installation, Upgrade, and Fault Recovery Guide
- Oracle Communications Cloud Native Core, Certificate Management User Guide
2.1.2.8 OSO Requirement
SCP supports Operations Services Overlay (OSO) 24.3.x, 24.2.x, and 24.1.x for common operations services (Prometheus and components such as Alertmanager and Pushgateway) on a Kubernetes cluster that does not have these common services. For more information about OSO installation, see Oracle Communications Cloud Native Core, Operations Services Overlay Installation Guide.
2.1.3 Resource Requirements
This section lists the resource requirements to install and run SCP.
Note:
The performance and capacity of the SCP system may vary based on the call model, feature or interface configuration, network conditions, and underlying CNE and hardware environment.
2.1.3.1 SCP Services
The following table lists the resource requirements for SCP services:
Table 2-4 SCP Services
Service Name | Pod Replica (Min) | Pod Replica (Max) | vCPU/Pod (Min) | vCPU/Pod (Max) | Memory in Gi/Pod (Min) | Memory in Gi/Pod (Max) | Ephemeral Storage Per Pod: Min in Mi (If Enabled) | Ephemeral Storage Per Pod: Max in Gi (If Enabled) |
---|---|---|---|---|---|---|---|---|
Helm test | 1 | 1 | 3 | 3 | 3 | 3 | 70 | 1 |
Helm Hook | 1 | 1 | 3 | 3 | 3 | 3 | 70 | 1 |
<helm-release-name>-scpc-subscription | 1 | 1 | 2 | 2 | 2 | 2 | 70 | 1 |
<helm-release-name>-scpc-notification | 1 | 1 | 4 | 4 | 4 | 4 | 70 | 1 |
<helm-release-name>-scpc-audit | 1 | 1 | 3 | 3 | 4 | 4 | 70 | 1 |
<helm-release-name>-scpc-configuration | 1 | 1 | 2 | 2 | 2 | 2 | 70 | 1 |
<helm-release-name>-scpc-alternate-resolution | 1 | 1 | 2 | 2 | 2 | 2 | 70 | 1 |
<helm-release-name>-scp-cache | 3 | 3 | 8 | 8 | 8 | 8 | 70 | 1 |
<helm-release-name>-scp-nrfproxy | 2 | 16 | 8 | 8 | 8 | 8 | 70 | 1 |
<helm-release-name>-scp-load-manager | 2 | 3 | 8 | 8 | 8 | 8 | 70 | 1 |
<helm-release-name>-scp-oauth-nrfproxy | 2 | 16 | 8 | 8 | 8 | 8 | 70 | |
<helm-release-name>-scp-worker(profile 1) | 2 | 32 | 4 | 4 | 8 | 8 | 70 | 1 |
<helm-release-name>-scp-worker(profile 2) | 2 | 64 | 8 | 8 | 12 | 12 | 70 | 1 |
<helm-release-name>-scp-mediation | 2 | 16 | 8 | 8 | 8 | 8 | 70 | 1 |
<helm-release-name>-scp-mediation test | 1 | 1 | 8 | 8 | 8 | 8 | 70 | 1 |
<helm-release-name>-scp-worker(profile 3) | 2 | 64 | 12 | 12 | 16 | 16 | 70 | 1 |
Note:
- To go beyond 60000 Transactions Per Second (TPS), you must deploy SCP with scp-worker configured with Profile 2.
- <helm-release-name> will be prefixed in each microservice name. For example, if the Helm release name is OCSCP, then the SCPC-Subscription microservice name will be "OCSCP-SCPC-Subscription".
- Helm Hook Jobs: These are pre and post jobs that are invoked during installation, upgrade, rollback, and uninstallation of the deployment. They are short-lived jobs that terminate after the deployment operation completes.
- Helm Test Job: This job is run on demand when the Helm test command is initiated. It runs the Helm test and stops after completion. Helm test jobs are not part of the active deployment resources and are considered only during Helm test procedures.
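For reference, the Helm test is invoked on demand against a deployed release. A minimal example, assuming the illustrative release name ocscp deployed in namespace ocscp:
helm test ocscp -n ocscp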
2.1.3.2 Upgrade
The following table lists the resource requirements for upgrading SCP.
Table 2-5 Upgrade
Service Name | Pod Replica (Min) | Pod Replica (Max) | vCPU/Pod (Min) | vCPU/Pod (Max) | Memory in Gi/Pod (Min) | Memory in Gi/Pod (Max) | Ephemeral Storage Per Pod: Min in Mi (If Enabled) | Ephemeral Storage Per Pod: Max in Gi (If Enabled) |
---|---|---|---|---|---|---|---|---|
Helm test | 0 | 0 | 0 | 0 | 0 | 0 | 70 | 1 |
Helm Hook | 0 | 0 | 0 | 0 | 0 | 0 | 70 | 1 |
<helm-release-name>-scpc-subscription | 1 | 1 | 1 | 1 | 1 | 1 | 70 | 1 |
<helm-release-name>-scpc-notification | 1 | 1 | 4 | 4 | 4 | 4 | 70 | 1 |
<helm-release-name>-scpc-audit | 1 | 1 | 3 | 3 | 4 | 4 | 70 | 1 |
<helm-release-name>-scpc-configuration | 1 | 1 | 2 | 2 | 2 | 2 | 70 | 1 |
<helm-release-name>-scpc-alternate-resolution | 1 | 1 | 2 | 2 | 2 | 2 | 70 | 1 |
<helm-release-name>-scp-cache | 1 | 1 | 8 | 8 | 8 | 8 | 70 | 1 |
<helm-release-name>-scp-nrfproxy | 1 | 4 | 8 | 8 | 8 | 8 | 70 | 1 |
<helm-release-name>-scp-load-manager | 1 | 1 | 8 | 8 | 8 | 8 | 70 | 1 |
<helm-release-name>-scp-oauth-nrfproxy | 1 | 4 | 8 | 8 | 8 | 8 | 70 | 1 |
<helm-release-name>-scp-worker(profile 1) | 2 | 8 | 4 | 4 | 8 | 8 | 70 | 1 |
<helm-release-name>-scp-worker(profile 2) | 2 | 16 | 8 | 8 | 12 | 12 | 70 | 1 |
<helm-release-name>-scp-mediation | 2 | 4 | 8 | 8 | 8 | 8 | 70 | 1 |
<helm-release-name>-scp-mediation test | 0 | 0 | 0 | 0 | 0 | 0 | 70 | 1 |
<helm-release-name>-scp-worker(profile 3) | 2 | 16 | 12 | 12 | 16 | 16 | 70 | 1 |
Note:
<helm-release-name> will be prefixed in each microservice name. For example, if the Helm release name is OCSCP, then the SCPC-Subscription microservice name will be "OCSCP-SCPC-Subscription".
2.1.3.3 ASM Sidecar
SCP leverages the Platform Service Mesh (for example, Aspen Service Mesh) for all internal and external TLS communication. If ASM sidecar injection is enabled during SCP deployment or upgrade, this container is injected into each SCP pod (or selected pods, depending on the option chosen during deployment or upgrade). These containers persist for as long as the pod or deployment exists. For more information about installing ASM, see Configuring SCP to Support Aspen Service Mesh.
Table 2-6 ASM Sidecar
Service Name | vCPU/Pod (Min) | vCPU/Pod (Max) | Memory in Gi/Pod (Min) | Memory in Gi/Pod (Max) | Ephemeral Storage Per Pod: Min in Mi (If Enabled) | Ephemeral Storage Per Pod: Max in Gi (If Enabled) |
---|---|---|---|---|---|---|
Helm test | 2 | 2 | 1 | 1 | 70 | 1 |
Helm Hook | 0 | 0 | 0 | 0 | 70 | 1 |
<helm-release-name>-scpc-subscription | 2 | 2 | 1 | 1 | 70 | 1 |
<helm-release-name>-scpc-notification | 2 | 2 | 1 | 1 | 70 | 1 |
<helm-release-name>-scpc-audit | 2 | 2 | 1 | 1 | 70 | 1 |
<helm-release-name>-scpc-configuration | 2 | 2 | 1 | 1 | 70 | 1 |
scpc-alternate-resolution | 2 | 2 | 1 | 1 | 70 | 1 |
<helm-release-name>-scp-cache | 4 | 4 | 4 | 4 | 70 | 1 |
<helm-release-name>-scp-nrfproxy | 5 | 5 | 5 | 5 | 70 | 1 |
<helm-release-name>-scp-load-manager | 4 | 4 | 4 | 4 | 70 | 1 |
<helm-release-name>-scp-oauth-nrfproxy | 5 | 5 | 5 | 5 | 70 | 1 |
scp-worker (profile 1) | 3 | 3 | 4 | 4 | 70 | 1 |
<helm-release-name>-scp-worker (profile 2) | 5 | 5 | 5 | 5 | 70 | 1 |
<helm-release-name>-scp-mediation | 0 | 0 | 0 | 0 | 70 | 1 |
<helm-release-name>-scp-mediation test | 0 | 0 | 0 | 0 | 70 | 1 |
<helm-release-name>-scp-worker (profile 3) | 8 | 8 | 8 | 8 | 70 | 1 |
Note:
<helm-release-name> will be prefixed in each microservice name. For example, if the Helm release name is OCSCP, then the SCPC-Subscription microservice name will be "OCSCP-SCPC-Subscription".
2.1.3.4 Debug Tool Container
The Debug Tool Container provides third-party troubleshooting tools for debugging runtime issues in a lab environment. If Debug Tool Container injection is enabled during SCP deployment or upgrade, this container is injected into each SCP pod (or selected pods, depending on the option chosen during deployment or upgrade). These containers persist for as long as the pod or deployment exists. For more information about configuring the Debug Tool Container, see Oracle Communications Cloud Native Core, Service Communication Proxy Troubleshooting Guide.
Table 2-7 Debug Tool Container
Service Name | vCPU/Pod (Min) | vCPU/Pod (Max) | Memory in Gi/Pod (Min) | Memory in Gi/Pod (Max) | Ephemeral Storage Per Pod: Min in Mi (If Enabled) | Ephemeral Storage Per Pod: Max in Gi (If Enabled) |
---|---|---|---|---|---|---|
Helm test | 0 | 0 | 0 | 0 | 70 | 1 |
Helm Hook | 0 | 0 | 0 | 0 | 70 | 1 |
<helm-release-name>-scpc-subscription | 1 | 1 | 2 | 2 | 70 | 1 |
<helm-release-name>-scpc-notification | 1 | 1 | 2 | 2 | 70 | 1 |
<helm-release-name>-scpc-audit | 1 | 1 | 2 | 2 | 70 | 1 |
<helm-release-name>-scpc-configuration | 1 | 1 | 2 | 2 | 70 | 1 |
<helm-release-name>-scpc-alternate-resolution | 1 | 1 | 2 | 2 | 70 | 1 |
<helm-release-name>-scp-cache | 1 | 1 | 2 | 2 | 70 | 1 |
<helm-release-name>-scp-nrfproxy | 1 | 1 | 2 | 2 | 70 | 1 |
<helm-release-name>-scp-load-manager | 1 | 1 | 2 | 2 | 70 | 1 |
<helm-release-name>-scp-oauth-nrfproxy | 1 | 1 | 2 | 2 | 70 | 1 |
<helm-release-name>-scp-worker(profile 1) | 1 | 1 | 2 | 2 | 70 | 1 |
<helm-release-name>-scp-worker(profile 2) | 1 | 1 | 2 | 2 | 70 | 1 |
<helm-release-name>-scp-mediation | 1 | 1 | 2 | 2 | 70 | 1 |
<helm-release-name>-scp-mediation test | 1 | 1 | 2 | 2 | 70 | 1 |
<helm-release-name>-scp-worker (profile 3) | 1 | 1 | 2 | 2 | 70 | 1 |
Note:
<helm-release-name> will be prefixed in each microservice name. For example, if the Helm release name is OCSCP, then the SCPC-Subscription microservice name will be "OCSCP-SCPC-Subscription".
2.1.3.5 CNC Console
Oracle Communications Cloud Native Configuration Console (CNC Console) is a Graphical User Interface (GUI) for NFs and Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) common services. For information about CNC Console resources required by SCP, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.
2.1.3.6 cnDBTier Resources
This section describes the cnDBTier resources required to deploy SCP.
Table 2-8 cnDBTier Services Resource Requirements
Service Name | CPU/Pod (Min) | CPU/Pod (Max) | Memory/Pod in GB (Min) | Memory/Pod in GB (Max) | PVC Size in GB (PVC1) | PVC Size in GB (PVC2) | Ephemeral Storage Min (MB) | Ephemeral Storage Max (MB) |
---|---|---|---|---|---|---|---|---|
MGMT (ndbmgmd) | 2 | 2 | 4 | 5 | 14 | NA | 90 | 100 |
DB (ndbmtd) | 3 | 3 | 8 | 8 | 12 | 27 | 90 | 100 |
SQL - Replication (ndbmysqld) | 4 | 4 | 10 | 10 | 25 | NA | 90 | 100 |
SQL - Access (ndbappmysqld) | 4 | 4 | 8 | 8 | 20 | NA | 90 | 100 |
Monitor Service (db-monitor-svc) | 0.2 | 0.2 | 0.5 | 0.5 | 0 | NA | 90 | 100 |
db-connectivity-service | 0 | 0 | 0 | 0 | 0 | NA | 0 | 0 |
Replication Service(db-replication-svc) | 2 | 2 | 12 | 12 | 11 | 0.01 | 90 | 1000 |
Backup Manager Service (db-backup-manager-svc) | 0.1 | 0.1 | 0.128 | 0.128 | 0 | NA | 90 | 100 |
cnDBTier Sidecars
Table 2-9 Sidecars per cnDBTier Service
Service Name | CPU/Pod (Min) | CPU/Pod (Max) | Memory/Pod in GB (Min) | Memory/Pod in GB (Max) | PVC Size in GB (PVC1) | PVC Size in GB (PVC2) | Ephemeral Storage Min (MB) | Ephemeral Storage Max (MB) |
---|---|---|---|---|---|---|---|---|
MGMT (ndbmgmd) | 0 | 0 | 0 | 0 | NA | NA | 0 | 0 |
DB (ndbmtd) | 1 | 1 | 2 | 2 | NA | NA | 90 | 2000 |
SQL - Replication (ndbmysqld) | 0.1 | 0.1 | 0.256 | 0.256 | NA | NA | 90 | 100 |
SQL - Access (ndbappmysqld) | 0.1 | 0.1 | 0.256 | 0.256 | NA | NA | 90 | 100 |
Monitor Service (db-monitor-svc) | 0 | 0 | 0 | 0 | NA | NA | 0 | 0 |
db-connectivity-service | NA | NA | NA | NA | NA | NA | NA | NA |
Replication Service(db-replication-svc) | 0.2 | 0.2 | 0.5 | 0.5 | NA | NA | 90 | 100 |
Backup Manager Service (db-backup-manager-svc) | 0 | 0 | 0 | 0 | NA | NA | 0 | 0 |
2.2 Installation Sequence
This section describes preinstallation, installation, and postinstallation tasks for SCP.
You must perform these tasks after completing Prerequisites and in the same sequence as outlined in the following table.
Table 2-11 SCP Installation Sequence
Installation Sequence | Applicable for CNE Deployment | Applicable for OCI Deployment |
---|---|---|
Preinstallation Tasks | Yes | Yes |
Installation Tasks | Yes | Yes |
Postinstallation Tasks | Yes | Yes |
2.2.1 Preinstallation Tasks
To install SCP, perform the tasks described in this section.
2.2.1.1 Downloading the SCP Package
To download the SCP package from My Oracle Support (MOS), perform the following procedure:
- Log in to My Oracle Support (MOS) using your login credentials.
- Click the Patches & Updates tab to locate the patch.
- In the Patch Search console, click Product or Family (Advanced).
- In the Product field, enter Oracle Communications Cloud Native Core - 5G.
- From the Release drop-down list, select Oracle Communications Cloud Native Core Service Communication Proxy <release_number>, where <release_number> indicates the required release number of SCP.
- Click Search. The Patch Advanced Search Results list appears.
- From the Patch Name column, select the required patch number. The Patch Details window appears.
- Click Download. The File Download window appears.
- Click the <p********>_<release_number>_Tekelec.zip file to download the release package. Where, <p********> is the MOS patch number and <release_number> is the release number of SCP.
2.2.1.2 Pushing the Images to Customer Docker Registry
SCP Images
SCP deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.
The following table lists the Docker images of SCP:
Table 2-12 Images for SCP
Microservices | Image | Tag |
---|---|---|
<helm-release-name>-SCP-Worker | ocscp-worker | 24.3.0 |
<helm-release-name>-SCPC-Configuration | ocscp-configuration | 24.3.0 |
<helm-release-name>-SCPC-Notification | ocscp-notification | 24.3.0 |
<helm-release-name>-SCPC-Subscription | ocscp-subscription | 24.3.0 |
<helm-release-name>-SCPC-Audit | ocscp-audit | 24.3.0 |
<helm-release-name>-SCPC-Alternate-Resolution | ocscp-alternate-resolution | 24.3.0 |
<helm-release-name>-SCP-Cache | ocscp-cache | 24.3.0 |
<helm-release-name>-SCP-nrfproxy | ocscp-nrfproxy | 24.3.0 |
<helm-release-name>-SCP-nrfProxy-oauth | ocscp-nrfproxy-oauth | 24.3.0 |
<helm-release-name>-SCP-Mediation | ocmed-nfmediation | 24.3.0 |
<helm-release-name>-SCP-loadManager | ocscp-load-manager | 24.3.0 |
Note:
<helm-release-name> will be prefixed in each microservice name. For example, if the Helm release name is OCSCP, then the SCPC-Subscription microservice name will be "OCSCP-SCPC-Subscription".
To push the images to the registry:
- Navigate to the location where you want to install SCP, and then unzip the SCP release package (<p********>_<release_number>_Tekelec.zip) to retrieve the CSAR package: <ReleaseName>_csar_<Releasenumber>.zip.
Where,
- <ReleaseName> is a name that is used to track this installation instance.
- <Releasenumber> is the release number.
For example, ocscp_csar_24_3_0_0_0.zip.
- Unzip the SCP package to retrieve the OCSCP image tar files:
unzip <ReleaseName>_csar_<Releasenumber>.zip
For example:
unzip ocscp_csar_24_3_0_0_0.zip
The zip file consists of the following:
|── Definitions
│   ├── ocscp_cne_compatibility.yaml
│   └── ocscp.yaml
├── Files
│   ├── ChangeLog.txt
│   ├── Helm
│   │   ├── ocscp-24.3.0.tgz
│   │   └── ocscp-network-policy-24.3.0.tgz
│   ├── Licenses
│   ├── nf-test-24.3.0.tar
│   ├── ocdebug-tools-24.3.0.tar
│   ├── ocmed-nfmediation-24.3.0.tar
│   ├── ocscp-alternate-resolution-24.3.0.tar
│   ├── ocscp-audit-24.3.0.tar
│   ├── ocscp-cache-24.3.0.tar
│   ├── ocscp-configuration-24.3.0.tar
│   ├── ocscp-load-manager-24.3.0.tar
│   ├── ocscp-notification-24.3.0.tar
│   ├── ocscp-nrfproxy-24.3.0.tar
│   ├── ocscp-subscription-24.3.0.tar
│   ├── ocscp-nrfProxy-oauth-24.3.0.tar
│   ├── ocscp-worker-24.3.0.tar
│   ├── Oracle.cert
│   └── Tests
├── ocscp.mf
├── Scripts
│   ├── ocscp_alerting_rules_promha.yaml
│   ├── ocscp_alertrules.yaml
│   ├── ocscp_configuration_openapi_24.3.0.json
│   ├── ocscp_custom_values_24.3.0.yaml
│   ├── ocscp_dbtier_24.3.0_custom_values_24.3.0.yaml
│   ├── ocscp_metric_dashboard_24.3.0.json
│   ├── ocscp_metric_dashboard_promha_24.3.0.json
│   ├── ocscp_mib_24.3.0.mib
│   ├── ocscp_mib_tc_24.3.0.mib
│   ├── ocscp_network_policies_values_24.3.0.yaml
│   ├── ocscp_servicemesh_config_values_24.3.0.yaml
│   └── toplevel.mib
├── Scripts
│   ├── oci
│   │   ├── ocscp_oci_alertrules_24.3.0.zip
│   │   └── ocscp_oci_metric_dashboard_24.3.0.zip
└── TOSCA-Metadata
    └── TOSCA.meta
- Open the Files folder and run one of the following commands to load ocscp-images-24.3.0.tar:
:podman load --input /IMAGE_PATH/ocscp-images-<release_number>.tar
docker load --input /IMAGE_PATH/ocscp-images-<release_number>.tar
Example:
docker load --input /IMAGE_PATH/ocscp-images-24.3.0.tar
- Run one of the following commands to verify that the images are loaded:
podman images
docker images
Sample Output:
docker.io/ocscp/ocscp-cache                  24.3.0   98fc90defb56   2 hours ago   725MB
docker.io/ocscp/ocscp-nrfproxy-oauth         24.3.0   0d92bfbf7c14   2 hours ago   720MB
docker.io/ocscp/ocscp-configuration          24.3.0   f23cddb3ec83   2 hours ago   725MB
docker.io/ocscp/ocscp-worker                 24.3.0   16c8f423c3b9   2 hours ago   877MB
docker.io/ocscp/ocscp-load-manager           24.3.0   dab875c4179a   2 hours ago   724MB
docker.io/ocscp/ocscp-nrfproxy               24.3.0   85029929a670   2 hours ago   690MB
docker.io/ocscp/ocscp-alternate-resolution   24.3.0   2c38646f8bd7   2 hours ago   695MB
docker.io/ocscp/ocscp-audit                  24.3.0   039e25297115   2 hours ago   694MB
docker.io/ocscp/ocscp-notification           24.3.0   a21e6bed6177   2 hours ago   710MB
docker.io/ocscp/ocmed-nfmediation            24.3.0   772e01a41584   2 hours ago   710MB
- Verify the list of images shown in the output with the list of images shown in Table 2-12. If the list does not match, reload the image tar file.
- Run one of the following commands to tag the images to the registry:
podman tag <image-name>:<image-tag> <podman-repo>/<image-name>:<image-tag>
docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
Where,
- <image-name> is the image name.
- <image-tag> is the image release number.
- <docker-repo> is the Docker registry address, including the port number if the registry has a port attached. This is a repository to store the images.
- <podman-repo> is the Podman registry address, including the port number if the registry has a port attached. This is a repository to store the images.
- Run one of the following commands to push the image to the registry:
podman push <podman-repo>/<image-name>:<image-tag>
docker push <docker-repo>/<image-name>:<image-tag>
Note:
It is recommended to configure the Docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the docker push command may fail.
2.2.1.3 Pushing the SCP Images to OCI Docker Registry
SCP Images
SCP deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.
The following table lists the Docker images of SCP:
Table 2-13 Images for SCP
Microservices | Image | Tag |
---|---|---|
<helm-release-name>-SCP-Worker | ocscp-worker | 24.3.0 |
<helm-release-name>-SCPC-Configuration | ocscp-configuration | 24.3.0 |
<helm-release-name>-SCPC-Notification | ocscp-notification | 24.3.0 |
<helm-release-name>-SCPC-Subscription | ocscp-subscription | 24.3.0 |
<helm-release-name>-SCPC-Audit | ocscp-audit | 24.3.0 |
<helm-release-name>-SCPC-Alternate-Resolution | ocscp-alternate-resolution | 24.3.0 |
<helm-release-name>-SCP-Cache | ocscp-cache | 24.3.0 |
<helm-release-name>-SCP-nrfproxy | ocscp-nrfproxy | 24.3.0 |
<helm-release-name>-SCP-nrfProxy-oauth | ocscp-nrfproxy-oauth | 24.3.0 |
<helm-release-name>-SCP-Mediation | ocmed-nfmediation | 24.3.0 |
<helm-release-name>-SCP-loadManager | ocscp-load-manager | 24.3.0 |
Note:
<helm-release-name> will be prefixed in each microservice name. For example, if the Helm release name is OCSCP, then the SCPC-Subscription microservice name will be "OCSCP-SCPC-Subscription".
To push the images to the registry:
- Navigate to the location where you want to install SCP, and then unzip the SCP release package (<p********>_<release_number>_Tekelec.zip) to retrieve the CSAR package: <ReleaseName>_csar_<Releasenumber>.zip.
Where,
- <ReleaseName> is a name that is used to track this installation instance.
- <Releasenumber> is the release number.
For example, ocscp_csar_24_3_0_0_0.zip.
- Unzip the SCP package to retrieve the OCSCP image tar files:
unzip <ReleaseName>_csar_<Releasenumber>.zip
For example:
unzip ocscp_csar_24_3_0_0_0.zip
The zip file consists of the following:
|── Definitions
│   ├── ocscp_cne_compatibility.yaml
│   └── ocscp.yaml
├── Files
│   ├── ChangeLog.txt
│   ├── Helm
│   │   ├── ocscp-24.3.0.tgz
│   │   └── ocscp-network-policy-24.3.0.tgz
│   ├── Licenses
│   ├── nf-test-24.3.0.tar
│   ├── ocdebug-tools-24.3.0.tar
│   ├── ocmed-nfmediation-24.3.0.tar
│   ├── ocscp-alternate-resolution-24.3.0.tar
│   ├── ocscp-audit-24.3.0.tar
│   ├── ocscp-cache-24.3.0.tar
│   ├── ocscp-configuration-24.3.0.tar
│   ├── ocscp-load-manager-24.3.0.tar
│   ├── ocscp-notification-24.3.0.tar
│   ├── ocscp-nrfproxy-24.3.0.tar
│   ├── ocscp-subscription-24.3.0.tar
│   ├── ocscp-nrfProxy-oauth-24.3.0.tar
│   ├── ocscp-worker-24.3.0.tar
│   ├── Oracle.cert
│   └── Tests
├── ocscp.mf
├── Scripts
│   ├── ocscp_alerting_rules_promha.yaml
│   ├── ocscp_alertrules.yaml
│   ├── ocscp_configuration_openapi_24.3.0.json
│   ├── ocscp_custom_values_24.3.0.yaml
│   ├── ocscp_dbtier_24.3.0_custom_values_24.3.0.yaml
│   ├── ocscp_metric_dashboard_24.3.0.json
│   ├── ocscp_metric_dashboard_promha_24.3.0.json
│   ├── ocscp_mib_24.3.0.mib
│   ├── ocscp_mib_tc_24.3.0.mib
│   ├── ocscp_network_policies_values_24.3.0.yaml
│   ├── ocscp_servicemesh_config_values_24.3.0.yaml
│   └── toplevel.mib
├── Scripts
│   ├── oci
│   │   ├── ocscp_oci_alertrules_24.3.0.zip
│   │   └── ocscp_oci_metric_dashboard_24.3.0.zip
└── TOSCA-Metadata
    └── TOSCA.meta
- Open the Files folder and run one of the following commands to load ocscp-images-24.3.0.tar:
:podman load --input /IMAGE_PATH/ocscp-images-<release_number>.tar
docker load --input /IMAGE_PATH/ocscp-images-<release_number>.tar
Example:
docker load --input /IMAGE_PATH/ocscp-images-24.3.0.tar
- Run one of the following commands to verify that the images are loaded:
podman images
docker images
Sample Output:
docker.io/ocscp/ocscp-cache                  24.3.0   98fc90defb56   2 hours ago   725MB
docker.io/ocscp/ocscp-nrfproxy-oauth         24.3.0   0d92bfbf7c14   2 hours ago   720MB
docker.io/ocscp/ocscp-configuration          24.3.0   f23cddb3ec83   2 hours ago   725MB
docker.io/ocscp/ocscp-worker                 24.3.0   16c8f423c3b9   2 hours ago   877MB
docker.io/ocscp/ocscp-load-manager           24.3.0   dab875c4179a   2 hours ago   724MB
docker.io/ocscp/ocscp-nrfproxy               24.3.0   85029929a670   2 hours ago   690MB
docker.io/ocscp/ocscp-alternate-resolution   24.3.0   2c38646f8bd7   2 hours ago   695MB
docker.io/ocscp/ocscp-audit                  24.3.0   039e25297115   2 hours ago   694MB
docker.io/ocscp/ocscp-notification           24.3.0   a21e6bed6177   2 hours ago   710MB
docker.io/ocscp/ocmed-nfmediation            24.3.0   772e01a41584   2 hours ago   710MB
- Verify the list of images shown in the output with the list of images shown in Table 2-13. If the list does not match, reload the image tar file.
- Run one of the following commands to log in to the OCI registry:
podman login -u <REGISTRY_USERNAME> -p <REGISTRY_PASSWORD> <REGISTRY_NAME>
docker login -u <REGISTRY_USERNAME> -p <REGISTRY_PASSWORD> <REGISTRY_NAME>
Where,
- <REGISTRY_NAME> is <Region_Key>.ocir.io. In OCI, each region is associated with a key. For more information, see Regions and Availability Domains.
- <REGISTRY_USERNAME> is <Object Storage Namespace>/<identity_domain>/email_id.
- <REGISTRY_PASSWORD> is the Auth Token generated by the user.
- <Object Storage Namespace> can be obtained from the OCI Console by navigating to Governance & Administration > Account Management > Tenancy Details > Object Storage Namespace.
- <Identity Domain> is the domain of the user.
For more information about OCIR configuration and creating an auth token, see Oracle Communications Cloud Native Core, OCI Deployment Guide.
- Run one of the following commands to tag the images to the registry:
podman tag <image-name>:<image-tag> <podman-repo>/<image-name>:<image-tag>
docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
Where,
- <image-name> is the image name.
- <image-tag> is the image release number.
- <docker-repo> is the Docker registry address, including the port number if the registry has a port attached. This is a repository to store the images.
- <podman-repo> is the Podman registry address, including the port number if the registry has a port attached. This is a repository to store the images.
- Run one of the following commands to push the image:
podman push <oci-repo>/<image-name>:<image-tag>
docker push <oci-repo>/<image-name>:<image-tag>
Where, <oci-repo> is the OCI registry path.
- Make all the image repositories public by performing the following steps:
Note:
All the image repositories must be public.
- Log in to the OCI Console using your login credentials.
- From the left navigation pane, click Developer Services.
- On the preview pane, click Container Registry.
- From the Compartment drop-down list, select networkfunctions5G (root).
- From the Repositories and images drop-down list, select the required image and click Change to Public. The image details are displayed under the Repository information tab, and the image changes to public. For example, 24.3.0db/occne/cndbtier-mysqlndb-client (Private) changes to 24.3.0db/occne/cndbtier-mysqlndb-client (Public).
- Repeat the previous substep to make all image repositories public.
2.2.1.4 Verifying and Creating Namespace
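A minimal sketch of this procedure, assuming the illustrative namespace name ocscp:
kubectl get namespaces          # verify whether the target namespace already exists
kubectl create namespace ocscp  # create the namespace if it does not exist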
Note:
This is a mandatory procedure. Run it before proceeding further with the installation. The namespace created or verified in this procedure is an input for the subsequent procedures.
2.2.1.5 Creating Service Account, Role, and Rolebinding
This section is optional. It describes how to manually create a service account, role, and rolebinding. It is required only when the customer needs to create a role, rolebinding, and service account manually before installing SCP.
Note:
The secrets should exist in the same namespace where SCP is getting deployed. This helps to bind the Kubernetes role with the given service account.
- Run the following command to create an SCP resource file:
vi <ocscp-resource-file>
Example:
vi ocscp-resource-template.yaml
- Update the ocscp-resource-template.yaml file with release-specific information. A sample template to update the ocscp-resource-template.yaml file is as follows:
rules:
- apiGroups: [""]
  resources: # resources under api group to be tested. Added for helm test. Helm test dependencies are services, configmaps, pods, pvc, serviceaccounts
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  - persistentvolumeclaims
  - serviceaccounts
  verbs: ["get", "list", "watch", "delete"] # permissions of resources under api group; delete added to perform rolling restart of cache pods.
- apiGroups:
  - "" # "" indicates the core API group
  resources: # Added for helm test. Helm test dependency
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  - persistentvolumeclaims
  - serviceaccounts
  verbs: ["get", "list", "watch", "delete"] # permissions of resources under api group; delete added to perform rolling restart of cache pods.
# API groups added due to helm test dependency: apps, autoscaling, rbac.authorization, and monitoring.coreos
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - autoscaling
  resources: # Added for helm test. Helm test dependency
  - horizontalpodautoscalers
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - rbac.authorization.k8s.io
  resources: # Added for helm test. Helm test dependency
  - roles
  - rolebindings
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - monitoring.coreos.com
  resources: # Added for helm test. Helm test dependency
  - prometheusrules
  verbs:
  - get
  - watch
  - list
- Run the following command to create the service account, role, and rolebinding:
kubectl -n <ocscp-namespace> create -f ocscp-resource-template.yaml
Example:
kubectl -n ocscp create -f ocscp-resource-template.yaml
- Update the serviceAccountName parameter in the ocscp_values_24.3.0.yaml file with the value specified in the name field under kind: ServiceAccount, as sketched below. For more information about the serviceAccountName parameter, see Global Parameters.
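A minimal sketch of this update, assuming the illustrative service account name ocscp-serviceaccount; the exact position of the parameter within the values file may differ in your release:
# excerpt of ocscp_values_24.3.0.yaml
global:
  serviceAccountName: ocscp-serviceaccount   # must match metadata.name of the ServiceAccount created above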
2.2.1.6 Configuring Database for SCP
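The detailed steps are release-specific, but as an illustration, creating an SCP database and application user on the cnDBTier SQL node typically looks like the following sketch; the names ocscpdb and scpusr are placeholders, and the exact grants must follow your security policy:
-- run in a MySQL client session on the cnDBTier SQL node
CREATE DATABASE IF NOT EXISTS ocscpdb;                 -- illustrative database name
CREATE USER 'scpusr'@'%' IDENTIFIED BY '<password>';   -- illustrative application user
GRANT ALL PRIVILEGES ON ocscpdb.* TO 'scpusr'@'%';     -- scope grants per your security policy
FLUSH PRIVILEGES;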
Note:
While performing a fresh installation, if SCP is already deployed, purge the deployment and remove the database and users that were used for the previous deployment. For the uninstallation procedure, see Uninstalling SCP.
2.2.1.7 Configuring Kubernetes Secret for Accessing Database
This section explains how to configure Kubernetes secrets for accessing SCP database.
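A minimal sketch of creating such a secret, assuming the illustrative secret name ocscp-db-secret and namespace ocscp; the key names must match the ones the SCP custom values file expects:
kubectl create secret generic ocscp-db-secret -n ocscp \
  --from-literal=dbUserName=scpusr \
  --from-literal=dbPassword='<password>'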
Note:
Do not use the same credentials in different Kubernetes secrets. The passwords stored in the secrets must follow the password policy requirements recommended in "Changing cnDBTier Passwords" in Oracle Communications Cloud Native Core Security Guide.
2.2.1.8 Configuring SSL or TLS Certificates to Enable HTTPS
The Secure Sockets Layer (SSL) and Transport Layer Security (TLS) certificates must be configured in SCP to enable Hypertext Transfer Protocol Secure (HTTPS). These certificates must be stored in a Kubernetes secret, and the secret name must be provided in the sbiProxySslConfigurations section of the custom-values.yaml file.
Configure the certificates during:
- fresh installation of SCP.
- an SCP upgrade.
The Kubernetes secret must contain the following:
- ECDSA private key and CA-signed certificate of SCP, if initialAlgorithm is ES256
- RSA private key and CA-signed certificate of SCP, if initialAlgorithm is RS256
- TrustStore password file
- KeyStore password file
- CA Root file
Note:
- The process to create the private keys, certificates, and passwords is at the operators' discretion.
- The passwords for TrustStore and KeyStore must be stored in the respective password files.
- Perform this procedure before enabling HTTPS in SCP.
You can create Kubernetes secret for enabling HTTPS in SCP using one of the following methods:
- Managing Kubernetes secret manually
- Managing Kubernetes secret through OCCM
Managing Kubernetes Secret Manually
- Updating, adding, or deleting the certificate terminates all the existing connections gracefully and reestablishes new connections for new requests.
- When the certificates expire, no new connections are established for new requests; however, the existing connections remain active. After the certificates are renewed as described in Step 3, all the existing connections are gracefully terminated, and new connections are established with the renewed certificates.
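A minimal sketch of creating the HTTPS secret manually from the files listed above, with illustrative file and secret names; the key names your deployment expects are defined by the sbiProxySslConfigurations section of the custom values file:
kubectl create secret generic ocscp-https-secret -n ocscp \
  --from-file=rsa_private_key.pem \
  --from-file=ocscp.cer \
  --from-file=caroot.cer \
  --from-file=key.txt \
  --from-file=trust.txt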
Managing Kubernetes Secret Through OCCM
To create the Kubernetes secret using OCCM, see "Managing Certificates" in Oracle Communications Cloud Native Core, Certificate Management User Guide, and then patch the Kubernetes secret created by OCCM to add the keyStore and trustStore password files by running the following commands:
- To patch the Kubernetes secret with the keyStore password file:
TLS_CRT=$(base64 < "key.txt" | tr -d '\n')
kubectl patch secret server-primary-ocscp-secret-occm -n scpsvc -p "{\"data\":{\"key.txt\":\"${TLS_CRT}\"}}"
Where, key.txt is the KeyStore password file that contains the KeyStore password.
- To patch the Kubernetes secret with the trustStore password file:
TLS_CRT=$(base64 < "trust.txt" | tr -d '\n')
kubectl patch secret server-primary-ocscp-secret-occm -n scpsvc -p "{\"data\":{\"trust.txt\":\"${TLS_CRT}\"}}"
Where, trust.txt is the TrustStore password file that contains the TrustStore password.
Note:
To monitor the lifecycle management of the certificates through OCCM, do not patch the Kubernetes secret manually to update the TLS certificates or keys. It must be done through the OCCM GUI.
2.2.1.9 Configuring SCP to Support Aspen Service Mesh
SCP leverages the Platform Service Mesh (for example, Aspen Service Mesh (ASM)) for all internal and external TLS communication. The service mesh integration provides inter-NF communication, allows the API gateway to co-work with the service mesh, and works by deploying a special sidecar proxy in each pod to intercept all network communications between microservices.
Supported ASM versions: 1.14.6 and 1.11.8
For ASM installation and configuration, see the official Aspen Service Mesh website for details.
Aspen Service Mesh (ASM) configurations are categorized as follows:
- Control Plane: It involves adding labels or annotations to inject sidecar. The control plane configurations are part of the NF Helm chart.
- Data Plane: It helps in traffic management, such as handling NF call flows by adding Service Entries (SE), Destination Rules (DR), Envoy Filters (EF), and other resource changes such as apiVersion change between different versions. This configuration is done manually by considering each NF requirement and ASM deployment.
Data Plane Configuration
Data Plane configuration consists of following Custom Resource Definitions (CRDs):
- Service Entry (SE)
- Destination Rule (DR)
- Envoy Filter (EF)
Note:
Use Helm charts to add or remove CRDs that you may require due to ASM upgrades to configure features across different releases.
The data plane configuration is applicable in the following scenarios:
- NF to NF Communication: During NF to NF communication, where the sidecar is injected into both NFs, SE and DR must communicate with the corresponding SE and DR of the other NF. Otherwise, the sidecar rejects the communication. All egress communications of NFs must have a configured entry for SE and DR.
Note:
Configure the core DNS with the producer NF endpoint to enable the sidecar access for establishing communication between clusters.
- Kube-api-server: For Kube-api-server, there are a few NFs that require access to the Kubernetes API server. The ASM proxy (mTLS enabled) may block this. As per F5 recommendation, the NF must add an SE for the Kubernetes API server for its own namespace.
- Envoy Filters: Sidecars rewrite the header with their own default values. Therefore, the headers from back-end services are lost. Envoy Filters are required to pass the headers from back-end services so that they can be used as-is.
ASM Configuration File
ocscp_servicemesh_config_values_24.3.0.yaml is available in the Scripts folder of ocscp_csar_24_3_0_0_0.zip. For downloading the file, see Customizing SCP. To view ASM EnvoyFilter configuration enhancements, see ASM Configuration.
Note:
To connect to vDBTier, create an SE and DR for the MySQL connectivity service if the database is in a different cluster. Otherwise, the sidecar rejects the request as vDBTier does not support sidecars.
2.2.1.9.1 Predeployment Configurations
Note:
- For information about ASM parameters, see ASM Resource. You can log in to ASM using ASPEN credentials.
- On the ASM setup, create service entries for respective namespace.
- Run the following command to create a namespace for SCP deployment if not already created:
kubectl create ns <scp-namespace-name>
- Run the following command to configure access to the Kubernetes API Service and create a service entry in pod networking so that pods can access the Kubernetes api-server:
kubectl apply -f kube-api-se.yaml
Sample kube-api-se.yaml file is as follows:
# service_entry_kubernetes.yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: kube-api-server
  namespace: <scp-namespace>
spec:
  hosts:
  - kubernetes.default.svc.<domain>
  exportTo:
  - "."
  addresses:
  - <10.96.0.1> # cluster IP of kubernetes api server
  location: MESH_INTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: NONE
- Run the following command to set up Network Repository Function (NRF) connectivity by creating a ServiceEntry and DestinationRule to access an external or public NRF service that is not part of the Service Mesh Registry:
kubectl apply -f nrf-se-dr.yaml
Sample nrf-se-dr.yaml file is as follows:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nrf-dr
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  host: ocnrf.3gpp.oracle.com
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/cert-chain.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: nrf-se
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  hosts:
  - "ocnrf.3gpp.oracle.com"
  ports:
  - number: <port number of host in hosts section>
    name: http2
    protocol: HTTP2
  location: MESH_EXTERNAL
  resolution: NONE
- Run the following command to enable communication between internal Network Functions (NFs):
Note:
If Consumer and Producer NFs are not part of the Service Mesh Registry, create Destination Rules and Service Entries in the SCP namespace for all known call flows to enable inter-NF communication.
kubectl apply -f known-nf-se-dr.yaml
Sample known-nf-se-dr.yaml file is as follows:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: udm1-dr
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  host: s24e65f98-bay190-rack38-udm-11.oracle-ocudm.cnc.us-east.oracle.com
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/cert-chain.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: udm1-se
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  hosts:
  - "s24e65f98-bay190-rack38-udm-11.oracle-ocudm.cnc.us-east.oracle.com"
  ports:
  - number: 16016
    name: http2
    protocol: HTTP2
  location: MESH_EXTERNAL
  resolution: NONE
Note:
Create DestinationRule and ServiceEntry ASM resources for the following scenarios:
- When an NF is registered with callback URIs or notification URIs that are not part of the Service Mesh Registry
- When a callbackReference is used in a known call flow and contains a URI that is not part of the Service Mesh Registry
kubectl apply -f callback-uri-se-dr.yaml
Sample callback-uri-se-dr.yaml file is as follows:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: udm-callback-dr
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  host: udm-notifications-processor-03.oracle-ocudm.cnc.us-east.oracle.com
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/cert-chain.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: udm-callback-se
  namespace: <scp-namespace>
spec:
  exportTo:
  - .
  hosts:
  - "udm-notifications-processor-03.oracle-ocudm.cnc.us-east.oracle.com"
  ports:
  - number: 16016
    name: http2
    protocol: HTTP2
  location: MESH_EXTERNAL
  resolution: NONE
- To equally distribute ingress connections among the SCP worker threads, run the following command to create a new YAML file with an EnvoyFilter on the ASM sidecar. You must apply the EnvoyFilter to process inbound connections on the ASM sidecar when SCP is deployed with ASM.
kubectl apply -f envoy_inbound.yaml
Sample envoy_inbound.yaml file is as follows:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: inbound-envoyfilter
  namespace: <scp-namespace>
spec:
  workloadSelector:
    labels:
      app: ocscp-scp-worker
  configPatches:
  - applyTo: LISTENER
    match:
      context: SIDECAR_INBOUND
      listener:
        portNumber: 15090
    patch:
      operation: MERGE
      value:
        connection_balance_config:
          exact_balance: {}
Note:
- The ASM sidecar portNumber can be configured depending on the deployment. For example, 15090.
- Do not configure any virtual service that applies connection or transaction timeouts between various SCP services.
2.2.1.9.2 Deploying SCP with ASM
Deployment Configuration
- Run the following command to create a namespace label for auto sidecar injection and to automatically add the sidecars in all pods spawned in the SCP namespace:
kubectl label ns <scp-namespace> istio-injection=enabled
- Create a Service Account for SCP and a role with appropriate security policies for sidecar proxies to work by referring to the sa-role-rolebinding.yaml file mentioned in the next step.
- Map the role and service accounts by creating a role binding as specified in the sample sa-role-rolebinding.yaml file:
kubectl apply -f sa-role-rolebinding.yaml
Sample sa-role-rolebinding.yaml file is as follows:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ template "noncluster.role.name" . }}
  namespace: {{ .Release.Namespace }}
  labels: {{- include "labels.allResources" . }}
  annotations: {{- include "annotations.allResources" . }}
rules:
- apiGroups: [""]
  resources:
  - pods
  - services
  - configmaps
  verbs: ["get", "list", "watch"]
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - secrets
  - endpoints
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ template "noncluster.rolebinding.name" . }}
  namespace: {{ .Release.Namespace }}
  labels: {{- include "labels.allResources" . }}
  annotations: {{- include "annotations.allResources" . }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{ template "noncluster.role.name" . }}
subjects:
- kind: ServiceAccount
  name: {{ template "noncluster.serviceaccount.name" . }}
  namespace: {{ .Release.Namespace }}
---
apiVersion: v1
kind: ServiceAccount
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
metadata:
  name: {{ template "noncluster.serviceaccount.name" . }}
  namespace: {{ .Release.Namespace }}
  labels: {{- include "labels.allResources" . }}
  annotations: {{- include "annotations.allResources" . }}
- Update ocscp_custom_values_24.3.0.yaml with the following annotations:
Note:
Update other values such as DB details and the service account as created in the previous steps.
global:
  customExtension:
    allResources:
      annotations:
        sidecar.istio.io/inject: "true"
    lbDeployments:
      annotations:
        sidecar.istio.io/inject: "true"
        oracle.com/cnc: "true"
    nonlbDeployments:
      annotations:
        sidecar.istio.io/inject: "true"
        oracle.com/cnc: "true"
  scpServiceAccountName: <"ocscp-release-1-10-2-scp-serviceaccount">
  database:
    dbHost: <"scp-db-connectivity-service"> #DB Service FQDN
scpc-configuration:
  service:
    type: ClusterIP
scp-worker:
  tracingenable: false
  service:
    type: ClusterIP
Note:
- The Sidecar inject = "false" annotation on all resources prevents sidecar injection on pods created by Helm jobs or hooks.
- Deployment overrides re-enable auto sidecar injection on all deployments.
- The SCP-Worker override disables automatic sidecar injection for the SCP-Worker microservice because it is done manually in later stages. This override is only required for ASM release 1.4 or 1.5. If integrating with ASM release 1.6 or later, it must be removed.
oracle.com/cnc
annotation is required for integration with OSO services. - Jaeger tracing must be disabled because it may interfere with SM end-to-end traces.
- The
- To set sidecar resources for each microservice in the ocscp_custom_values_24.3.0.yaml file under deployment.customExtension.annotations, configure the following ASM annotations with the resource values for the services (see the sketch after this list). SCP uses these annotations to assign the resources of the sidecar containers.
- sidecar.istio.io/proxyMemory: Indicates the memory requested for the sidecar.
- sidecar.istio.io/proxyMemoryLimit: Indicates the maximum memory limit for the sidecar.
- sidecar.istio.io/proxyCPU: Indicates the CPU requested for the sidecar.
- sidecar.istio.io/proxyCPULimit: Indicates the CPU limit for the sidecar.
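A minimal sketch of these annotations for one microservice, with illustrative resource values:
deployment:
  customExtension:
    annotations:
      sidecar.istio.io/proxyCPU: "2"            # CPU requested for the sidecar (illustrative)
      sidecar.istio.io/proxyCPULimit: "2"       # CPU limit for the sidecar (illustrative)
      sidecar.istio.io/proxyMemory: "1Gi"       # memory requested for the sidecar (illustrative)
      sidecar.istio.io/proxyMemoryLimit: "1Gi"  # memory limit for the sidecar (illustrative)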
- Define the concurrency setting for the sidecar container. The sidecar container concurrency value must be at least equal to the number of maximum vCPUs allocated to the sidecar container, as follows:
proxy.istio.io/config: |-
  concurrency: 6
2.2.1.9.3 Deployment Configurations
ASM Configuration to Allow XFCC Header
Envoy Filter should be added to allow the XFCC header on ASM sidecar.
Sample file:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: <name>
  namespace: <namespace>
spec:
  workloadSelector:
    labels:
      app.kubernetes.io/instance: <SCP Deployment name>
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          '@type': type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          forward_client_cert_details: ALWAYS_FORWARD_ONLY
          use_remote_address: true
          xff_num_trusted_hops: 1
Inter-NF Communication
For every new NF participating in new call flows, DestinationRule and ServiceEntry must be created in SCP namespace to enable communication. This can be done in the same way as done earlier for known call flows.
Run the following command to create DestinationRule and ServiceEntry:
kubectl apply -f new-nf-se-dr.yaml
Sample
new-nf-se-dr.yaml file for DestinationRule and
ServiceEntry:apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: <unique DR name for NR>
namespace: <scp-namespace>
spec:
exportTo:
- .
host: <NF-public-FQDN>
trafficPolicy:
tls:
mode: MUTUAL
clientCertificate: /etc/certs/cert-chain.pem
privateKey: /etc/certs/key.pem
caCertificates: /etc/certs/root-cert.pem
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: <unique SE name for NF>
namespace: <scp-namespace>
spec:
exportTo:
- .
hosts:
- <NF-public-FQDN>
ports:
- number: <NF-public-port>
name: http2
protocol: HTTP2
location: MESH_EXTERNAL
resolution: NONE
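As a quick sanity check (a standard kubectl query, not a documented step), you can confirm that both objects were created in the SCP namespace:
kubectl get dr,se -n <scp-namespace>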
Operations Services Overlay Installation
Note:
If OSO is deployed in the same namespace as SCP, ensure that all OSO deployments have the annotation to skip sidecar injection because OSO does not support the ASM sidecar proxy.
CNE Common Services for Logging
Note:
If the CNE common services are deployed in the same namespace as SCP, ensure that all CNE deployments have the annotation to skip sidecar injection because CNE does not support the ASM sidecar proxy.
2.2.1.9.4 Deleting ASM
This section describes the steps to delete ASM.
To delete ASM, run the following command:
helm delete <helm-release-name> -n <namespace>
Where,
- <helm-release-name> is the release name used by the Helm command. This release name must be the same as the release name used for ServiceMesh.
- <namespace> is the deployment namespace used by the Helm command.
For example:
helm delete ocscp-servicemesh-config -n ocscp
To disable ASM, run the following command:
kubectl label --overwrite namespace ocscp istio-injection=disabled
To verify if ASM is disabled, run the following command:
kubectl get se,dr,peerauthentication,envoyfilter,vs -n ocscp
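If ASM has been fully removed, this command is expected to return no matching resources (for example, "No resources found in ocscp namespace"); this expected output is based on standard kubectl behavior rather than a documented check.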
2.2.1.10 Configuring Network Policies for SCP
Note:
Configuring network policies is a recommended step. Based on the security requirements, network policies may or may not be configured.
Note:
- If traffic is blocked or unblocked between the pods even after applying network policies, check whether any existing policy impacts the same pod or set of pods, as it might alter the overall cumulative behavior.
- If the default ports of services such as Prometheus, Database, or Jaeger are changed, or if the Ingress or Egress Gateway names are overridden, update them in the corresponding network policies.
Configuring Network Policies
Following are the various operations that can be performed for network policies:
2.2.1.10.1 Installing Network Policies
Prerequisite
Note:
For a fresh installation, it is recommended to install the network policies before installing SCP. However, if SCP is already installed, you can still install the network policies.
- Open the ocscp-network-policy-custom-values-24.3.0.yaml file provided in the release package zip file. For downloading the file, see Downloading the SCP Package and Pushing the Images to Customer Docker Registry.
- The file is provided with the default network policies. If required, update the ocscp-network-policy-custom-values-24.3.0.yaml file. For more information about the parameters, see the Configuration Parameters for Network Policies table.
  Note:
  - To run ATS, uncomment the following policies in ocscp-network-policy-custom-values-24.3.0.yaml:
    - allow-ingress-traffic-to-notification
    - allow-egress-for-ats
    - allow-ingress-to-ats
  - To connect with CNC Console, update the following parameter in the allow-ingress-from-console network policy in the ocscp-network-policy-custom-values-24.3.0.yaml file:
    kubernetes.io/metadata.name: <namespace in which CNCC is deployed>
  - In the allow-ingress-prometheus policy, the kubernetes.io/metadata.name parameter must contain the namespace where Prometheus is deployed, and the app.kubernetes.io/name parameter value must match the label from the Prometheus pod.
- Run the following command to install the network policies:
  helm install <helm-release-name> <network-policy>/ -n <namespace> -f <custom-value-file>
  For example:
  helm install ocscp-network-policy ocscp-network-policy/ -n scpsvc -f ocscp-network-policy-custom-values-24.3.0.yaml
  Where,
  - helm-release-name: ocscp-network-policy Helm release name.
  - custom-value-file: ocscp-network-policy custom values file.
  - namespace: SCP namespace.
  - network-policy: location where the network-policy package is stored.
Note:
- Connections that were created before installing the network policies and still persist are not impacted by the new network policies. Only new connections are impacted.
- If you are using the ATS suite along with network policies, SCP and ATS must be installed in the same namespace.
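For illustration, the following is a minimal sketch of the kind of ingress policy carried in ocscp-network-policy-custom-values-24.3.0.yaml. The policy name matches the console policy referenced above, but the selector layout is an assumption based on standard Kubernetes NetworkPolicy syntax rather than the literal file contents:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-console
spec:
  podSelector: {}          # applies to all pods in the SCP namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: <namespace in which CNCC is deployed>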
2.2.1.10.2 Upgrading Network Policies
- Modify the ocscp-network-policy-custom-values-24.3.0.yaml file to update, add, or delete the network policies.
- Run the following command to upgrade the network policies:
  helm upgrade <helm-release-name> <network-policy>/ -n <namespace> -f <custom-value-file>
  For example:
  helm upgrade ocscp-network-policy ocscp-network-policy/ -n ocscp -f ocscp-network-policy-custom-values-24.3.0.yaml
  Where,
  - helm-release-name: ocscp-network-policy Helm release name.
  - custom-value-file: ocscp-network-policy custom values file.
  - namespace: SCP namespace.
  - network-policy: location where the network-policy package is stored.
2.2.1.10.3 Verifying Network Policies
Run the following command to verify the network policies:
kubectl get <helm-release-name> -n <namespace>
For example:
kubectl get ocscp-network-policy -n ocscp
Where,
- helm-release-name: ocscp-network-policy Helm release name.
- namespace: SCP namespace.
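As an additional check (standard Kubernetes tooling, not a documented step), you can list the NetworkPolicy resources themselves in the SCP namespace:
kubectl get networkpolicies -n ocscp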
2.2.1.10.4 Uninstalling Network Policies
Run the following command to uninstall the network policies:
helm uninstall <release_name> --namespace <namespace>
For example:
helm uninstall ocscp-network-policy --namespace ocscp
Note:
While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.
2.2.1.10.5 Configuration Parameters for Network Policies
Table 2-14 Supported Kubernetes Resource for Configuring Network Policies
Parameter | Description | Details |
---|---|---|
apiVersion | This is a mandatory parameter. Specifies the Kubernetes API version for access control. Note: This is the supported API version for network policy. This is a read-only parameter. | Data Type: string Default Value: networking.k8s.io/v1 |
kind | This is a mandatory parameter. Represents the REST resource this object represents. Note: This is a read-only parameter. | Data Type: string Default Value: NetworkPolicy |
Table 2-15 Configuration Parameters for Network Policy
Parameter | Description | Details |
---|---|---|
metadata.name | This is a mandatory parameter. Specifies a unique name for the network policy. | {{ .metadata.name }} |
spec.{} | This is a mandatory parameter. This consists of all the information needed to define a particular network policy in the given namespace. Note: SCP supports the spec parameters defined in "Supported Kubernetes Resource for Configuring Network Policies". | Default Value: NA |
For more information about this functionality, see "Network Policies" in Oracle Communications Cloud Native Core, Service Communication Proxy User Guide.
2.2.2 Installation Tasks
This section provides installation procedures to install Oracle Communications Cloud Native Core, Service Communication Proxy (SCP).
Before installing SCP, you must complete the Prerequisites and Preinstallation Tasks for both deployment methods.
2.2.2.1 Installing SCP Package
Note:
For each SCP deployment in the network, use a unique SCP database name during the installation.
- Run the following command to access the extracted package:
cd ocscp-<release_number>
Example:
cd ocscp-24.3.0
- Customize the ocscp_values_24.3.0.yaml file with the required deployment parameters. See the Customizing SCP chapter to customize the file. For more information about predeployment parameter configurations, see Preinstallation Tasks.
  Note:
  If NRF configuration is required, see Configuring Network Repository Function Details.
- (Optional) If you want to install SCP with Aspen Service Mesh (ASM), perform the predeployment tasks as described in Configuring SCP to Support Aspen Service Mesh.
- Open the ocscp_values_24.3.0.yaml file and enable Release 16 with Model C Indirect 5G SBI Communication support by adding - rel16 manually under releaseVersion, and then uncomment the scpProfileInfo.servingScope and scpProfileInfo.nfSetIdList parameters (see the sketch after this step).
  Note:
  - rel16 is the default release version. For more information about Release 16, see 3GPP TS 23.501.
  Sample custom-values.yaml file output:
  global:
    domain: svc.cluster.local
    clusterDomain: cluster.local
    # If an ingress gateway is available, set the ingressGWAvailable flag to true
    # and provide the ingress gateway IP and port in publicSignalingIP and publicSignalingPort respectively.
    # If the ingressGWAvailable flag is true, the service type for scp-worker will be ClusterIP;
    # otherwise, it will be LoadBalancer.
    # The ingressGWAvailable flag cannot be set to true while the publicSignalingIPSpecified flag is false.
    # To assign a load balancer IP, set the loadbalanceripenbled flag to true and
    # provide a value for the loadbalancerip flag;
    # otherwise, a random IP is assigned when loadbalanceripenbled is false
    # and the loadbalancerip flag is not used.
    adminport: 8001
    # enable or disable jaeger tracing
    tracingenable: &scpworkerTracingEnabled true
    enablejaegerbody: &scpworkerJaegerBodyEnabled false
    # Support for Release15 and Release16
    # at least one param should be present
    # values can be rel15 or rel16
    # Default is rel15
    releaseVersion:
    # When running R16, SCP should be deployed with rel16 enabled and rel15 commented,
    # whereas for running R15 features, SCP should be deployed with rel15 enabled and rel16 commented.
    # Both rel15 and rel16 cannot be enabled together.
    #- rel15
    - rel16
  Note:
  Release 15 deployment model is not supported from SCP 23.4.0.
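A hypothetical sketch of the two uncommented profile parameters is shown below; the exact structure and values depend on your ocscp_values_24.3.0.yaml file, so treat the keys and placeholders here as illustrative rather than the literal SCP schema:
scpProfileInfo:
  servingScope:
    - <serving-scope>   # placeholder: scope served by this SCP
  nfSetIdList:
    - <nf-set-id>       # placeholder: NF set identifier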
- Run the following command to install SCP using charts from the Helm repository:
  helm install <release name> -f <custom_values.yaml> --namespace <namespace> <helm-repo>/<chart_name> --version <helm_version>
- In case the charts are extracted:
  helm install <release name> -f <custom_values.yaml> --namespace <namespace> <chartpath>
  Example:
  helm install ocscp -f <custom_values.yaml> ocscp-helm-repo/ocscp --namespace scpsvc --version <helm version>
  Caution:
  Do not exit from the helm install command manually. After running the helm install command, it takes some time to install all the services. During this time, do not press Ctrl+C to exit the helm install command, as doing so can lead to anomalous behavior.
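After the installation completes, you can confirm that the release is deployed and the pods are starting (standard Helm and kubectl checks, not SCP-specific commands):
helm status ocscp -n scpsvc
kubectl get pods -n scpsvc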
2.2.3 Postinstallation Tasks
This section explains the postinstallation tasks for SCP.
2.2.3.2 Performing Helm Test
This section describes how to perform a sanity check of the SCP installation through Helm test. The pods to be checked are selected based on the namespace and label selector configured for the Helm test.
Helm test is a feature that validates the SCP installation and determines whether the NF is ready to accept traffic.
Note:
Helm test can be performed only on Helm 3.
- Configure the Helm test configurations under the global parameters section of the ocscp_custom_values_24.3.0.yaml file as follows:
  nfName: ocscp
  image:
    name: nf_test
    tag: <string>
    pullPolicy: Always
  config:
    logLevel: WARN
    timeout: 180
  resources:
    - horizontalpodautoscalers/v1
    - deployments/v1
    - configmaps/v1
    - serviceaccounts/v1
    - roles/v1
    - services/v1
    - rolebindings/v1
For more information, see Customizing SCP.
- Run the following Helm test command:
helm test <release_name> -n <namespace>
Example:
helm test ocscp -n ocscp
Sample Output:
NAME: ocscp
LAST DEPLOYED: Fri Sep 18 10:08:03 2020
NAMESPACE: ocscp
STATUS: deployed
REVISION: 1
TEST SUITE: ocscp-test
Last Started: Fri Sep 18 10:41:25 2020
Last Completed: Fri Sep 18 10:41:34 2020
Phase: Succeeded
NOTES:
# Copyright 2020 (C), Oracle and/or its affiliates. All rights reserved.
Note:
- After running the Helm test, the test pod moves to the Completed state. To remove the pod, run the following command:
kubectl delete pod <releaseName>-test -n <namespace>
- The Helm test only verifies whether all pods running in the namespace are in the Ready state, such as 1/1 or 2/2 states. It does not check the deployment.
- If the Helm test fails, see Oracle Communications Cloud Native Core, Service Communication Proxy Troubleshooting Guide.
2.2.4 Configuring Network Repository Function Details
NRF details are configured in the values.yaml file. You must update the NRF details in the values.yaml file before deploying SCP.
Note:
You can configure a primary NRF and an optional secondary NRF. The NRFs must have their back-end DB synchronized. An IPv4 or IPv6 address of NRF must be configured if NRF is outside the Kubernetes cluster. If NRF is inside the Kubernetes cluster, you can configure the FQDN. If both an IP address (IPv4 or IPv6) and an FQDN are provided, the IP address takes precedence over the FQDN.
Note:
- You must configure or remove the apiPrefix parameter based on whether NRF supports the APIPrefix.
- You must update the FQDN, IP address, and port of NRF to point to NRF's FQDN or IP address and port. The primary NRF profile must always be set to the higher priority, that is, 0. Ensure that the priority values of the primary and secondary profiles are not set to the same value.
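For orientation, a hypothetical shape of such an NRF entry in values.yaml is sketched below. The key names (fqdn, ipv4Address, port, priority, apiPrefix) are illustrative assumptions, not the literal SCP schema, so map them onto the actual parameters in your file:
nrfDetails:
  - fqdn: nrf1.nrfsvc.svc.cluster.local   # usable when NRF is inside the cluster
    ipv4Address: 10.0.0.10                # takes precedence over FQDN if both are set
    port: 80
    priority: 0                           # primary NRF: higher priority (0)
    apiPrefix: ""                         # remove if NRF does not support apiPrefix
  - fqdn: nrf2.nrfsvc.svc.cluster.local
    port: 80
    priority: 1                           # secondary NRF: lower priority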
2.2.5 Configuring SCP as HTTP Proxy
To configure SCP as an HTTP proxy, set <FQDN or IP Address>:<PORT of SCP-Worker> of scp-worker in the http_proxy/HTTP_PROXY configuration.
Note:
Run the following commands from a host where the SCP-Worker and the FQDN can be accessed.
- To test the successful deployment of SCP, run the following curl command:
$ curl -v -X GET --url 'http://<FQDN:PORT of SCP-Worker>/nnrf-nfm/v1/subscriptions/' --header 'Host:<FQDN:PORT of NRF>'
- Fetch the current subscription list as a client from NRF by
sending the request to NRF through SCP:
Example:
$ curl -v -X GET --url 'http://scp-worker.scpsvc:8000/nnrf-nfm/v1/subscriptions/' --header 'Host:ocnrf-ambassador.nrfsvc:80'
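Equivalently, because SCP-Worker acts as the HTTP proxy, the same request can be sent through the standard proxy environment variable; this is an illustrative usage based on common curl behavior, reusing the service address from the example above:
$ export http_proxy=http://scp-worker.scpsvc:8000
$ curl -v -X GET --url 'http://ocnrf-ambassador.nrfsvc:80/nnrf-nfm/v1/subscriptions/'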
2.2.6 Configuring Multus Container Network Interface
Note:
To verify whether this feature is enabled, see "Verifying the Availability of Multus Container Network Interface" in Oracle Communications Cloud Native Core, Service Communication Proxy Troubleshooting Guide.
2.2.7 Adding and Removing IP-based Signaling Services
The following subsections describe how to add and remove IP-based Signaling Services as part of the Support for Multiple Signaling Service IPs feature.
2.2.7.1 Adding a Signaling Service
Perform the following procedure to add an IP-based signaling service.
2.2.7.2 Removing a Signaling Service
Perform the following procedure to remove an IP-based signaling service.