2 Installing SEPP

This chapter provides information about installing SEPP in a cloud native environment using Continuous Delivery Control Server (CDCS) or Command Line Interface (CLI) procedures.

CDCS is a centralized server that automates SEPP, Remote Hub, or Hosted SEPP deployment processes. CLI provides an interface to run various commands required for SEPP deployment processes.

Note:

SEPP supports fresh installation, and it can also be upgraded from 23.1.x and 23.2.x. For more information on how to upgrade SEPP, see the Upgrading SEPP section.

The user can install either SEPP or Roaming Hub/Hosted SEPP. The installation procedure comprises prerequisites, predeployment configuration, installation, and postinstallation tasks. You must perform the installation tasks in the same sequence as outlined in the following table:

Note:

The postinstallation configurations such as ASM configuration, inter NF communication, OSO installation, and Helm test remain the same for CDCS and CLI installation methods.

Table 2-1 SEPP or Roaming Hub/Hosted SEPP Installation Sequence

Task Sub tasks Reference Links Applicable for SEPP Installation Applicable for Roaming Hub and Hosted SEPP Installation
Prerequisites: This section describes how to set up the installation environment.   Prerequisites Yes Yes
  Software Requirements Software Requirements Yes Yes
  Environment Setup Requirements Environment Setup Requirements Yes Yes
  Resource Requirements Resource Requirements SEPP Resource Requirements Roaming Hub or Hosted SEPP Resource Requirements
Predeployment Configuration: This section describes how to create namespace and database and configure Kubernetes secrets.   Preinstallation Tasks Yes Yes
  Verifying and Creating SEPP Namespace Verifying and Creating SEPP Namespace Yes Yes
  Configuring Database, Creating Users, and Granting Permissions Configuring Database, Creating Users, and Granting Permissions Yes Yes
  Configuring Kubernetes Secrets for Accessing SEPP Database Configuring Kubernetes Secrets for Accessing SEPP Database Yes Yes
  Configuring Kubernetes Secret for Enabling HTTPS/HTTP over TLS Configuring Kubernetes Secret for Enabling HTTPS/ HTTP over TLS Yes Yes
Installation Tasks: This section describes how to download the SEPP package, install SEPP, and verify the installation.   Installation Tasks    
Downloading SEPP package   Downloading SEPP package Yes Yes
Installing SEPP / Roaming Hub   Installing SEPP/Roaming Hub/Hosted SEPP Installing SEPP Installing Roaming Hub or Hosted SEPP
Verifying SEPP Installation   Verifying SEPP Installation Yes Yes
PodDisruptionBudget Kubernetes Resource   PodDisruptionBudget Kubernetes Resource Yes Yes
Customizing SEPP   Customizing SEPP Yes Yes
Upgrading SEPP   Upgrading SEPP Yes Yes
Rollback SEPP deployment   Rollback SEPP deployment Yes Yes
Uninstalling SEPP   Uninstalling SEPP Yes Yes

2.1 Prerequisites

Before installing and configuring SEPP, ensure that the following prerequisites are met.

2.1.1 Software Requirements

This section lists the software that must be installed before installing SEPP:

Table 2-2 Preinstalled Software

Software Versions
Kubernetes 1.27.x, 1.26.x, 1.25.x
Helm 3.12.x, 3.8.x, 3.6.3
Podman 4.4.1, 4.2.0, 4.0.2
To check the versions for the preinstalled software in the cloud native environment, run the following commands:
kubectl version
helm version 
podman version 
docker version 
The following software is available by default if SEPP is deployed in CNE. If you are deploying SEPP in any other cloud native environment, this additional software must be installed before installing SEPP. To check the installed software, run the following command:
helm ls -A

The list of additional software items, along with the supported versions and usage, is provided in the following table:

Table 2-3 Additional Software

Software Version Required for
containerd 1.7.5 Container runtime
Calico 3.25.2 Networking
MetalLB 0.13.11 Load balancing
Prometheus 2.44.0 Metrics
Grafana 9.5.3 Metrics
Jaeger 1.45.0 Tracing
Istio 1.18.2 Service mesh
Kyverno 1.9.0 Policy management
cert-manager 1.12.4 Certificate management
Oracle OpenSearch 2.3.0 Logging
Oracle OpenSearch Dashboard 2.3.0 Logging
Fluentd OpenSearch 1.16.2 Logging
Velero 1.12.0 Backup and restore

2.1.2 Environment Setup Requirements

This section describes the environment setup requirements for installing SEPP.

2.1.2.1 Client Machine Requirements

This section describes the requirements for the client machine, that is, the machine used by the user to run deployment commands.

The client machine should have:
  • Helm repository configured.
  • network access to the Helm repository and Docker image repository.
  • network access to the Kubernetes cluster.
  • required environment settings to run the kubectl, podman, and docker commands. The environment should have privileges to create namespace in the Kubernetes cluster.
  • Helm client installed with the push plugin. Configure the environment in such a manner that the helm install command deploys the software in the Kubernetes cluster.
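
For example, to confirm the client machine setup described above, you can run checks such as the following (the repository name and URL are placeholders, not values from this guide):

  helm plugin list
  helm repo add <helm-repo-name> <helm-repo-url>
  helm repo list
  kubectl auth can-i create namespaces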

Note:

The recommended Helm release name is ocsepp-release. If you configure a different release name during installation, see the Deployment Configuration for Config-mgr-svc section.
2.1.2.2 Network Access Requirement

The Kubernetes cluster hosts must have network access to the following repositories:

  • Local Helm repository: It contains the SEPP helm charts.
    To check if the Kubernetes cluster hosts have network access to the local helm repository, run the following command:
    helm repo update
  • Local Docker image repository: It contains the SEPP Docker images.
    To check if the Kubernetes cluster hosts can access the local docker image repository, pull any image with an image-tag using either of the following commands:
    docker pull <docker-repo>/<image-name>:<image-tag>
    podman pull <podman-repo>/<image-name>:<image-tag>
    Where:
    • <docker-repo> is the IP address or host name of the Docker repository.
    • <podman-repo> is the IP address or host name of the Podman repository.
    • <image-name> is the Docker image name.
    • <image-tag> is the tag assigned to the Docker image used for the SEPP pod.

    Example:

    docker pull CUSTOMER_REPO/oc-app-info:23.4.0
    podman pull occne-repo-host:5000/occnp/oc-app-info:23.4.0

Note:

Run the kubectl and helm commands on a system based on the deployment infrastructure. For instance, they can be run on a client machine such as a VM, server, or local desktop.

2.1.2.3 Server or Space Requirement

For information about server or space requirements, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

2.1.2.4 CNE Requirement

This section is applicable only if you are installing SEPP on Cloud Native Environment (CNE).

SEPP supports CNE 23.4.x, 23.3.x, and 23.2.x.

To check the CNE version, run the following command:

echo $OCCNE_VERSION

For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

2.1.2.5 cnDBTier Requirement

SEPP supports cnDBTier 23.4.0. cnDBTier must be configured and running before installing SEPP. For more information about cnDBTier installation, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

Note:

If cnDBTier version 23.2.x or later is used during installation, set the ndb_allow_copying_alter_table parameter to 'ON' in the occndbtier_custom_values_23.x.y.yaml file before installing SEPP.
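
For illustration only, the parameter could appear in the cnDBTier custom values file as in the following hypothetical fragment; the exact location of the parameter within the file depends on the cnDBTier release, so refer to the cnDBTier guide for the authoritative structure:

  # Hypothetical fragment of occndbtier_custom_values_23.x.y.yaml
  additionalndbconfigurations:
    mysqld:
      ndb_allow_copying_alter_table: 'ON'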

2.1.2.6 OSO Requirement

SEPP supports Operations Services Overlay (OSO) 23.4.x for common operation services (Prometheus and components such as alertmanager, pushgateway) on a Kubernetes cluster, which does not have these common services. For more information about OSO installation, see Oracle Communications Cloud Native Core, Operations Services Overlay Installation and Upgrade Guide.

2.1.2.7 CNC Console Requirements

SEPP supports CNC Console 23.4.x to configure and manage Network Functions. For more information, see Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.

2.1.2.8 OCCM Requirements

SEPP supports OCCM 24.2.x. To support automated certificate lifecycle management, SEPP integrates with Oracle Communications Cloud Native Core, Certificate Management (OCCM) in compliance with 3GPP security recommendations. For more information about OCCM in SEPP, see the "Support for Automated Certificate Lifecycle Management" section in Oracle Communications Cloud Native Core, Security Edge Protection Proxy User Guide.

For more information about OCCM, see the following guides:

  • Oracle Communications Cloud Native Core, Certificate Manager Installation, Upgrade, and Fault Recovery Guide
  • Oracle Communications Cloud Native Core, Certificate Manager User Guide
2.1.2.9 OCI Requirements

SEPP can be deployed in OCI.

While deploying SEPP in OCI, the user must use the Operator instance/VM instead of Bastion Host.

For more information about OCI deployment, see Oracle Communications Cloud Native Core, OCI Adaptor Deployment Guide.

2.1.3 SEPP Resource Requirements

This section lists the resource requirements to install and run SEPP.

Note:

The performance and capacity of the SEPP system may vary based on the call model, Feature/Interface configuration, and underlying CNE and hardware environment.
2.1.3.1 SEPP Services

The following table lists the resource requirements for SEPP Services:

Table 2-4 SEPP Services

Service Name CPU Memory (GB) POD Ephemeral Storage
Min Max Min Max Min Max Min(Gi) Max(Gi)
Helm Test 1 1 1 1 1 1 70Mi 1
Helm Hook 1 1 1 1 1 1 0 1
<helm-release-name>-n32-ingress-gateway 6 6 5 5 8 8 1 1
<helm-release-name>-n32-egress-gateway 5 5 5 5 8 8 1 1
<helm-release-name>-plmn-ingress-gateway 5 5 5 5 8 8 1 1
<helm-release-name>-plmn-egress-gateway 5 5 5 5 8 8 1 1
<helm-release-name>-pn32f-svc 5 5 8 8 6 6 2 2
<helm-release-name>-cn32f-svc 5 5 8 8 6 6 2 2
<helm-release-name>-cn32c-svc 2 2 2 2 2 2 1 1
<helm-release-name>-pn32c-svc 2 2 2 2 2 2 1 1
<helm-release-name>-config-mgr-svc 2 2 2 2 1 1 1 1
<helm-release-name>-sepp-nrf-client-nfdiscovery 1 1 2 2 2 2 1 1
<helm-release-name>-sepp-nrf-client-nfmanagement 1 1 1 1 1 1 1 1
<helm-release-name>-ocpm-config 1 1 1 1 2 2 1 1
<helm-release-name>-appinfo 1 1 1 2 2 2 1 1
<helm-release-name>-perf-info 2 2 200Mi 4 2 2 1 1
<helm-release-name>-nf-mediation 8 8 8 8 2 2 NA NA
<helm-release-name>-coherence-svc 1 1 2 2 1 1 NA NA
<helm-release-name>-alternate-route 2 2 4 4 2 2 NA NA
Total 56 56 63.200 68 65 65 16.7 Gi 18

Note:

  • <helm-release-name> is prefixed to each microservice name. Example: if the Helm release name is "ocsepp", then the cn32f-svc microservice name will be "ocsepp-cn32f-svc".
  • The Init-service container's and Common Configuration Client Hook's resources are not counted because these containers are terminated after initialization completes.
  • Helm Hooks Jobs: These are pre and post jobs that are invoked during installation, upgrade, rollback, and uninstallation of the deployment. These are short-lived jobs that are terminated after the deployment completes.
  • Helm Test Job: This job runs on demand when the helm test command is initiated. It runs the Helm test and stops after completion. These short-lived jobs are terminated once done; they are not part of the active deployment resources and are considered only during Helm test procedures.
2.1.3.2 Upgrade

Following is the resource requirement for upgrading SEPP:

Table 2-5 Upgrade

Service Name CPU Memory (GB) POD Ephemeral Storage
Min Max Min Max Min Max Min(Gi) Max(Gi)
Helm test 0 0 0 0 0 0 0 0
Helm Hook 0 0 0 0 0 0 0 0
<helm-release-name>-n32-ingress-gateway 6 6 5 5 1 2 1 1
<helm-release-name>-n32-egress-gateway 5 5 5 5 1 2 1 1
<helm-release-name>-plmn-ingress-gateway 5 5 5 5 1 2 1 1
<helm-release-name>-plmn-egress-gateway 5 5 5 5 1 2 1 1
<helm-release-name>-pn32f-svc 5 5 8 8 1 2 2 1
<helm-release-name>-cn32f-svc 5 5 8 8 1 2 2 1
<helm-release-name>-cn32c-svc 2 2 2 2 1 1 1 1
<helm-release-name>-pn32c-svc 2 2 2 2 1 1 1 1
<helm-release-name>-config-mgr-svc 2 2 2 2 1 1 1 1
<helm-release-name>-sepp-nrf-client-nfdiscovery 1 1 2 2 1 1 1 1
<helm-release-name>-sepp-nrf-client-nfmanagement 1 1 1 1 1 1 1 1
<helm-release-name>-ocpm-config 1 1 1 1 1 1 1 1
<helm-release-name>-appinfo 1 1 1 2 1 1 1 1
<helm-release-name>-perf-info 2 2 200Mi 4 1 1 1 1
<helm-release-name>-nf-mediation 8 8 8 8 1 1 1 1
<helm-release-name>-coherence-svc 1 1 2 2 1 1 NA NA
<helm-release-name>-alternate-route 2 2 4 4 1 1 NA NA
Total 54 54 61.2 66 17 23 17 15 Gi

Note:

<helm-release-name> is the Helm release name. Example: if helm release name is "ocsepp", then cn32f-svc microservice name will be "ocsepp-cn32f-svc"
2.1.3.3 Common Services Container

Following is the resource requirement for Common Services Container:

Table 2-6 Common Services Container

Container Name CPU Memory (GB) Kubernetes Init Container
init-service 1 1 Y
common_config_hook 1 1 N

Note:

  • Update Container service: Ingress or Egress Gateway services use this container service to periodically refresh the NRF Private Key or Certificate and CA Root Certificate for TLS.
  • Init Container service: Ingress or Egress Gateway services use this container to get the NRF Private Key or Certificate and CA Root Certificate for TLS during start up.
  • Common Configuration Hook: It is used for creating the database for common service configuration.
2.1.3.4 ASM Sidecar

SEPP leverages the Platform Service Mesh (for example, Aspen Service Mesh) for all internal and external TLS communication. If ASM sidecar injection is enabled during SEPP deployment or upgrade, this container is injected into each pod (or selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists.
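
As an illustrative sketch only (the exact injection mechanism depends on the ASM release and the deployment option chosen), Istio-based meshes typically enable namespace-wide sidecar injection with a label such as:

  kubectl label namespace seppsvc istio-injection=enabled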

Table 2-7 ASM Sidecar

Service Name CPU Memory (GB) Ephemeral Storage
Min Max Min Max Min(Mi) Max(Gi)
<helm-release-name>-n32-ingress-gateway 2 2 1 1 70 1
<helm-release-name>-n32-egress-gateway 2 2 1 1 70 1
<helm-release-name>-plmn-ingress-gateway 2 2 1 1 70 1
<helm-release-name>-plmn-egress-gateway 2 2 1 1 70 1
<helm-release-name>-pn32f-svc 2 2 1 1 70 1
<helm-release-name>-cn32f-svc 2 2 1 1 70 1
<helm-release-name>-cn32c-svc 2 2 1 1 70 1
<helm-release-name>-pn32c-svc 2 2 1 1 70 1
<helm-release-name>-config-mgr-svc 2 2 1 1 70 1
<helm-release-name>-sepp-nrf-client-nfdiscovery 2 2 1 1 70 1
<helm-release-name>-sepp-nrf-client-nfmanagement 2 2 1 1 70 1
<helm-release-name>-ocpm-config 2 2 1 1 70 1
<helm-release-name>-appinfo 2 2 1 1 70 1
<helm-release-name>-perf-info 2 2 1 1 70 1
<helm-release-name>-nf-mediation 2 2 1 1 70 1
<helm-release-name>-coherence-svc 2 2 1 1 NA NA
<helm-release-name>-alternate-route 2 2 1 1 NA NA
Total 34 34 17 17 1050 Mi 15 Gi

Note:

<helm-release-name> is the Helm release name. Example: if helm release name is "ocsepp", then cn32f-svc microservice name will be "ocsepp-cn32f-svc"
2.1.3.5 Debug Tool Container

The Debug Tool provides third-party troubleshooting tools for debugging runtime issues in a lab environment. If Debug Tool Container injection is enabled during SEPP deployment or upgrade, this container is injected into each SEPP pod (or selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists. For more information about configuring the Debug Tool, see Oracle Communications Cloud Native Core, Security Edge Protection Proxy Troubleshooting Guide.

Table 2-8 Debug Tool Container

Service Name CPU Memory (GB) Ephemeral Storage
Min Max Min(Gi) Max(Gi) Min(Mi) Max(Mi)
Helm Test 0 0 0 0 512 512
Helm Hook 0 0 0 0 512 512
<helm-release-name>-n32-ingress-gateway 0.5 1 4 4 512 512
<helm-release-name>-n32-egress-gateway 0.5 1 4 4 512 512
<helm-release-name>-plmn-ingress-gateway 0.5 1 4 4 512 512
<helm-release-name>-plmn-egress-gateway 0.5 1 4 4 512 512
<helm-release-name>-pn32f-svc 0.5 1 4 4 512 512
<helm-release-name>-cn32f-svc 0.5 1 4 4 512 512
<helm-release-name>-cn32c-svc 0.5 1 4 4 512 512
<helm-release-name>-pn32c-svc 0.5 1 4 4 512 512
<helm-release-name>-config-mgr-svc 0.5 1 4 4 512 512
<helm-release-name>-sepp-nrf-client-nfdiscovery 0.5 1 4 4 512 512
<helm-release-name>-sepp-nrf-client-nfmanagement 0.5 1 4 4 512 512
<helm-release-name>-ocpm-config 0.5 1 4 4 512 512
<helm-release-name>-appinfo 0.5 1 4 4 512 512
<helm-release-name>-perf-info 0.5 1 4 4 512 512
<helm-release-name>-nf-mediation 0.5 1 4 4 512 512
<helm-release-name>-coherence-svc NA NA NA NA NA NA
<helm-release-name>-alternate-route 0.5 1 4 4 NA NA
Total 8 16 64 64 8704 Mi 8704 Mi

Note:

<helm-release-name> is the Helm release name. For example, if the Helm release name is "ocsepp", then the plmn-egress-gateway microservice name will be "ocsepp-plmn-egress-gateway".

2.1.3.6 SEPP Hooks

Following is the resource requirement for SEPP Hooks:

Table 2-9 SEPP Hooks

Hook Name CPU Memory (GB)
  Min Max Min Max
<helm-release-name>-update-db-pre-install 1 1 1 1
<helm-release-name>-update-db-post-install 1 1 1 1
<helm-release-name>-update-db-pre-upgrade 1 1 1 1
<helm-release-name>-update-db-post-upgrade 1 1 1 1
<helm-release-name>-update-db-pre-rollback 1 1 1 1
<helm-release-name>-update-db-post-rollback 1 1 1 1
<helm-release-name>-pn32f-svc-pre-install 1 1 1 1
<helm-release-name>-pn32f-svc-post-install 1 1 1 1
<helm-release-name>-pn32f-svc-pre-upgrade 1 1 1 1
<helm-release-name>-pn32f-svc-post-upgrade 1 1 1 1
<helm-release-name>-pn32f-svc-pre-rollback 1 1 1 1
<helm-release-name>-pn32f-svc-post-rollback 1 1 1 1
<helm-release-name>-cn32f-svc-pre-install 1 1 1 1
<helm-release-name>-cn32f-svc-post-install 1 1 1 1
<helm-release-name>-cn32f-svc-pre-upgrade 1 1 1 1
<helm-release-name>-cn32f-svc-post-upgrade 1 1 1 1
<helm-release-name>-cn32f-svc-pre-rollback 1 1 1 1
<helm-release-name>-cn32f-svc-post-rollback 1 1 1 1
<helm-release-name>-cn32c-svc-pre-install 1 1 1 1
<helm-release-name>-cn32c-svc-post-install 1 1 1 1
<helm-release-name>-cn32c-svc-pre-upgrade 1 1 1 1
<helm-release-name>-cn32c-svc-post-upgrade 1 1 1 1
<helm-release-name>-cn32c-svc-pre-rollback 1 1 1 1
<helm-release-name>-cn32c-svc-post-rollback 1 1 1 1
<helm-release-name>-pn32c-svc-pre-install 1 1 1 1
<helm-release-name>-pn32c-svc-post-install 1 1 1 1
<helm-release-name>-pn32c-svc-pre-upgrade 1 1 1 1
<helm-release-name>-pn32c-svc-post-upgrade 1 1 1 1
<helm-release-name>-pn32c-svc-pre-rollback 1 1 1 1
<helm-release-name>-pn32c-svc-post-rollback 1 1 1 1
<helm-release-name>-config-mgr-svc-pre-install 1 1 1 1
<helm-release-name>-config-mgr-svc-post-install 1 1 1 1
<helm-release-name>-config-mgr-svc-pre-upgrade 1 1 1 1
<helm-release-name>-config-mgr-svc-post-upgrade 1 1 1 1
<helm-release-name>-config-mgr-svc-pre-rollback 1 1 1 1
<helm-release-name>-config-mgr-svc-post-rollback 1 1 1 1

Note:

<helm-release-name> is the Helm release name.

2.1.4 Roaming Hub or Hosted SEPP Resource Requirements

This section lists the resource requirements to install and run Roaming Hub or Hosted SEPP.

2.1.4.1 Roaming Hub or Hosted SEPP Services

The following table lists the resource requirements for SEPP Services for Roaming Hub or Hosted SEPP:

Table 2-10 SEPP Services for Roaming Hub or Hosted SEPP

Service Name CPU Memory (GB) POD Ephemeral Storage
Min Max Min Max Min Max Min(Mi) Max(Gi)
Helm Test 1 1 1 1 1 1 70 1
Helm Hook 1 1 1 1 1 1 0 1
<helm-release-name>-n32-ingress-gateway 6 6 5 5 2 2 70 1
<helm-release-name>-n32-egress-gateway 5 5 5 5 2 2 70 1
<helm-release-name>-plmn-ingress-gateway 5 5 5 5 2 2 70 1
<helm-release-name>-plmn-egress-gateway 5 5 5 5 2 2 70 1
<helm-release-name>-pn32f-svc 5 5 8 8 2 2 70 1
<helm-release-name>-cn32f-svc 5 5 8 8 2 2 70 1
<helm-release-name>-cn32c-svc 2 2 2 2 2 2 70 1
<helm-release-name>-pn32c-svc 2 2 2 2 2 2 70 1
<helm-release-name>-config-mgr-svc 2 2 2 2 1 1 70 1
<helm-release-name>-perf-info 2 2 200Mi 4 2 2 1 1
<helm-release-name>-nf-mediation 8 8 8 8 2 2 NA NA
<helm-release-name>-appinfo 1 1 1 2 2 2 1 1
Total 50 50 53.2 58 25 25 702 Mi 13 Gi

Note:

  • <helm-release-name> is prefixed to each microservice name. Example: if the Helm release name is "ocsepp", then the cn32f-svc microservice name will be "ocsepp-cn32f-svc".
  • The Init-service container's and Common Configuration Client Hook's resources are not counted because these containers are terminated after initialization completes.
  • Helm Hooks Jobs: These are pre and post jobs that are invoked during installation, upgrade, rollback, and uninstallation of the deployment. These are short-lived jobs that are terminated after the deployment completes.
  • Helm Test Job: This job runs on demand when the helm test command is initiated. It runs the Helm test and stops after completion. These short-lived jobs are terminated once done; they are not part of the active deployment resources and are considered only during Helm test procedures.
2.1.4.2 Upgrade

Following is the resource requirement for upgrading Roaming Hub or Hosted SEPP:

Table 2-11 Upgrade

Service Name CPU Memory (GB) POD Ephemeral Storage
Min Max Min Max Min Max Min(Mi) Max(Gi)
Helm test 0 0 0 0 0 0 0 0
Helm Hook 0 0 0 0 0 0 0 0
<helm-release-name>-n32-ingress-gateway 6 6 5 5 1 2 70 1
<helm-release-name>-n32-egress-gateway 5 5 5 5 1 2 70 1
<helm-release-name>-plmn-ingress-gateway 5 5 5 5 1 2 70 1
<helm-release-name>-plmn-egress-gateway 5 5 5 5 1 2 70 1
<helm-release-name>-pn32f-svc 5 5 8 8 1 2 70 1
<helm-release-name>-cn32f-svc 5 5 8 8 1 3 70 1
<helm-release-name>-cn32c-svc 2 2 2 2 1 1 70 1
<helm-release-name>-pn32c-svc 2 2 2 2 1 1 70 1
<helm-release-name>-config-mgr-svc 2 2 2 2 1 1 70 1
<helm-release-name>-perf-info 2 2 200Mi 4 1 1 70 1
<helm-release-name>-nf-mediation 8 8 8 8 1 1 70 1
<helm-release-name>-appinfo 1 1 1 2 1 1 70 1
Total 48 48 51.2 56 12 19 840 Mi 12 Gi

Note:

<helm-release-name> is the Helm release name. Example: if Helm release name is "ocsepp", then cn32f-svc microservice name will be "ocsepp-cn32f-svc"
2.1.4.3 Common Services Container

Following is the resource requirement for Common Services Container:

Table 2-12 Common Services Container

Container Name CPU Memory (GB) Kubernetes Init Container
init-service 1 1 Y
common_config_hook 1 1 N

Note:

  • Update Container service: Ingress or Egress Gateway services use this container service to periodically refresh NRF Private Key or Certificate and CA Root Certificate for TLS.
  • Init Container service: Ingress or Egress Gateway services use this container to get NRF Private Key or Certificate and CA Root Certificate for TLS during start up.
  • Common Configuration Hook: It is used for creating database for common service configuration.
2.1.4.4 ASM Sidecar

Note:

In Roaming Hub or Hosted SEPP mode, ASM is not supported.
2.1.4.5 Debug Tool Container

The Debug Tool provides third-party troubleshooting tools for debugging runtime issues in a lab environment. If Debug Tool Container injection is enabled during Roaming Hub/Hosted SEPP deployment or upgrade, this container is injected into each Roaming Hub/Hosted SEPP pod (or selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists. For more information about configuring the Debug Tool, see Oracle Communications Cloud Native Core, Security Edge Protection Proxy Troubleshooting Guide.

Table 2-13 Debug Tool Container

Service Name CPU Memory (GB) Ephemeral Storage
Min Max Min(Gi) Max(Gi) Min(Mi) Max(Mi)
Helm Test 0 0 0 0 512 512
Helm Hook 0 0 0 0 512 512
<helm-release-name>-n32-ingress-gateway 0.5 1 4 4 512 512
<helm-release-name>-n32-egress-gateway 0.5 1 4 4 512 512
<helm-release-name>-plmn-ingress-gateway 0.5 1 4 4 512 512
<helm-release-name>-plmn-egress-gateway 0.5 1 4 4 512 512
<helm-release-name>-pn32f-svc 0.5 1 4 4 512 512
<helm-release-name>-cn32f-svc 0.5 1 4 4 512 512
<helm-release-name>-cn32c-svc 0.5 1 4 4 512 512
<helm-release-name>-pn32c-svc 0.5 1 4 4 512 512
<helm-release-name>-config-mgr-svc 0.5 1 4 4 512 512
<helm-release-name>-perf-info 0.5 1 4 4 512 512
<helm-release-name>-nf-mediation 0.5 1 4 4 512 512
<helm-release-name>-appinfo 0.5 1 4 4 512 512
Total 6 12 48 48 7168 Mi 7168 Mi

Note:

<helm-release-name> is the Helm release name. Example: if helm release name is "ocsepp", then cn32f-svc microservice name will be "ocsepp-cn32f-svc"
2.1.4.6 SEPP Hooks

Following is the resource requirement for SEPP Hooks.

Table 2-14 SEPP Hooks

Hook Name CPU Memory (GB)
  Min Max Min Max
<helm-release-name>-update-db-pre-install 1 1 1 1
<helm-release-name>-update-db-post-install 1 1 1 1
<helm-release-name>-update-db-pre-upgrade 1 1 1 1
<helm-release-name>-update-db-post-upgrade 1 1 1 1
<helm-release-name>-update-db-pre-rollback 1 1 1 1
<helm-release-name>-update-db-post-rollback 1 1 1 1
<helm-release-name>-pn32f-svc-pre-install 1 1 1 1
<helm-release-name>-pn32f-svc-post-install 1 1 1 1
<helm-release-name>-pn32f-svc-pre-upgrade 1 1 1 1
<helm-release-name>-pn32f-svc-post-upgrade 1 1 1 1
<helm-release-name>-pn32f-svc-pre-rollback 1 1 1 1
<helm-release-name>-pn32f-svc-post-rollback 1 1 1 1
<helm-release-name>-cn32f-svc-pre-install 1 1 1 1
<helm-release-name>-cn32f-svc-post-install 1 1 1 1
<helm-release-name>-cn32f-svc-pre-upgrade 1 1 1 1
<helm-release-name>-cn32f-svc-post-upgrade 1 1 1 1
<helm-release-name>-cn32f-svc-pre-rollback 1 1 1 1
<helm-release-name>-cn32f-svc-post-rollback 1 1 1 1
<helm-release-name>-cn32c-svc-pre-install 1 1 1 1
<helm-release-name>-cn32c-svc-post-install 1 1 1 1
<helm-release-name>-cn32c-svc-pre-upgrade 1 1 1 1
<helm-release-name>-cn32c-svc-post-upgrade 1 1 1 1
<helm-release-name>-cn32c-svc-pre-rollback 1 1 1 1
<helm-release-name>-cn32c-svc-post-rollback 1 1 1 1
<helm-release-name>-pn32c-svc-pre-install 1 1 1 1
<helm-release-name>-pn32c-svc-post-install 1 1 1 1
<helm-release-name>-pn32c-svc-pre-upgrade 1 1 1 1
<helm-release-name>-pn32c-svc-post-upgrade 1 1 1 1
<helm-release-name>-pn32c-svc-pre-rollback 1 1 1 1
<helm-release-name>-pn32c-svc-post-rollback 1 1 1 1
<helm-release-name>-config-mgr-svc-pre-install 1 1 1 1
<helm-release-name>-config-mgr-svc-post-install 1 1 1 1
<helm-release-name>-config-mgr-svc-pre-upgrade 1 1 1 1
<helm-release-name>-config-mgr-svc-post-upgrade 1 1 1 1
<helm-release-name>-config-mgr-svc-pre-rollback 1 1 1 1
<helm-release-name>-config-mgr-svc-post-rollback 1 1 1 1

Note:

<helm-release-name> is the Helm release name.

2.2 Installation Sequence

This section describes preinstallation, installation, and postinstallation tasks for SEPP (SEPP, Roaming Hub, or Hosted SEPP).

You must perform these tasks after completing prerequisites and in the same sequence as outlined in the following table for CDCS and CLI installation methods as applicable.

Note:

This section does not provide instructions to download the SEPP package and install SEPP using CDCS. For more information, see Oracle Communications CD Control Server Installation and Upgrade Guide and Oracle Communications CD Control Server User Guide. The postinstallation configurations such as ASM configuration, inter NF communication, OSO installation, Helm test, and so on, remain the same for CDCS and CLI installation methods.
SEPP Installation Sequence

Table 2-15 SEPP Installation Sequence

Sequence Installation Task Reference Applicable for CDCS Applicable for CLI
1 Preinstallation tasks Predeployment Configuration Yes Yes
2 Installation tasks Installation Tasks See Oracle Communications Continuous Delivery Control Server User Guide Yes
3 Verification of installation Verifying SEPP Installation Yes Yes

2.2.1 Preinstallation Tasks

Before installing SEPP, perform the tasks described in this section.

2.2.1.1 Downloading SEPP package

Perform the following procedure to download the Oracle Communications Cloud Native Core, Security Edge Protection Proxy (SEPP) release package from My Oracle Support:

  1. Log in to My Oracle Support using the appropriate credentials.
  2. Click the Patches & Updates tab to locate the patch.
  3. In the Patch Search console, click the Product or Family (Advanced) tab.
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field and select SEPP from the Product drop-down list.
  5. From the Release drop-down, select "Oracle Communications Cloud Native Core Security Edge Protection Proxy <release_number>".

    Where, <release_number> indicates the required release number of Cloud Native Core, Security Edge Protection Proxy.
  6. Click Search. The Patch Advanced Search Results list appears.
  7. Select the required patch from the list. The Patch Details window appears.
  8. Click Download. The File Download window appears.
  9. Click the <p********>_<release_number>_Tekelec.zip file to download the package. Where, <p********> is the MOS patch number and <release_number> is the release number of SEPP.
2.2.1.2 Pushing the SEPP Images to Customer Docker Registry

The SEPP deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.

The following table lists the docker images for SEPP:

Table 2-16 SEPP Images

Services Image Tag
<helm-release-name>-appinfo appinfo 23.4.12
<helm-release-name>-cn32c-svc cn32c-svc 23.4.3
<helm-release-name>-cn32f-svc cn32f-svc 23.4.3
<helm-release-name>-config-mgr-svc config-mgr-svc 23.4.3
<helm-release-name>-oc-config-server oc-config-server 23.4.12
<helm-release-name>-n32-egress-gateway n32-egress-gateway 23.4.10
<helm-release-name>-n32-ingress-gateway n32-ingress-gateway 23.4.10
<helm-release-name>-ocpm-config ocpm-config 23.4.3
<helm-release-name>-oc-perf-info oc-perf-info 23.4.12
<helm-release-name>-plmn-egress-gateway plmn-egress-gateway 23.4.10
<helm-release-name>-plmn-ingress-gateway plmn-ingress-gateway 23.4.10
<helm-release-name>-pn32c-svc pn32c-svc 23.4.3
<helm-release-name>-pn32f-svc pn32f-svc 23.4.3
<helm-release-name>-sepp-nrf-client-nfdiscovery sepp-nrf-client-nfdiscovery 23.4.7
<helm-release-name>-sepp-nrf-client-nfmanagement sepp-nrf-client-nfmanagement 23.4.7
<helm-release-name>-ocsepp-pre-install-hook ocsepp-pre-install-hook 23.4.3
<helm-release-name>-nf-mediation nf-mediation 23.4.3
<helm-release-name>-nf-test nf-test 23.4.3
<helm-release-name>-ocdebugtool/ocdebug-tools ocdebugtool/ocdebug-tools 23.4.3
<helm-release-name>-coherence-svc coherence-svc 23.4.3
<helm-release-name>-alternate-route alternate-route 23.4.10

To push the images to the registry:

  1. Navigate to the location where you want to install SEPP. Unzip the SEPP release package (<p********>_<release_number>_Tekelec.zip) to retrieve the following CSAR package.

    The SEPP package is as follows:

    ReleaseName-csar-Releasenumber.zip

    Where:
    • ReleaseName is a name that is used to track this installation instance.
    • Releasenumber is the release number.

      For example:
    ocsepp_csar_23_4_3_0_0.zip
  2. Unzip the SEPP package file to get the SEPP docker image tar files:
    unzip ReleaseName-csar-Releasenumber.zip
    For example:
    unzip ocsepp_csar_23_4_3_0_0.zip
  3. The extracted ocsepp_csar_23_4_3_0_0 directory consists of the following:
    ├── Definitions
    │   ├── ocsepp_cne_compatibility.yaml
    │   └── ocsepp.yaml
    ├── Files
    │   ├── alternate_route-23.4.5.tar
    │   ├── ChangeLog.txt
    │   ├── common_config_hook-23.4.5.tar
    │   ├── configurationinit-23.4.5.tar
    │   ├── configurationupdate-23.4.5.tar
    │   ├── Helm
    │   │   ├── ocsepp-23.4.3.tgz
    │   │   ├── ocsepp-network-policy-23.4.3.tgz
    │   │   └── ocsepp-servicemesh-config-23.4.3.tgz
    │   ├── Licenses
    │   ├── mediation-ocmed-nfmediation-23.4.3.tar
    │   ├── nf_test-23.4.3.tar
    │   ├── nrf-client-23.4.5.tar
    │   ├── occnp-oc-app-info-23.4.3.tar
    │   ├── occnp-oc-config-server-23.4.3.tar
    │   ├── occnp-oc-perf-info-23.4.3.tar
    │   ├── ocdebugtool-ocdebug-tools-23.4.3.tar
    │   ├── ocegress_gateway-23.4.5.tar
    │   ├── ocingress_gateway-23.4.5.tar
    │   ├── ocsepp-cn32c-svc-23.4.3.tar
    │   ├── ocsepp-cn32f-svc-23.4.3.tar
    │   ├── ocsepp-coherence-svc-23.4.3.tar
    │   ├── ocsepp-config-mgr-svc-23.4.3.tar
    │   ├── ocsepp-pn32c-svc-23.4.3.tar
    │   ├── ocsepp-pn32f-svc-23.4.3.tar
    │   ├── ocsepp-pre-install-hook-23.4.3.tar
    │   ├── ocsepp-update-db-23.4.3.tar
    │   ├── Oracle.cert
    │   └── Tests
    ├── ocsepp.mf
    ├── Scripts
    │   ├── ocsepp-alerting-rules-promha.yaml
    │   ├── ocsepp_configuration_openapi_23.4.3.yaml
    │   ├── ocsepp_custom_values_23.4.3.yaml
    │   ├── ocsepp_custom_values_asm_23.4.3.yaml
    │   ├── ocsepp_custom_values_roaming_hub_23.4.3.yaml
    │   ├── ocsepp_dashboard.json
    │   ├── ocsepp_dashboard_promha.json
    │   ├── ocsepp_network_policies_custom_values_23.4.3.yaml
    │   ├── ocsepp_servicemesh_config_custom_values_23.4.3.yaml
    │   ├── SeppAlerts.yaml
    │   ├── SEPP-MIB.mib
    │   ├── SEPP-MIB-TC.mib
    │   └── toplevel.mib
    └── TOSCA-Metadata
        └── TOSCA.meta
  4. Open the Files folder and run one of the following commands to load the SEPP images:
    podman load --input /IMAGE_PATH/<microservicename-releasenumber>.tar
    docker load --input /IMAGE_PATH/<microservicename-releasenumber>.tar
    Where, IMAGE_PATH is the location where the SEPP docker image tar file is archived.

    Sample command:
    podman load --input /IMAGE_PATH/ocsepp-pn32c-svc-23.4.3.tar

    Note:

    The docker load or podman load command must be run separately for each tar file (docker image).
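
    As a convenience, the images can also be loaded in a loop; a minimal sketch (assuming podman and that the current directory is Files):

    for image_tar in *.tar; do podman load --input "$image_tar"; done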
  5. Run one of the following commands to verify that the image is loaded:
    docker images
    podman images

    Note:

    Verify the list of images shown in the output with the list of images shown in the table SEPP Images. If the list does not match, reload the image tar file.
  6. Run one of the following commands to tag the images to the registry:
    docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
  7. Run one of the following commands to push the image to the registry:
    docker push <docker-repo>/<image-name>:<image-tag> 
    podman push <docker-repo>/<image-name>:<image-tag>

Note:

It is recommended to configure the Docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the docker push command may fail.
2.2.1.3 Pushing the Roaming Hub or Hosted SEPP Images to Customer Docker Registry

The Roaming Hub or Hosted SEPP deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.

The following table lists the docker images for Roaming Hub or Hosted SEPP:

Table 2-17 Roaming Hub or Hosted SEPP

Services Image Tag
<helm-release-name>-cn32c-svc cn32c-svc 23.4.3
<helm-release-name>-cn32f-svc cn32f-svc 23.4.3
<helm-release-name>-config-mgr-svc config-mgr-svc 23.4.3
<helm-release-name>-n32-egress-gateway n32-egress-gateway 23.4.10
<helm-release-name>-n32-ingress-gateway n32-ingress-gateway 23.4.10
<helm-release-name>-oc-perf-info oc-perf-info 23.4.12
<helm-release-name>-plmn-egress-gateway plmn-egress-gateway 23.4.10
<helm-release-name>-plmn-ingress-gateway plmn-ingress-gateway 23.4.10
<helm-release-name>-pn32c-svc pn32c-svc 23.4.3
<helm-release-name>-pn32f-svc pn32f-svc 23.4.3
<helm-release-name>-ocsepp-pre-install-hook ocsepp-pre-install-hook 23.4.3
<helm-release-name>-nf-mediation nf-mediation 23.4.3
<helm-release-name>-nf-test nf-test 23.4.3
<helm-release-name>ocdebugtool/ocdebug-tools ocdebugtool/ocdebug-tools 23.4.3
<helm-release-name>alternate-route alternate-route 23.4.7

Note:

<helm-release-name> is prefixed to each microservice name. Example: if the Helm release name is "ocsepp", then the cn32f-svc microservice name will be "ocsepp-cn32f-svc".

To push the images to the registry:

  1. Navigate to the location where you want to install SEPP. Unzip the SEPP release package (<p********>_<release_number>_Tekelec.zip) to retrieve the following CSAR package.

    The SEPP package is as follows:

    ReleaseName-csar-Releasenumber.zip

    Where:
    • ReleaseName is a name that is used to track this installation instance.
    • Releasenumber is the release number.

      For example:
    ocsepp_csar_23_4_3_0_0.zip
  2. Unzip the SEPP package file to get the SEPP docker image tar files:
    unzip ReleaseName-csar-Releasenumber.zip
    For example:
    unzip ocsepp_csar_23_4_3_0_0.zip
  3. The extracted ocsepp_csar_23_4_3_0_0 directory consists of the following:
    ├── Definitions
    │   ├── ocsepp_cne_compatibility.yaml
    │   └── ocsepp.yaml
    ├── Files
    │   ├── alternate_route-23.3.5.tar
    │   ├── ChangeLog.txt
    │   ├── common_config_hook-23.3.5.tar
    │   ├── configurationinit-23.3.5.tar
    │   ├── configurationupdate-23.3.5.tar
    │   ├── Helm
    │   │   ├── ocsepp-23.4.3.tgz
    │   │   ├── ocsepp-network-policy-23.4.3.tgz
    │   │   └── ocsepp-servicemesh-config-23.4.3.tgz
    │   ├── Licenses
    │   ├── mediation-ocmed-nfmediation-23.4.3.tar
    │   ├── nf_test-23.3.2.tar
    │   ├── nrf-client-23.3.5.tar
    │   ├── occnp-oc-app-info-23.3.2.tar
    │   ├── occnp-oc-config-server-23.3.2.tar
    │   ├── occnp-oc-perf-info-23.3.2.tar
    │   ├── ocdebugtool-ocdebug-tools-23.4.3.tar
    │   ├── ocegress_gateway-23.3.5.tar
    │   ├── ocingress_gateway-23.3.5.tar
    │   ├── ocsepp-cn32c-svc-23.4.3.tar
    │   ├── ocsepp-cn32f-svc-23.4.3.tar
    │   ├── ocsepp-coherence-svc-23.4.3.tar
    │   ├── ocsepp-config-mgr-svc-23.4.3.tar
    │   ├── ocsepp-pn32c-svc-23.4.3.tar
    │   ├── ocsepp-pn32f-svc-23.4.3.tar
    │   ├── ocsepp-pre-install-hook-23.4.3.tar
    │   ├── ocsepp-update-db-23.4.3.tar
    │   ├── Oracle.cert
    │   └── Tests
    ├── ocsepp.mf
    ├── Scripts
    │   ├── ocsepp-alerting-rules-promha.yaml
    │   ├── ocsepp_configuration_openapi_23.4.3.yaml
    │   ├── ocsepp_custom_values_23.4.3.yaml
    │   ├── ocsepp_custom_values_asm_23.4.3.yaml
    │   ├── ocsepp_custom_values_roaming_hub_23.4.3.yaml
    │   ├── ocsepp_dashboard.json
    │   ├── ocsepp_dashboard_promha.json
    │   ├── ocsepp_network_policies_custom_values_23.4.3.yaml
    │   ├── ocsepp_servicemesh_config_custom_values_23.4.3.yaml
    │   ├── SeppAlerts.yaml
    │   ├── SEPP-MIB.mib
    │   ├── SEPP-MIB-TC.mib
    │   └── toplevel.mib
    └── TOSCA-Metadata
        └── TOSCA.meta
  4. Open the Files folder and run one of the following commands to load the SEPP images:
    podman load --input /IMAGE_PATH/<microservicename-releasenumber>.tar
    docker load --input /IMAGE_PATH/<microservicename-releasenumber>.tar
    Where, IMAGE_PATH is the location where the SEPP docker image tar file is archived.

    Sample command:
    podman load --input /IMAGE_PATH/ocsepp-pn32c-svc-23.4.3.tar

    Note:

    The docker load or podman load command must be run separately for each tar file (docker image).
  5. Run one of the following commands to verify that the image is loaded:
    docker images
    podman images

    Note:

    Verify the list of images shown in the output with the list of images shown in the table SEPP Images. If the list does not match, reload the image tar file.
  6. Run one of the following commands to tag the images to the registry:
    docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
  7. Run one of the following commands to push the image to the registry:
    docker push <docker-repo>/<image-name>:<image-tag> 
    podman push <docker-repo>/<image-name>:<image-tag>

Note:

  • It is recommended to configure the Docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the docker push command may fail.
  • (optional) Timeout duration: If it is not specified, the default value is 5m (5 minutes) in Helm 3. It sets the time to wait for any individual Kubernetes operation (such as jobs for hooks). If the helm install command fails at any point to create a Kubernetes object, it internally calls the purge to delete the release after the timeout value (default: 300s). The timeout value is not for the overall installation, but for the automatic purge on installation failure.

Caution:

Do not exit from the helm install command manually. After running the helm install command, it takes some time to install all the services. During this time, do not press Ctrl+C to exit the helm install command, as it can lead to anomalous behavior.
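
For reference, a hypothetical helm install invocation with an explicit timeout (the chart and values file names are taken from the package listing above; adjust the release name, namespace, and paths for your deployment):

  helm install ocsepp-release ocsepp-23.4.3.tgz -f ocsepp_custom_values_23.4.3.yaml -n seppsvc --timeout 10m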
2.2.1.4 Verifying and Creating Namespace

This section explains how to verify and create a namespace in the system.

Note:

This is a mandatory procedure; run it before proceeding further with the installation. The namespace created or verified in this procedure is an input for the subsequent procedures.
To verify and create a namespace:
  1. Run the following command to verify whether the required namespace already exists in the system:
    kubectl get namespaces 

    In the output of the above command, if the namespace exists, continue with the Creating Service Account, Role and RoleBinding section.

  2. If the required namespace is not available, create the namespace using the following command:
    kubectl create namespace <required namespace>
    Example:
    kubectl create namespace seppsvc
    Sample output:
    namespace/seppsvc created

Naming Convention for Namespace

The namespace should:

  • start and end with an alphanumeric character.
  • contain 63 characters or less.
  • contain only alphanumeric characters or '-'.
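
These rules correspond to the Kubernetes DNS-1123 label format (namespace names are additionally lowercase). As a quick sketch, a candidate name can be checked in bash:

  # Hypothetical check; prints "valid" if the candidate name satisfies the rules above
  [[ "seppsvc" =~ ^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$ ]] && echo valid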

Note:

It is recommended to avoid using the prefix kube- when creating a namespace. The prefix is reserved for Kubernetes system namespaces.
2.2.1.5 Configuring Database, Creating Users, and Granting Permissions

This section explains how database administrators can create users and database in a single and multisite deployment.

SEPP supports single database (provisional Database) and single type of user.

Note:

  • Before running the procedure for georedundant sites, ensure that the cnDBTier for georedundant sites is up and replication channels are enabled.
  • While performing a fresh installation, if SEPP is already deployed, purge the deployment and remove the database and users that were used for the previous deployment. For uninstallation procedure, see Uninstalling SEPP.

SEPP Users

SEPP supports a single type of user. This user has a complete set of permissions and can perform create, alter, and drop operations on tables in order to carry out install, upgrade, rollback, and delete operations.

SEPP Database

Provisional Database: The provisional database contains configuration information. The operator must apply the same configuration on each site. In multisite georedundant setups, each site must have a unique provisional database, which is replicated to the other sites. Each SEPP site can access only the information in its own provisional database.

For example:

  • For Site 1: SEPP-1DB
  • For Site 2: SEPP-2DB
  • For Site 3: SEPP-3DB
  1. Log in to the machine where ssh keys are stored. The machine must have permission to access the SQL nodes of NDB cluster.
  2. Connect to the ndbmysqld-0 pod of the primary NDB cluster.
  3. Log in to the MySQL prompt using root permission, or as a user that has permission to create users, as mentioned below:

    Example:
    mysql -h 127.0.0.1 -uroot -p
  4. Check whether SEPP database user already exists. If the user does not exist, create a SEPP database user by running the following queries:
    1. Run the following command to list the users:
      $ SELECT User FROM mysql.user;
    2. If the user does not exist, run the following command to create the new user:
      $ CREATE USER IF NOT EXISTS '<SEPP User Name>'@'%' IDENTIFIED BY '<SEPP Password>';
      Example:
      $ CREATE USER IF NOT EXISTS 'seppusr'@'%' IDENTIFIED BY 'sepppasswd';
  5. Check if the SEPP database already exists. If it does not exist, run the following commands to create a SEPP database and grant permissions to the SEPP user created in the previous step:

    Note:

    Ensure that the database name is not longer than 15 characters.
    1. Run the following command to check if database exists:
      $ show databases; 
    2. If database does not exist, run the following command for database creation:
      $ CREATE DATABASE IF NOT EXISTS <SEPP Database> CHARACTER SET utf8;
      Example:
      $ CREATE DATABASE IF NOT EXISTS seppdb CHARACTER SET utf8; 
    3. If backup database does not exist, run the following command for database creation:
      $ CREATE DATABASE IF NOT EXISTS <SEPP Backup Database> CHARACTER SET utf8;
      Example:
      CREATE DATABASE IF NOT EXISTS seppbackupdb CHARACTER SET utf8;
    4. Run the following command to grant permissions to the user:
      $ GRANT SELECT,INSERT,CREATE,ALTER,DROP,LOCK TABLES,CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON <SEPP Database>.* TO '<SEPP User Name>'@'%';
      Example:
      GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON seppdb.* TO 'seppusr'@'%';
    5. Run the following command to grant permissions to the user for the backup database:
      $ GRANT SELECT,INSERT,CREATE,ALTER,DROP,LOCK TABLES,CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON <SEPP backup Database>.* TO '<SEPP User Name>'@'%';
      Example:
      GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON seppbackupdb.* TO 'sepp_usr'@'%';
  6. Run the following command to grant the NDB_STORED_USER permission:
    GRANT NDB_STORED_USER ON *.* TO '<SEPP_username>'@'%' WITH GRANT OPTION;
    Example:
    GRANT NDB_STORED_USER ON *.* TO 'seppusr'@'%' WITH GRANT OPTION;
  7. Run the following command to update or alter the user name:
    ALTER USER '<username>'@'%' IDENTIFIED WITH mysql_native_password BY '<password>'; 
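  8. Optionally, run the following command to confirm the grants for the user (a verification sketch using the example user name above):
    SHOW GRANTS FOR 'seppusr'@'%';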
2.2.1.5.1 Single Site

This section explains how a database administrator can create database and users for a single site deployment.

  1. Log in to the machine where ssh keys are stored. The machine must have permission to access the SQL nodes of NDB cluster.
  2. Connect to the ndbmysqld-0 pod of the primary NDB cluster.
  3. Log in to the MySQL prompt using root permission, or as a user that has permission to create users, as mentioned below:

    Example:
    mysql -h 127.0.0.1 -uroot -p
  4. Check whether OCSEPP database user already exists. If the user does not exist, create an OCSEPP database user by running the following queries:
    1. Run the following command to list the users:
      $ SELECT User FROM mysql.user;
    2. If the user does not exist, run the following command to create the new user:
      $ CREATE USER IF NOT EXISTS '<OCSEPP User Name>'@'%' IDENTIFIED BY '<OCSEPP Password>';
      Example:
      $ CREATE USER IF NOT EXISTS 'seppusr'@'%' IDENTIFIED BY 'sepppasswd';
  5. Check if the OCSEPP database already exists. If it does not exist, run the following commands to create an OCSEPP database and grant permissions to the OCSEPP user created in the previous step:
    1. Run the following command to check if database exists:
      $ show databases; 
    2. If database does not exist, run the following command for database creation:
      $ CREATE DATABASE IF NOT EXISTS <OCSEPP Database> CHARACTER SET utf8;
      Example:
      $ CREATE DATABASE IF NOT EXISTS seppdb CHARACTER SET utf8; 
    3. If backup database does not exist, run the following command for database creation:
      $ CREATE DATABASE IF NOT EXISTS <OCSEPP Backup Database> CHARACTER SET utf8;
      Example:
      CREATE DATABASE IF NOT EXISTS seppbackupdb CHARACTER SET utf8;
    4. Run the following command to grant permissions to the user:
      $ GRANT SELECT,INSERT,CREATE,ALTER,DROP,LOCK TABLES,CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON <OCSEPP Database>.* TO '<OCSEPP User Name>'@'%';
      Example:
      GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON seppdb.* TO 'seppusr'@'%';
    5. Run the following command to grant permissions to the user for the backup database:
      $ GRANT SELECT,INSERT,CREATE,ALTER,DROP,LOCK TABLES,CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON <OCSEPP backup Database>.* TO '<OCSEPP User Name>'@'%';
      Example:
      GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON seppbackupdb.* TO 'sepp_usr'@'%';
  6. Run the following command to grant the NDB_STORED_USER permission:
    GRANT NDB_STORED_USER ON *.* TO '<ocsepp_username>'@'%' WITH GRANT OPTION;
    Example:
    GRANT NDB_STORED_USER ON *.* TO 'seppusr'@'%' WITH GRANT OPTION;
  7. Run the following command to update or alter the user name:
    ALTER USER '<username>'@'%' IDENTIFIED WITH mysql_native_password BY '<password>'; 
2.2.1.5.2 Multisite

This section explains how database administrators can create the databases and users for a multisite deployment.

  1. Log in to the machine where ssh keys are stored. The machine must have permission to access the SQL nodes of NDB cluster.
  2. Connect to the ndbmysqld-0 pod of the primary NDB cluster.
  3. Log in to the MySQL prompt using root permission, or as a user that has permission to create users, as mentioned below:

    Example:
    mysql -h 127.0.0.1 -uroot -p
  4. Check whether the SEPP database user already exists. If the user does not exist, create a SEPP database user by running the following queries:
    1. Run the following command to list the users:
      $ SELECT User FROM mysql.user;
    2. If the user does not exist, run the following command to create the new user:
      $ CREATE USER IF NOT EXISTS '<SEPP User Name>'@'%' IDENTIFIED BY '<SEPP Password>';
      Example:
      $ CREATE USER IF NOT EXISTS 'seppusr'@'%' IDENTIFIED BY 'sepppasswd';
  5. Check if the SEPP database already exists. If it does not exist, run the following commands to create a SEPP database and grant permissions to the SEPP user created in the previous step:

    Note:

    Create database for each site. For SITE-2 or SITE-3, ensure that the database name is different from the previous site names.
    1. Run the following command to check if database exists:
      $ show databases; 
    2. If database does not exist, run the following command for database creation:
      $ CREATE DATABASE IF NOT EXISTS <SEPP Database> CHARACTER SET utf8;
      Example:
      $ CREATE DATABASE IF NOT EXISTS seppdb CHARACTER SET utf8; 
    3. If backup database does not exist, run the following command for database creation:
      $ CREATE DATABASE IF NOT EXISTS <SEPP Backup Database> CHARACTER SET utf8;
      Example:
      CREATE DATABASE IF NOT EXISTS seppbackupdb CHARACTER SET utf8;
    4. Run the following command to grant permissions to the user:
      $ GRANT SELECT,INSERT,CREATE,ALTER,DROP,LOCK TABLES,CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON <SEPP Database>.* TO '<SEPP User Name>'@'%';
      Example:
      GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON seppdb.* TO 'seppusr'@'%';
    5. Run the following command to grant permissions to the user for the backup database:
      $ GRANT SELECT,INSERT,CREATE,ALTER,DROP,LOCK TABLES,CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON <SEPP backup Database>.* TO '<SEPP User Name>'@'%';
      Example:
      GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON seppbackupdb.* TO 'sepp_usr'@'%';
  6. Run the following command to grant the NDB_STORED_USER permission:
    GRANT NDB_STORED_USER ON *.* TO '<SEPP_username>'@'%' WITH GRANT OPTION;
    Example:
    GRANT NDB_STORED_USER ON *.* TO 'seppusr'@'%' WITH GRANT OPTION;
  7. Run the following command to update or alter the user name:
    ALTER USER '<username>'@'%' IDENTIFIED WITH mysql_native_password BY '<password>'; 
2.2.1.6 Configuring Kubernetes Secrets for Accessing Database

This section explains how to configure Kubernetes secrets for accessing SEPP database.

  1. Run the following command to create a Kubernetes secret for the users:
    kubectl create secret generic <secret name> --from-literal=DB_USERNAME='<user name>' --from-literal=DB_PASSWORD='<user password>' --from-literal=DB_NAME=<DB Name> -n <sepp namespace>

    Example:

    kubectl create secret generic ocsepp-mysql-cred --from-literal=mysql-username='<USR_NAME>' --from-literal=mysql-password='<PWD>' --from-literal=dbName='<Db Name>' -n seppsvc
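
    To confirm that the secret is created with the expected keys, an optional check is:

    kubectl describe secret ocsepp-mysql-cred -n seppsvc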
2.2.1.7 Configuring Kubernetes Secret for Enabling HTTPS

This section explains the steps to configure HTTPS at Ingress and Egress Gateways.

2.2.1.7.1 Configuring Secrets at PLMN SEPP Egress and Ingress Gateway

This section explains the steps to configure secrets for enabling HTTPS/HTTP over TLS in Public Land Mobile Network (PLMN) Ingress and Egress Gateways. This procedure must be performed before deploying SEPP.

Note:

The passwords for TrustStore and KeyStore are stored in respective password files.
To create kubernetes secret for HTTPS/HTTP over TLS, the following files are required:
  • ECDSA private key and CA signed certificate of SEPP, if initialAlgorithm is ES256
  • RSA private key and CA signed certificate of SEPP, if initialAlgorithm is RS256
  • TrustStore password file
  • KeyStore password file
  • Certificate chain for trust store
  • Signed server certificate or Signed client certificate

Note:

The creation process for private keys, certificates, and passwords is at the discretion of the user or operator.
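
As an illustration only (the procedure is operator-defined; the host name and password values below are placeholders), the RSA private key, a certificate signing request for the CA, and the password files could be produced as follows:

  # Generate an RSA private key and a CSR to be signed by your CA;
  # the CA-signed result becomes ssl_rsa_certificate.crt
  openssl genrsa -out rsa_private_key_pkcs1.pem 2048
  openssl req -new -key rsa_private_key_pkcs1.pem -out sepp.csr -subj "/CN=sepp.example.com"
  # Store the TrustStore and KeyStore passwords in their password files
  echo '<truststore-password>' > ssl_truststore.txt
  echo '<keystore-password>' > ssl_keystore.txt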

  1. Run the following command to create secret:
    $ kubectl create secret generic <ocsepp-plmn-secret> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace of SEPP deployment>

    Note:

    • Note down the command used during the creation of the Kubernetes secret; this command will be used for updates in the future.
    • It is recommended to use the same secret name as mentioned in the example. If you change <ocsepp-plmn-secret>, update the k8SecretName parameter under the plmn-ingress-gateway and plmn-egress-gateway sections in the ocsepp-custom-values-<version>.yaml file. For more information, see the plmn-ingress-gateway and plmn-egress-gateway section.
    • For multiple CA root partners, the SEPP CA certificate file must contain the CA information in a specific format: the CAs of the roaming partners must be concatenated in a single file, separated by eight hyphens, as given below:
      CA1 content
      --------
      CA2 content
      --------
      CA3 content
    Example:
    $ kubectl create secret generic ocsepp-plmn-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n seppsvc
  2. Run the following command to verify the secret:
    $ kubectl describe secret <ocsepp-plmn-secret> -n <Namespace of SEPP deployment>
    Example:
    $ kubectl describe secret  ocsepp-plmn-secret -n seppsvc
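
For the multiple CA root partner case described in the note above, a minimal sketch of building the concatenated caroot.cer (the input file names ca1.pem, ca2.pem, and ca3.pem are hypothetical):

  { cat ca1.pem; echo '--------'; cat ca2.pem; echo '--------'; cat ca3.pem; } > caroot.cer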
2.2.1.7.2 Configuring Secrets at N32 SEPP Egress and Ingress Gateway

This section explains the steps to configure secrets for enabling HTTP over TLS in N32 Ingress and Egress Gateways. This procedure must be performed before deploying SEPP.

Note:

The passwords for TrustStore and KeyStore are stored in respective password files.
To create Kubernetes secret for HTTP over TLS, the following files are required:
  • ECDSA private key and CA signed certificate of SEPP (if initialAlgorithm is ES256)
  • RSA private key and CA signed certificate of SEPP (if initialAlgorithm is RS256)
  • TrustStore password file
  • KeyStore password file
  • CA certificate

Note:

The creation process for private keys, certificates, and passwords is at the discretion of the user or operator.

  1. Run the following command to create secret:
    $ kubectl create secret generic <ocsepp-n32-secret> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace of SEPP deployment>

    Note:

    • Note down the command used during the creation of the Kubernetes secret; this command will be used for updates in the future.
    • It is recommended to use the same secret name as mentioned in the example. If you change <ocsepp-n32-secret>, update the k8SecretName parameter under the n32-ingress-gateway and n32-egress-gateway sections in the ocsepp-custom-values-<version>.yaml file. For more information, see the n32-ingress-gateway and n32-egress-gateway section.
    Example:
    $ kubectl create secret generic ocsepp-n32-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n seppsvc
  2. Run the following command to verify the secret:
    $ kubectl describe secret <ocsepp-n32-secret> -n <Namespace of SEPP deployment>
    Example:
    $ kubectl describe secret ocsepp-n32-secret -n seppsvc
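Because the same creation command is reused for updates (see the note in step 1), an existing secret can be refreshed in place without deleting it. The following is a minimal sketch, assuming the example names above:

# Rebuild the secret from the updated files and apply it over the existing one.
$ kubectl create secret generic ocsepp-n32-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n seppsvc --dry-run=client -o yaml | kubectl apply -f -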
2.2.1.8 Configuring SEPP to Support Aspen Service Mesh

SEPP leverages Aspen Service Mesh (ASM) for all internal and external TLS communication. The service mesh integration provides inter-NF communication and allows the API gateway to work with the service mesh. The integration supports these services by deploying a sidecar proxy in each pod to intercept all network communication between microservices.

Supported ASM version: 1.14.6-am1

For ASM installation and configuration, refer to the official Aspen Service Mesh website for details.

Aspen Service Mesh (ASM) configurations are categorized as follows:

  • Control Plane: Involves adding labels or annotations to inject the sidecar. The control plane configurations are part of the NF Helm chart.
  • Data Plane: Helps in traffic management, such as handling NF call flows, by adding Service Entries (SE), Destination Rules (DR), Envoy Filters (EF), and other resource changes such as apiVersion changes between versions. This configuration is done manually, considering each NF requirement and the ASM deployment.

Data Plane Configuration

Data Plane configuration consists of following Custom Resource Definitions (CRDs):

  • Service Entry (SE)
  • Destination Rule (DR)

Note:

Use Helm charts to add or remove the CRDs that may be required due to ASM upgrades when configuring features across different releases.

The data plane configuration is applicable in the following scenarios:

  • NF to NF Communication: During NF to NF communication, where the sidecar is injected on both NFs, the SE and DR must be configured to reach the corresponding SE and DR of the other NF. Otherwise, the sidecar rejects the communication. All egress communications of NFs must have a configured entry for SE and DR.

    Note:

    Configure core DNS with the producer NF endpoint to enable sidecar access for establishing communication between clusters.
  • Kube-api-server: A few NFs require access to the Kubernetes API server, which the ASM proxy (with mTLS enabled) may block. As per the F5 recommendation, the NF must add an SE for the Kubernetes API server in its own namespace.

ASM Configuration File

A sample ocsepp-servicemesh-config-custom-values-<version>.yaml is available in the Custom_Templates file. For downloading the file, see Customizing SEPP.

Note:

To connect to vDBTier, create an SE and DR for the MySQL connectivity service if the database is in a different cluster; without them, the sidecar rejects the requests, as vDBTier does not support sidecars.
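The following is a hedged sketch of such a ServiceEntry; it is not part of the release package. The host, namespace, domain, port, and MESH_EXTERNAL location are illustrative assumptions for a database reachable outside the local mesh:

kubectl apply -n <ocsepp-namespace> -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: ocsepp-db-service-se
spec:
  hosts:
  - <db-service-fqdn>.<db-namespace>.svc.<domain>
  exportTo:
  - "."
  location: MESH_EXTERNAL
  ports:
  - number: 3306   # default MySQL port; adjust to the connectivity service port
    name: mysql
    protocol: TCP
  resolution: DNS
EOF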
2.2.1.8.1 Predeployment Configurations

This section explains the predeployment configuration procedure to install SEPP with ASM support.

Follow the procedure as mentioned below:

  1. Creating SEPP namespace
    1. Run the following command to verify if the required namespace already exists in the system:
      kubectl get namespaces
    2. In the output of the above command, check if the required namespace is available. If it is not, create the namespace using the following command:
      kubectl create namespace <namespace>

      Where,

      <namespace> is the SEPP namespace.

      Example:
      kubectl create namespace seppsvc

ASM Resources Specific Changes

In the ocsepp_servicemesh_config_custom_values_23.4.3.yaml file, make the following changes:

  1. Uncomment the DestinationRule section in the ocsepp_servicemesh_config_custom_values_23.4.3.yaml file.

    Note:

    If the cnDBTier does not have Istio sidecar injection, create the destination rule. Otherwise, skip this step.
    DestinationRule section
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: ocsepp-db-service-dr
      namespace: <ocsepp-namespace>
    spec:
      exportTo:
      - "."
      host: <db-service-fqdn>.<db-namespace>.svc.<domain>
      trafficPolicy:
        tls:
          mode: DISABLE
  2. Uncomment the service entry in pod networking so that the pods can access the Kubernetes API server:
    kube-api-se
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: kube-api-server
      namespace: <ocsepp-namespace>
    spec:
      hosts:
      - kubernetes.default.svc.<domain>
      exportTo:
      - "."
      addresses:
      - <10.96.0.1> # cluster IP of kubernetes api server
      location: MESH_INTERNAL
      ports:
      - number: 443
        name: https
        protocol: HTTPS
      resolution: NONE
  3. Uncomment the PeerAuthentication section and update as following:

    peerauth

    
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
     name: ocsepp-peerauthentication
     namespace: <ocsepp-namespace>
    spec:
     selector:
      matchLabels:
       app.kubernetes.io/part-of: ocsepp
     mtls:
      mode: STRICT

    Note:

    After the successful deployment, you can change the PeerAuthentication mtls mode from PERMISSIVE to STRICT.
  4. Optional: Uncomment the SE section below, if deploying OCSEPP in ASM mode (only for ATS):
    apiVersion: networking.istio.io/v1beta1
    kind: ServiceEntry
    metadata:
      name: stub-serviceentry
      namespace: seppsvc
    spec:
      exportTo:
      - '*'
      hosts:
      - '*.svc.cluster.local'
      - '*.3gppnetwork.org'
      location: MESH_INTERNAL
      ports:
      - name: http2
        number: 8080
        protocol: HTTP2
      resolution: NONE
  5. Optional: The user can configure the resources assigned to the Aspen Mesh (istio-proxy) sidecars for all supported microservices in their respective sections:
    
    istioResources:
        limits:
          cpu: 2
          memory: 1Gi
        requests:
          cpu: 100m
          memory: 128Mi

    Note:

    It is preferable to use the default values of the resources.
  6. Run the following command to install ASM specific resources in your namespace:
    helm install -f ocsepp_servicemesh_config_custom_values_23.4.3.yaml <release-name> ocsepp_servicemesh_config.tgz --namespace <ns>

    Example:

     helm install -f ocsepp_servicemesh_config_custom_values_23.4.3.yaml ocsepp-servicemesh ocsepp_servicemesh_config.tgz --namespace seppsvc
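To confirm that the mesh resources were created, you can list them in the SEPP namespace. A quick check, assuming the seppsvc namespace used in the example:

kubectl get se,dr,peerauthentication -n seppsvc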

SEPP Specific Changes

Note:

These changes are not required if deploying only in ATS mode.

In the ocsepp-custom-values-asm-23.4.3.yaml file, make the following changes:

  1. In the N32 Egress Gateway section, add sanValues: ["init.sepp.oracle.com"]. This value must match the FQDN in the ssl.conf file that is used for creating the certificate on the C-side.
  2. In the N32 Ingress Gateway section, change the parameters to the following values:
    
    xfccHeaderValidation:
     validation:
       enabled: false
     extract:
       enabled: true
       certextractindex: 0
       extractfield: DNS
       extractindex: 0
    This change is done to extract the SAN from the DNS field of the header received from the N32 Egress Gateway.
  3. In the PN32C-svc microservice, make the following changes for the SAN header name, regex, and delimiter:
    sanHeaderName: "oc-xfcc-dns"
    extractSANRegex: "(.*)"
    extractSANDelimiter: " "
    
  4. In the PN32F-svc microservice, make the following changes for the SAN header name, regex, and delimiter:
    sanHeaderName: "oc-xfcc-dns"
    extractSANRegex: "(.*)"
    extractSANDelimiter: " "
    
2.2.1.8.2 Deploying SEPP with ASM
This section explains how to deploy SEPP using ASM.
  1. Label the namespace for automatic sidecar injection so that sidecars are added to all pods spawned in the SEPP namespace:
    kubectl label ns <ocsepp-namespace> istio-injection=enabled
    Example:
    kubectl label ns seppsvc istio-injection=enabled
  2. To deploy and use SEPP with ASM, ensure that you use the ocsepp-custom-values-asm-<version>.yaml file while performing helm install or upgrade (see the sketch after this procedure).

    The file must have all the necessary changes such as annotations required for deploying SEPP with ASM.

    Note:

    Set the mediationEnable flag to false, as the mediation feature is not supported with ASM.
  3. (Optional) If SEPP is deployed with OSO, the pods must have the annotation oracle.com/cnc: "true", as shown in the following snippet:
    #**************************************************************************
    # ********  Sub-Section Start: Custom Extension Global Parameters ********
    #**************************************************************************
    
      customExtension:
        allResources:
          labels: {}
          annotations: {}
    
        lbServices:
          labels: {}
          annotations: {}
    
        lbDeployments:
          labels: {}
          annotations: {}
          # The annotation oracle.com/cnc: "true" is required if OSO is used
          #oracle.com/cnc: '"true"'
    
        nonlbServices:
          labels: {}
          annotations: {}
    
        nonlbDeployments:
          labels: {}
          annotations: {}
          # The annotation oracle.com/cnc: "true" is required if OSO is used
          #oracle.com/cnc: '"true"'
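The following is a minimal sketch of the install command referenced in step 2, assuming the chart and ASM values file names used elsewhere in this guide:

helm install ocsepp-release ocsepp-23.4.3.tgz --namespace seppsvc -f ocsepp-custom-values-asm-23.4.3.yaml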
2.2.1.8.3 Postdeployment Configuration
This section explains the postdeployment configurations.

Note:

The following steps are not required if SEPP is deployed only in ATS mode.
  1. Apply the following yaml on the C-SEPP side:
    
    apiVersion: networking.istio.io/v1beta1
    kind: ServiceEntry
    metadata:
      name: service-entry-csepp
      namespace: <namespace>
    spec:
    #N32 IGW service IP of P-side
      addresses:
      - 10.233.12.232/32
      endpoints:
      - address: 10.233.12.232
    #Add the FQDN of P-side used in deployment
      hosts:
      - prod.sepp.inter.oracle.com
      location: MESH_INTERNAL
      ports:
      - name: http2
        number: 80 #N32 IGW port exposed
        protocol: HTTP2
      resolution: STATIC
  2. Apply the following Virtual Service yaml on the P-SEPP side:
    
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: n32-igw
      namespace: <namespace>
    spec:
      hosts:
    #FQDN of P-side
      - prod.sepp.inter.oracle.com
      http:
      - match:
        - uri:
            prefix: /
        route:
        - destination:
    #Host of N32 IGW svc name at P-side and port number
            host: <release-name>-n32-ingress-gateway
            port:
              number: 80
  3. OSO deployment

    No additional steps are required. For more information on OSO deployment, see Oracle Communications Operations Services Overlay Installation and Upgrade Guide.

    Note:

    If OSO is deployed in the same namespace as SEPP, ensure that all OSO deployments have the following annotation to skip sidecar injection, as OSO currently does not support the ASM sidecar proxy:
    sidecar.istio.io/inject: "false"
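As a hedged illustration (not an official OSO step), the annotation can also be patched onto an existing deployment's pod template; <oso-deployment> is a hypothetical deployment name:

# Add the skip-injection annotation to the pod template of an OSO deployment.
kubectl patch deployment <oso-deployment> -n seppsvc --type merge -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.istio.io/inject":"false"}}}}}'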
2.2.1.8.4 Deleting ASM

This section describes the steps to disable or delete ASM.

To disable ASM, run the following command:

kubectl label --overwrite namespace seppsvc istio-injection=disabled

To verify if ASM is disabled, run the following command:

kubectl get se,dr,peerauthentication,envoyfilter,vs -n seppsvc
To delete ASM, run the following command:
helm delete <helm-release-name> -n <namespace>
Where,
  • <helm-release-name> is the release name used by the Helm command. This must be the same release name that was used for the ServiceMesh installation.
  • <namespace> is the deployment namespace used by the Helm command.
For example:
helm delete ocsepp-servicemesh -n seppsvc

Note:

The changes resulting from disabling ASM take effect only after SEPP is redeployed.
2.2.1.9 Configuring Network Policies

Kubernetes network policies allow you to define ingress or egress rules based on Kubernetes resources such as Pod, Namespace, IP, and Port. These rules are selected based on Kubernetes labels in the application. These network policies enforce access restrictions for all the applicable data flows except communication from Kubernetes node to pod for invoking container probe.

Note:

Configuring network policies is an optional step. Based on the security requirements, network policies may or may not be configured.

For more information on network policies, see https://kubernetes.io/docs/concepts/services-networking/network-policies/.

Note:

  • If traffic is unexpectedly blocked or allowed between the pods even after applying network policies, check whether any existing policy impacts the same pod or set of pods, as it might alter the overall cumulative behavior.
  • If the default ports of services such as Prometheus, Database, or Jaeger are changed, or if the Ingress or Egress Gateway names are overridden, update them in the corresponding network policies.

Configuring Network Policies

The following are the various operations that can be performed for network policies:

2.2.1.9.1 Installing Network Policies

Prerequisite

Network Policies are implemented by the network plug-in. To use network policies, you must use a networking solution that supports NetworkPolicy.

Note:

For a fresh installation, it is recommended to install Network Policies before installing SEPP. However, if SEPP is already installed, you can still install the Network Policies.

To install network policy:

  1. Open the ocsepp_network_policies_custom_values_23.4.3.yaml file provided in the release package zip file. For downloading the file, see Downloading SEPP Package, Pushing the SEPP Images to Customer Docker Registry, and Pushing the Roaming Hub or Hosted SEPP Images to Customer Docker Registry.
  2. The file is provided with the default network policies. If required, update the ocsepp_network_policies_custom_values_23.4.3.yaml file. For more information on the parameters, see the Configuration Parameters for network policy parameter table.

    Note:

    • To run ATS, uncomment the following policies from ocsepp_network_policies_custom_values_23.4.3.yaml file:
      • allow-ingress-traffic-to-notification
      • allow-egress-ats
      • allow-ingress-ats
    • To connect with CNC Console, update the following parameter in the allow-ingress-from-console policy in the ocsepp_network_policies_custom_values_23.4.3.yaml file:
      • kubernetes.io/metadata.name: <namespace in which CNCC is deployed>
    • In allow-ingress-prometheus policy, kubernetes.io/metadata.name parameter must contain the value for the namespace where Prometheus is deployed, and app.kubernetes.io/name parameter value should match the label from Prometheus pod.
    • The following Network Policies require modification for ASM deployment. The required modifications are mentioned in the comments in ocsepp_custom_values_network_policies.yaml file. Update the policies as per the comments.
      • allow-ingress-sbi-n32-igw
      • allow-ingress-sbi-plmn-igw
  3. Run the following command to install the network policies:
    helm install <helm-release-name> <network-policy>/ -n <namespace> -f <custom-value-file>

    For Example:

    helm install ocsepp-network-policy ocsepp-network-policy-23.4.3/ -n seppsvc -f ocsepp_network_policies_custom_values_23.4.3.yaml

    Where,

    • helm-release-name: ocsepp-network-policy helm release name.
    • custom-value-file: ocsepp-network-policy custom value file.
    • namespace: SEPP namespace.
    • network-policy: location where the network-policy package is stored.

Note:

  • Connections that were created before installing the network policy and still persist are not impacted by the new network policy. Only new connections are impacted.
  • If you are using the ATS suite along with network policies, SEPP and ATS must be installed in the same namespace.
2.2.1.9.2 Upgrading Network Policies

To add, delete, or update network policies:

  1. Modify the ocsepp_network_policies_custom_values_23.4.3.yaml file to add, update, or delete network policies.
  2. Run the following command to upgrade the network policies:
helm upgrade <helm-release-name> <network-policy>/ -n <namespace> -f <custom-value-file>

For Example:

helm upgrade ocsepp-network-policy ocsepp-network-policy-23.4.3/ -n seppsvc -f ocsepp_network_policies_custom_values_23.4.3.yaml
Where,
  • helm-release-name: ocsepp-network-policy Helm release name.
  • custom-value-file: ocsepp-network-policy custom value file.
  • namespace: SEPP namespace.
  • network-policy: location where the network-policy package is stored.
2.2.1.9.3 Verifying Network Policies

Run the following command to verify that the network policies have been applied successfully:

kubectl get networkpolicy -n <namespace>

For Example:

kubectl get networkpolicy -n seppsvc

Where,

  • namespace: SEPP namespace
Sample output:
NAME                          POD-SELECTOR                                                             AGE
allow-egress-ats              app=sepp-ats-rel-bddclient                                               2m17s
allow-egress-database         app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway)   2m17s
allow-egress-dns              app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway)   2m17s
allow-egress-jaeger           app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway)   2m17s
allow-egress-k8-api           app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway)   2m17s
allow-egress-to-sepp-pods     app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway)   2m17s
allow-ingress-ats             app=sepp-ats-rel-bddclient                                               2m17s
allow-ingress-from-console    app.kubernetes.io/name=config-mgr-svc                                    2m17s
allow-ingress-from-sepp-pods  app.kubernetes.io/part-of=ocsepp                                         2m17s
allow-ingress-prometheus      app.kubernetes.io/part-of=ocsepp                                         2m17s
allow-ingress-sbi-n32-igw     app.kubernetes.io/name=n32-ingress-gateway                               2m17s
allow-ingress-sbi-plmn-igw    app.kubernetes.io/name=plmn-ingress-gateway                              2m17s
deny-egress-all-except-egw    app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway)   2m17s
deny-ingress-all              app.kubernetes.io/part-of=ocsepp                                         2m17s
2.2.1.9.4 Uninstalling Network Policies
  1. Run the following command to uninstall the network policies:
helm uninstall <helm-release-name> -n <namespace>

For Example:

helm uninstall ocsepp-network-policy -n seppsvc
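To confirm the removal, you can list any remaining policies. A quick check, assuming the seppsvc namespace (no SEPP policies should be listed after the uninstall):

kubectl get networkpolicy -n seppsvc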

Note:

While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.
2.2.1.9.5 Configuration Parameters for Network Policies

This section includes information about the supported Kubernetes resource and configuration parameters for configuring Network Policies.

Table 2-18 Supported Kubernetes Resource for Configuring Network Policies

Parameter Description Details
apiVersion This is a mandatory parameter.

Specifies the Kubernetes version for access control.

Note: This is the supported API version for network policy. This is a read-only parameter.

DataType: String

Default Value: networking.k8s.io/v1
kind This is a mandatory parameter.

Specifies the kind of REST resource this object represents.

Note: This is a read only parameter.

DataType: String

Default Value: NetworkPolicy

Table 2-19 Configuration Parameters for Network Policies

Parameter Description Details
metadata.name This is a mandatory parameter.

Specifies a unique name for the network policy.
DataType: String

Default Value: {{ .metadata.name }}
spec.{} This is a mandatory parameter.

This consists of all the information needed to define a particular network policy in the given namespace.

Note: SEPP supports the spec parameters defined in Kubernetes Resource Category.

DataType: Object

Default Value: NA
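As an illustration of the parameters above, the following is a minimal policy skeleton, shown only as a sketch; the deny-ingress-all name and the ocsepp selector mirror the sample output in Verifying Network Policies:

kubectl apply -n seppsvc -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-ingress-all
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/part-of: ocsepp
  policyTypes:
  - Ingress   # no ingress rules listed, so all ingress to the selected pods is denied
EOF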

For more information about this functionality, see "Network Policies" in the Cloud Native Core, Security Edge Protection Proxy User Guide.

2.2.2 Installation Tasks

This section provides installation procedures to install Security Edge Protection Proxy (SEPP) using Command Line Interface (CLI). To install SEPP using CDCS, see Oracle Communications Cloud Native Core, CD Control Server User Guide.

Before installing SEPP, you must complete Prerequisites and Preinstallation Tasks for both the deployment methods.

2.2.2.1 Installing SEPP Package

To install the SEPP package:

  1. Navigate to the Helm directory, which is part of the Files directory of the unzipped CSAR package, by running the following command:
    cd Files/Helm
  2. Run the following command to verify the SEPP Helm charts in the Helm directory:
    ls

    The output must be:

    • ocsepp-23.4.3.tgz
    • ocsepp-network-policy-23.4.3.tgz
    • ocsepp-servicemesh-config-23.4.3.tgz

  3. Customize the ocsepp_23.4.3.0.0.custom_values.yaml file with the required deployment parameters. See the Customizing SEPP section to customize the file. For more information about predeployment parameter configurations, see the Predeployment Configuration tasks.
  4. Run the following command to install SEPP:

    Note:

    The recommended <helm-release> value is ocsepp-release. If some other release name is to be configured while installing, see the Deployment Configuration for Config-mgr-svc section.
    helm install <helm-release> ocsepp-23.4.3.tgz --namespace <k8s namespace> -f <path to ocsepp_customized_values.yaml>
    Example:
    helm install ocsepp-release ocsepp-23.4.3.tgz --namespace seppsvc -f ocsepp-custom-values-23.4.3.yaml

    Note:

    • Ensure the following:

      <helm-release> must not exceed 20 characters.

      <k8s namespace> is the deployment namespace used by the helm command.

      custom-values.yaml file name is the name of the custom values yaml file (including its location).

    • Timeout duration: Timeout duration is an optional parameter that can be used in the helm install command. If it is not specified, the default value is 5m (5 minutes) in Helm3. It sets the time to wait for any individual Kubernetes operation (such as Jobs for hooks). If the helm install command fails to create a Kubernetes object at any point, it internally calls a purge to delete the release after the timeout (default: 300s). The timeout applies to each individual operation, not the overall installation; on installation failure it triggers the automatic purge.

    • In a georedundant deployment, if you want to add or remove a site, refer to Adding a Site to an Existing SEPP Georedundant Site and Removing a Site from an Existing Georedundant Deployment.

    Caution:

    Do not exit the helm install command manually. After running the helm install command, it takes some time to install all the services. Do not press "Ctrl+C" to exit while the command is running, as it can lead to anomalous behavior.
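If the default 5m window is too short for your environment, the timeout can be raised explicitly. A sketch assuming the example values above; --timeout is a standard Helm 3 flag:

helm install ocsepp-release ocsepp-23.4.3.tgz --namespace seppsvc -f ocsepp-custom-values-23.4.3.yaml --timeout 10m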

2.2.2.2 Installing Roaming Hub or Hosted SEPP

This section describes how to install Roaming Hub or Hosted SEPP in the Cloud Native Environment.

Note:

This is applicable only for Roaming Hub or Hosted SEPP installation.
  1. Navigate to the Helm directory, which is part of the Files directory of the unzipped CSAR package, by running the following command:
    cd Files/Helm
  2. Run the following command to verify the SEPP Helm charts in the Helm directory:
    ls

    The output must be:

    • ocsepp-23.4.3.tgz
    • ocsepp-network-policy-23.4.3.tgz
    • ocsepp-servicemesh-config-23.4.3.tgz

  3. Customize the ocsepp_23.4.3.0.0.custom_values.yaml file with the required deployment parameters. See the Customizing SEPP section to customize the file. For more information about predeployment parameter configurations, see the Predeployment Configuration tasks.
  4. Run the following command to install SEPP:

    Note:

    The recommended <helm-release> value is ocsepp-release. If some other release name is to be configured while installing, see the Deployment Configuration for Config-mgr-svc section.
    helm install <helm-release> ocsepp-23.4.3.tgz --namespace <k8s namespace> -f <path to ocsepp-custom-values-Roaming-Hub-23.4.3.yaml>
    Example:
    helm install ocsepp-release ocsepp-23.4.3.tgz --namespace seppsvc -f ocsepp-custom-values-Roaming-Hub-23.4.3.yaml

    Note:

    • Ensure the following:

      <helm-release> must not exceed 20 characters.

      <k8s namespace> is the deployment namespace used by the helm command.

      custom-values.yaml file name is the name of the custom values yaml file (including its location).

    • Timeout duration: Timeout duration is an optional parameter that can be used in the helm install command. If it is not specified, the default value is 5m (5 minutes) in Helm3. It sets the time to wait for any individual Kubernetes operation (such as Jobs for hooks). If the helm install command fails to create a Kubernetes object at any point, it internally calls a purge to delete the release after the timeout (default: 300s). The timeout applies to each individual operation, not the overall installation; on installation failure it triggers the automatic purge.

    • In a georedundant deployment, if you want to add or remove a site, refer to Adding a Site to an Existing SEPP Georedundant Site and Removing a Site from an Existing Georedundant Deployment.

    Caution:

    Do not exit the helm install command manually. After running the helm install command, it takes some time to install all the services. Do not press "Ctrl+C" to exit while the command is running, as it can lead to anomalous behavior.

2.2.3 Postinstallation Tasks

This section explains the postinstallation tasks for SEPP.

2.2.3.1 Verifying Installation

To verify the installation:

  1. To verify the deployment status, open a new terminal and run the following command:
    $ watch kubectl get pods -n <k8s namespace>

    The pod status gets updated at regular intervals.

  2. Run the following command to verify the installation status:
    helm status <helm-release> -n <namespace>
    Example:
    helm status ocsepp-release -n seppsvc

    Where,

    <helm-release> is the Helm release name of SEPP.

    In the output, if the status shows deployed, the deployment is successful.
  3. Run the following command to check the status of the services:
    kubectl -n <k8s namespace> get services 
    Example:
      kubectl -n seppsvc get services 
  4. Run the following command to check the status of the pods:
    $ kubectl get pods -n <namespace> 

    The value in the STATUS column of all the pods must be Running.

    The value in the READY column of all the pods must be n/n, where n is the number of containers in the pod.

    Example:

    $ kubectl get pods -n seppsvc 
    NAME                                                          READY  STATUS   RESTARTS  AGE
    ocsepp-release-appinfo-55b8d4f687-wqtgj                       1/1    Running  0         141m
    ocsepp-release-cn32c-svc-64cd9c555c-ftd8z                     1/1    Running  0         113m
    ocsepp-release-cn32f-svc-dd886fbcc-xr2z8                      1/1    Running  0         4m4s
    ocsepp-release-config-mgr-svc-6c8ddf4c4f-lb4zj                1/1    Running  0         141m
    ocsepp-release-n32-egress-gateway-5b575bbf5f-z5bbx            2/2    Running  0         131m
    ocsepp-release-n32-ingress-gateway-76874c967b-btp46           2/2    Running  0         131m
    ocsepp-release-ocpm-config-65978858dc-t4t5k                   1/1    Running  0         141m
    ocsepp-release-performance-67d76d9d58-llwmt                   1/1    Running  0         141m
    ocsepp-release-plmn-egress-gateway-6dc4759cc7-wn6r8           2/2    Running  0         31m
    ocsepp-release-plmn-ingress-gateway-56c9b45658-hfcxx          2/2    Running  0         131m
    ocsepp-release-pn32c-svc-57774fdc4-2qpvx                      1/1    Running  0         141m
    ocsepp-release-pn32f-svc-586cd87c7b-pxk6m                     1/1    Running  0         3m47s
    ocsepp-release-sepp-nrf-client-nfdiscovery-65747884cd-qblqn   1/1    Running  0         141m
    ocsepp-release-sepp-nrf-client-nfmanagement-5dd6ff98d6-cr7s7  1/1    Running  0         141m
    ocsepp-release-nf-mediation-74bd4dc799-d9ks2                  1/1    Running  0         141m
    ocsepp-release-coherence-svc-54f7987c4b-wv4h7                 1/1    Running  0         141m
    
    
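    If some pods are still coming up, a quick filter, assuming the seppsvc namespace, lists only the pods that are not yet in the Running phase:

    $ kubectl get pods -n seppsvc --field-selector=status.phase!=Running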

Note:

  • Take a backup of the following files that are required during fault recovery:
    • Updated ocsepp-custom-values-23.4.3.yaml file
    • Updated Helm charts
    • Secrets, certificates, and keys that are used during installation
  • If the installation is not successful or you do not see the status as Running for all the pods, perform the troubleshooting steps provided in Oracle Communications Cloud Native Core, Security Edge Protection Proxy Troubleshooting Guide.
2.2.3.2 Performing Helm Test

This section describes how to perform a sanity check of the SEPP installation through Helm test. The pods to be checked are selected based on the namespace and label selector configured in the Helm test configuration.

Helm Test is a feature that validates successful installation of SEPP and determines if the NF is ready to take traffic.

This test also checks that all the PVCs are in the Bound state under the release namespace and the configured label selector.

Note:

Helm Test can be performed only on Helm3.

Perform the following Helm test procedure:

  1. Configure the Helm test configurations under the global parameters section of the values.yaml file as follows:
    #helm test configuration
    test:
      imageRepository: cgbu-ocsepp-dev-docker.dockerhub-phx.oci.oraclecorp.com
      nfName: ocsepp
      image:
        name: nf_test
        tag: helm_sepp_test_tag
        pullPolicy: Always
      config:
        logLevel: WARN
        # Configure timeout in SECONDS.
        # Estimated total time required for SEPP deployment and helm test command completion
        timeout: 180
      resources:
        requests:
          cpu: 1
          memory: 1Gi
          #ephemeral-storage: 70Mi
        limits:
          cpu: 1
          memory: 1Gi
          #ephemeral-storage: 1Gi
      complianceEnable: true
      k8resources:
        - horizontalpodautoscalers/v1
        - deployments/v1
        - configmaps/v1
        - prometheusrules/v1
        - serviceaccounts/v1
        - poddisruptionbudgets/v1
        - roles/v1
        - services/v1
        - rolebindings/v1
    
  2. Run the following Helm test command:
    helm test <release_name> -n <namespace>

    Where,

    <release_name> is the release name.

    <namespace> is the deployment namespace where SEPP is installed.

    Example:
    helm test ocsepp-release -n seppsvc
    Sample Output:
    [admusr@cnejac0101-bastion-2 ocsepp-22.4.0-0]$  helm test ocsepp-release -n seppsvc
    NAME: ocsepp-release
    LAST DEPLOYED: Fri Aug 19 04:56:36 2022
    NAMESPACE: seppsvc
    STATUS: deployed
    REVISION: 1
    TEST SUITE:     ocsepp-release
    Last Started:   Fri Aug 19 05:02:03 2022
    Last Completed: Fri Aug 19 05:02:26 2022
    Phase:          Succeeded
       

If the Helm test fails, see Oracle Communications Cloud Native Core, Security Edge Protection Proxy Troubleshooting Guide.

2.2.3.3 Taking a Backup
Take a backup of the following files, which are required during fault recovery:
  • Current custom-values.yaml file from which you are upgrading
  • Updated ocsepp_custom_values_<version>.yaml file
  • Updated Helm charts
  • Secrets, certificates, and keys that are used during installation
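As a hedged illustration of backing up the Kubernetes secrets, the objects can be exported to files; the secret and namespace names follow the examples used earlier in this chapter:

# Export the PLMN and N32 TLS secrets for safekeeping.
kubectl get secret ocsepp-plmn-secret -n seppsvc -o yaml > ocsepp-plmn-secret-backup.yaml
kubectl get secret ocsepp-n32-secret -n seppsvc -o yaml > ocsepp-n32-secret-backup.yaml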