2 Installing SEPP

This chapter describes how to install SEPP using Command Line Interface (CLI) procedures. The CLI provides an interface to run the commands required for the SEPP deployment process.

The SEPP installation is supported over the following platforms:

  • Oracle Communications Cloud Native Core, Cloud Native Environment (CNE). For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
  • Oracle Cloud Infrastructure (OCI). For more information, see Oracle Communications Cloud Native Core, OCI Deployment Guide.
  • General Kubernetes environment.

Note:

SEPP supports fresh installation and can also be upgraded from an earlier release. For more information on how to upgrade SEPP, see the Upgrading SEPP section.

The user can install either SEPP or Roaming Hub/Hosted SEPP. The installation procedure comprises prerequisites, predeployment configuration, installation, and postinstallation tasks. You must perform the installation tasks in the sequence outlined in the following table:

Table 2-1 SEPP or Roaming Hub/Hosted SEPP Installation Sequence

| Task | Sub tasks | Applicable for SEPP Installation (CNE Deployment) | Applicable for Roaming Hub or Hosted SEPP Installation (CNE Deployment) | Applicable for OCI Deployment |
|------|-----------|---------------------------------------------------|--------------------------------------------------------------------------|-------------------------------|
| Prerequisites: This section describes how to set up the installation environment. | Prerequisites | Yes | Yes | Yes |
| | Software Requirements | Yes | Yes | Yes |
| | Environment Setup Requirements | Yes | Yes | Yes |
| | Resource Requirements | SEPP Resource Requirements | Roaming Hub or Hosted SEPP Resource Requirements | SEPP Resource Requirements |
| Preinstallation Tasks: This section describes how to create the namespace and database and configure Kubernetes secrets. | Preinstallation Tasks | Yes | Yes | Yes |
| | Downloading SEPP package | Yes | Yes | Yes |
| | Pushing the SEPP Images to Customer Docker Registry | Yes | No | No |
| | Pushing the Roaming Hub or Hosted SEPP Images to Customer Docker Registry | No | Yes | Yes |
| | Pushing the SEPP Images to OCI Docker Registry | No | No | Yes |
| | Verifying and Creating SEPP Namespace | Yes | Yes | Yes |
| | Configuring Database, Creating Users, and Granting Permissions | Yes | Yes | Yes |
| | Configuring Kubernetes Secrets for Accessing SEPP Database | Yes | Yes | Yes |
| | Configuring Kubernetes Secret for Enabling HTTPS/HTTP over TLS | Yes | Yes | Yes |
| Installation Tasks: This section describes how to download the SEPP package, install SEPP, and verify the installation. | Installation Tasks | | | |
| Installing SEPP / Roaming Hub | Installing SEPP/Roaming Hub/Hosted SEPP | Installing SEPP | Installing Roaming Hub or Hosted SEPP | Installing SEPP |
| Verifying SEPP Installation | Verifying SEPP Installation | Yes | Yes | Yes |
| PodDisruptionBudget Kubernetes Resource | PodDisruptionBudget Kubernetes Resource | Yes | Yes | Yes |
| Customizing SEPP | Customizing SEPP | Yes | Yes | Yes |
| Upgrading SEPP | Upgrading SEPP | Yes | Yes | Yes |
| Rollback SEPP deployment | Rollback SEPP deployment | Yes | Yes | Yes |
| Uninstalling SEPP | Uninstalling SEPP | Yes | Yes | Yes |
| Fault Recovery | Fault Recovery | Yes | Yes | Yes |

2.1 Prerequisites

Before installing and configuring SEPP, ensure that the following prerequisites are met.

2.1.1 Software Requirements

This section lists the software that must be installed before installing SEPP:

Note:

Table 2-2 and Table 2-3 offer a comprehensive list of software necessary for the proper functioning of SEPP during deployment. However, these tables are indicative, and the software used can vary based on the customer's specific requirements and solution.

The Software Requirement column in Table 2-2 and Table 2-3 indicates one of the following:

  • Mandatory: Absolutely essential; the software cannot function without it.
  • Recommended: Suggested for optimal performance or best practices but not strictly necessary.
  • Conditional: Required only under specific conditions or configurations.
  • Optional: Not essential; can be included based on specific use cases or preferences.

Table 2-2 Preinstalled Software Versions

Software 25.2.1xx 25.1.2xx 25.1.1xx Software Requirement Usage Description
Kubernetes 1.33.1 1.32.0 1.31.1 Mandatory

Kubernetes orchestrates scalable, automated NF deployments for high availability and efficient resource utilization.

Impact:

Preinstallation is required. Without orchestration capabilities, deploying and managing network functions (NFs) can become complex, leading to inefficient resource utilization and potential downtime.

Helm 3.18.x 3.17.1 3.16.2 Mandatory

Helm, a package manager, simplifies deploying and managing NFs on Kubernetes with reusable, versioned charts for easy automation and scaling.

Impact:

Preinstallation is required. Without this capability, management of NF versions and configurations becomes time-consuming and error-prone, impacting deployment consistency.

Podman 5.2.2 5.2.2 4.9.4 Recommended

Podman is a part of Oracle Linux. It manages and runs containerized NFs without requiring a daemon, offering flexibility and compatibility with Kubernetes.

Impact:

Preinstallation is required. Without efficient container management, the development and deployment of NFs could become cumbersome, impacting agility.

To check the current CNE, Helm, Kubernetes, or Podman version installed, run the following commands:
echo $OCCNE_VERSION
helm version
kubectl version
podman version

Note:

This guide covers the installation instructions for SEPP when Podman is the container platform and Helm is the package manager. For non-CNE deployments, the operator can use the commands that correspond to the deployed container runtime environment. For more information, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

Note:

Run the podman version or docker version command based on the container engine installed.

If you are deploying SEPP in a cloud native environment, the following additional software must be installed before installing SEPP.

Table 2-3 Additional Software Versions

Software 25.2.1xx 25.1.2xx 25.1.1xx Software Requirement Usage Description
AlertManager 0.28.0 0.28.0 0.27.0 Recommended

Alertmanager is a component that works in conjunction with Prometheus to manage and dispatch alerts. It handles the routing and notification of alerts to various receivers.

Impact:

Not implementing alerting mechanisms can lead to delayed responses to critical issues, potentially resulting in service outages or degraded performance.

Calico 3.29.3 3.29.1 3.28.1 Recommended

Calico provides networking and security for NFs in Kubernetes, ensuring scalable, policy-driven connectivity.

Impact:

Calico is a popular Container Network Interface (CNI) and CNI is mandatory for the functioning of 5G NFs. Without a CNI plugin, the network could witness security vulnerabilities and inadequate traffic management, impacting the reliability of NF communications.

cinder-csi-plugin 1.32.0 1.32.0 1.31.1 Mandatory

Cinder CSI (Container Storage Interface) plugin is used for provisioning and managing block storage in Kubernetes. It is often used in OpenStack environments to provide persistent storage for containerized applications

Impact:

Without the CSI plugin, provisioning block storage for NFs would be manual and inefficient, complicating storage management.

containerd 2.0.5 1.7.24 1.7.22 Recommended

Containerd manages container lifecycles to run NFs efficiently in Kubernetes.

Impact:

A lack of a reliable container runtime could lead to performance issues and instability in NF operations.

CoreDNS 1.12.0 1.11.13 1.11.1 Recommended

CoreDNS is the DNS server in Kubernetes, which provides DNS resolution services within the cluster.

Impact:

DNS is an essential part of deployment. Without proper service discovery, NFs would struggle to communicate with each other, leading to connectivity issues and operational failures.

Fluentd 1.17.1 1.17.1 1.17.1 Recommended

Fluentd is an open source data collector that streamlines data collection and consumption, ensuring improved data utilization and comprehension.

Impact:

Not utilizing centralized logging can hinder the ability to track NF activity and troubleshoot issues effectively, complicating maintenance and support.

Grafana 7.5.17 (OCI Grafana) 9.5.3 9.5.3 Recommended

Grafana is a popular open source platform for monitoring and observability. It provides a user-friendly interface for creating and viewing dashboards based on various data sources.

Impact:

Without visualization tools, interpreting complex metrics and gaining insights into NF performance would be cumbersome, affecting effective management.

Jaeger 1.69.0 1.65.0 1.60.0 Recommended

Jaeger provides distributed tracing for 5G NFs, enabling performance monitoring and troubleshooting across microservices.

Impact:

Not utilizing distributed tracing may hinder the ability to diagnose performance bottlenecks, making it challenging to optimize NF interactions and user experience.

Kyverno 1.13.4 1.13.4 1.12.5 Recommended

Kyverno is a Kubernetes policy engine that manages and enforces policies for resource configurations within a Kubernetes cluster.

Impact:

Without policy enforcement, misconfigurations can occur, resulting in security risks and instability in NF operations, affecting reliability.

MetalLB 0.14.4 0.14.4 0.14.4 Recommended

MetalLB is used as a load balancing solution in CNE, which is mandatory for the solution to work. MetalLB provides load balancing and external IP management for 5G NFs in Kubernetes environments.

Impact:

Without load balancing, traffic distribution among NFs may be inefficient, leading to potential bottlenecks and service degradation.

metrics-server 0.7.2 0.7.2 0.7.2 Recommended

Metrics server is used in Kubernetes for collecting resource usage data from pods and nodes.

Impact:

Without resource metrics, auto-scaling and resource optimization would be limited, potentially leading to resource contention or underutilization.

Multus 4.1.3 4.1.3 3.8.0 Recommended

Multus enables multiple network interfaces in Kubernetes pods, allowing custom configurations and isolated paths for advanced use cases such as NF deployments, ultimately supporting traffic segregation.

Impact:

Without this capability, connecting NFs to multiple networks could be limited, impacting network performance and isolation.

OpenSearch 2.19.1 2.15.0 2.11.0 Recommended

OpenSearch provides scalable search and analytics for 5G NFs, enabling efficient data exploration and visualization.

Impact:

Without a robust analytics solution, there would be difficulties in identifying performance issues and optimizing NF operations, affecting overall service quality.

OpenSearch Dashboard 2.19.1 2.15.0 2.11.0 Recommended

OpenSearch dashboard visualizes and analyzes data for 5G NFs, offering interactive insights and custom reporting.

Impact:

Without visualization capabilities, understanding NF performance metrics and trends would be difficult, limiting informed decision making.

Prometheus 3.4.1 3.2.0 2.52.0 Mandatory

Prometheus is a popular open source monitoring and alerting toolkit. It collects and stores metrics from various sources and allows for alerting and querying.

Impact:

Not employing this monitoring solution could result in a lack of visibility into NF performance, making it difficult to troubleshoot issues and optimize resource usage.

prometheus-kube-state-metric 2.16.0 2.15.0 2.13.0 Recommended

Kube-state-metrics is a service that generates metrics about the state of various resources in a Kubernetes cluster. It's commonly used for monitoring and alerting purposes.

Impact:

Without these metrics, monitoring the health and performance of NFs could be challenging, making it harder to proactively address issues.

prometheus-node-exporter 1.9.1 1.8.2 1.8.2 Recommended

Prometheus Node Exporter collects hardware and OS-level metrics from Linux hosts.

Impact:

Without node-level metrics, visibility into infrastructure performance would be limited, complicating the identification of resource bottlenecks.

Prometheus Operator 0.83.0 0.80.1 0.76.0 Recommended

The Prometheus Operator is used for managing Prometheus monitoring systems in Kubernetes. Prometheus Operator simplifies the configuration and management of Prometheus instances.

Impact:

Not using this operator could complicate the setup and management of monitoring solutions, increasing the risk of missed performance insights.

rook 1.16.7 1.16.6 1.15.2 Mandatory

Rook is the Ceph orchestrator for Kubernetes that provides storage solutions. It is used in the Bare Metal CNE solution.

Impact:

Not utilizing Rook could increase the complexity of deploying and managing Ceph, making it difficult to scale storage solutions in a Kubernetes environment.

snmp-notifier 2.0.0 1.6.1 1.5.0 Recommended

snmp-notifier sends SNMP alerts for 5G NFs, providing real-time notifications for network events.

Impact:

Without SNMP notifications, proactive monitoring of NF health and performance could be compromised, delaying response to critical issues.

Velero 1.13.2 1.13.2 1.13.2 Recommended

Velero backs up and restores Kubernetes clusters for 5G NFs, ensuring data protection and disaster recovery.

Impact:

Without backup and recovery capabilities, customers risk data loss and extended downtime, and a full cluster reinstall may be required in case of failure or upgrade.

Important:

If you are using NRF with SEPP, install it before proceeding with the SEPP installation. SEPP 25.2.1xx supports NRF 25.2.1xx.

2.1.2 Environment Setup Requirements

This section describes the environment setup requirements for installing SEPP.

2.1.2.1 Client Machine Requirements

This section describes the requirements for the client machine, that is, the machine used to run deployment commands.

The client machine should have:
  • Helm repository configured.
    • To add a Helm repository, run the following command:
      helm repo add <helm-repo-name> <helm-repo-address>
      Where, <helm-repo-name> is the name of the Helm repository and <helm-repo-address> is the URL of the Helm repository.
    • To verify that Helm repository has been added successfully, run the following command:
      helm repo list 
      The output must show the added Helm repository in the list.
  • network access to the Helm repository and Docker image registry.
  • network access to the Kubernetes cluster.
  • required environment settings to run the kubectl, podman, or docker commands. The environment should have privileges to create a namespace in the Kubernetes cluster.
  • Helm client installed with the push plugin. Configure the environment in such a manner that the helm install command deploys the software in the Kubernetes cluster.
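
For example, to register a repository and confirm cluster access end to end, run commands such as the following; the repository name, URL, and namespace check below are illustrative placeholders, not values shipped with SEPP:

helm repo add ocsepp-helm-repo https://helm-repo.example.com/ocsepp
helm repo list
kubectl get nodes
kubectl auth can-i create namespaces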
2.1.2.2 Network Access Requirement

The Kubernetes cluster hosts must have network access to the following repositories:

  • Local Helm repository: It contains the SEPP helm charts.
    To check if the Kubernetes cluster hosts have network access to the local helm repository, run the following command:
    helm repo update
  • Local Docker image registry: It contains the SEPP Docker images.
    To check if the Kubernetes cluster hosts can access the local docker image registry, pull any image with an image-tag using either of the following commands:
    docker pull <docker-repo>/<image-name>:<image-tag>
    podman pull <podman-repo>/<image-name>:<image-tag>
    Where:
    • <docker-repo> is the IP address or host name of the Docker registry.
    • <image-name> is the Docker image name.
    • <image-tag> is the tag assigned to the Docker image used for the SEPP pod.

    Example:

    docker pull CUSTOMER_REPO/oc-app-info:25.2.100
    podman pull occne-repo-host:5000/occnp/oc-app-info:25.2.100
2.1.2.3 Server or Space Requirement

For information about server or space requirements, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

2.1.2.4 CNE Requirement

This section is applicable only if you are installing SEPP on Cloud Native Environment (CNE).

SEPP supports CNE 25.2.1xx, 25.1.2xx, and 25.1.1xx.

To check the CNE version, run the following command:

echo $OCCNE_VERSION

For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

2.1.2.5 cnDBTier Requirement

SEPP supports cnDBTier 25.2.1xx, 25.1.2xx, and 25.1.1xx. cnDBTier must be configured and running before installing SEPP. For more information about cnDBTier installation, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

For more information about the cnDBTier customizations required for SEPP, see the ocsepp_dbtier_<cnDBTier_version>_custom_values_<SEPP_version>.yaml file.

If you have already installed a version of cnDBTier, run the following command to upgrade your current cnDBTier installation using the ocsepp_dbtier_<cnDBTier_version>_custom_values_<SEPP_version>.yaml file:
helm upgrade <release-name> <chart-path> -f <cndb-custom-values.yaml> -n <namespace> 
For example:
helm upgrade mysql-cluster occndbtier/ -f ocsepp_dbtier_25.2.100_custom_values_25.2.100.yaml -n ocsepp-cndb
For more information about cnDBTier installation and upgrade procedure, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
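
To verify that the upgrade completed, you can check the release status and pod readiness; the release name and namespace below follow the preceding example:

helm status mysql-cluster -n ocsepp-cndb
kubectl get pods -n ocsepp-cndb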

Note:

In georedundant deployment, a dedicated cnDBTier must be installed and configured for each SEPP site.

Note:

Starting from release 25.1.100 onwards, the ndb_allow_copying_alter_table parameter in cnDBTier must be set to OFF.
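
The following is a minimal sketch of this setting, assuming the parameter is exposed under the mysqld configuration section of the cnDBTier custom values file; the exact key path can differ between cnDBTier releases, so verify it against the ocsepp_dbtier custom values file shipped with the release:

# Hypothetical excerpt from the cnDBTier custom values file.
# Only the parameter name and the OFF value come from this guide;
# the surrounding key structure is an assumption.
ndbconfigurations:
  mysqld:
    ndb_allow_copying_alter_table: 'OFF'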
2.1.2.6 OSO Requirement

SEPP supports Operations Services Overlay (OSO) 25.2.1xx for common operation services (Prometheus and components such as alertmanager, pushgateway) on a Kubernetes cluster, which does not have these common services. For more information about OSO installation, see Oracle Communications Cloud Native Core, Operations Services Overlay Installation and Upgrade Guide.

2.1.2.7 CNC Console Requirements

SEPP supports CNC Console 25.2.1xx to configure and manage Network Functions. For more information, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.

2.1.2.8 OCCM Requirements

SEPP supports OCCM 25.2.1xx. To support automated certificate lifecycle management, SEPP integrates with Oracle Communications Cloud Native Core, Certificate Management (OCCM) in compliance with 3GPP security recommendations. For more information about OCCM in SEPP, see the "Support for Automated Certificate Lifecycle Management" section in Oracle Communications Cloud Native Core, Security Edge Protection Proxy User Guide.

For more information about OCCM, see the following guides:

  • Oracle Communications Cloud Native Core, Certificate Manager Installation, Upgrade, and Fault Recovery Guide
  • Oracle Communications Cloud Native Core, Certificate Manager User Guide
2.1.2.9 OCI Requirements

SEPP can be deployed on OCI.

For more information about OCI deployment, see Oracle Communications Cloud Native Core, OCI Adaptor Deployment Guide.

2.1.3 SEPP Resource Requirements

This section lists the resource requirements to install and run SEPP.

Note:

The performance and capacity of the SEPP system may vary based on the call model, Feature/Interface configuration, and underlying CNE and hardware environment.
2.1.3.1 SEPP Services

The following table lists the resource requirements for SEPP services:

Table 2-4 SEPP Services

Service Name  CPU Min  CPU Max  Memory Min (GB)  Memory Max (GB)  Pod Min  Pod Max  Ephemeral Storage Min (Gi)  Ephemeral Storage Max (Gi)
Helm Test 1 1 1 1 1 1 70Mi 1
<helm-release-name>-n32-ingress-gateway 6 6 5 5 7 7 1 2
<helm-release-name>-n32-egress-gateway 5 5 5 5 7 7 1 1
<helm-release-name>-plmn-ingress-gateway 5 5 5 5 7 7 1 2
<helm-release-name>-plmn-egress-gateway 5 5 5 5 7 7 1 1
<helm-release-name>-pn32f-svc 5 5 8 8 7 7 2 2
<helm-release-name>-cn32f-svc 5 5 8 8 7 7 2 2
<helm-release-name>-cn32c-svc 2 2 2 2 2 2 1 1
<helm-release-name>-pn32c-svc 2 2 2 2 2 2 1 1
<helm-release-name>-config-mgr-svc 2 2 2 2 1 1 1 1
<helm-release-name>-sepp-nrf-client-nfdiscovery 2 2 2 2 2 2 1 1
<helm-release-name>-sepp-nrf-client-nfmanagement 2 2 2 2 2 2 1 1
<helm-release-name>-ocpm-config 1 1 1 1 2 2 1 1
<helm-release-name>-appinfo 1 1 1 2 2 2 1 1
<helm-release-name>-perf-info 2 2 4 4 2 2 1 1
<helm-release-name>-nf-mediation 8 8 8 8 2 2 NA NA
<helm-release-name>-coherence-svc 4 4 4 4 1 1 2 2
<helm-release-name>-alternate-route 2 2 4 4 2 2 NA NA
Total 60 60 70 70 63 63 17.7 Gi 20
Where,
  • <helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is "ocsepp-release", the cn32f-svc microservice name is "ocsepp-release-cn32f-svc".
  • The init-service container's and Common Configuration Client Hook's resources are not counted because these containers are terminated after initialization completes.
  • Helm Hooks Jobs: These are pre and post jobs that are invoked during installation, upgrade, rollback, and uninstallation of the deployment. They are short-lived jobs that are terminated when the deployment operation completes.
  • Helm Test Job: This job runs on demand when the helm test command is initiated and stops after completion. It is not part of the active deployment resources and is considered only during Helm test procedures (see the example after the note below).

Note:

If you enable the Message Feed feature at Ingress Gateway and Egress Gateway, approximately 33% of the pod capacity is impacted.
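
For reference, the Helm test job described in the notes above is started on demand with the standard Helm command; the release name and namespace here are illustrative:

helm test ocsepp-release -n ocsepp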
2.1.3.2 Upgrade

The following table lists the resource requirements for upgrading SEPP:

Table 2-5 Upgrade

Service Name  CPU Min  CPU Max  Memory Min (GB)  Memory Max (GB)  Pod Min  Pod Max  Ephemeral Storage Min (Gi)  Ephemeral Storage Max (Gi)
Helm test 0 0 0 0 0 0 0 0
Helm Hook 0 0 0 0 0 0 0 0
<helm-release-name>-n32-ingress-gateway 6 6 5 5 0 0 1 1
<helm-release-name>-n32-egress-gateway 5 5 5 5 0 0 1 1
<helm-release-name>-plmn-ingress-gateway 5 5 5 5 0 0 1 1
<helm-release-name>-plmn-egress-gateway 5 5 5 5 0 0 1 1
<helm-release-name>-pn32f-svc 5 5 8 8 0 0 2 2
<helm-release-name>-cn32f-svc 5 5 8 8 0 0 2 2
<helm-release-name>-cn32c-svc 2 2 2 2 0 0 1 1
<helm-release-name>-pn32c-svc 2 2 2 2 0 0 1 1
<helm-release-name>-config-mgr-svc 2 2 2 2 0 0 1 1
<helm-release-name>-sepp-nrf-client-nfdiscovery 1 1 2 2 0 0 1 1
<helm-release-name>-sepp-nrf-client-nfmanagement 1 1 1 1 0 0 1 1
<helm-release-name>-ocpm-config 1 1 1 1 0 0 1 1
<helm-release-name>-appinfo 1 1 1 2 0 0 1 1
<helm-release-name>-perf-info 2 2 200Mi 4 0 0 1 1
<helm-release-name>-nf-mediation 8 8 8 8 0 0 1 1
<helm-release-name>-coherence-svc 1 1 2 2 0 0 NA NA
<helm-release-name>-alternate-route 2 2 4 4 0 0 NA NA
Total 54 54 61.2 66 0 0 17 Gi 17 Gi

Note:

  • MaxSurge is set to 0.
  • <helm-release-name> is the Helm release name. For example, if the Helm release name is "ocsepp-release", the cn32f-svc microservice name is "ocsepp-release-cn32f-svc".
2.1.3.3 Common Services Container

The following table lists the resource requirements for the Common Services Container:

Table 2-6 Common Services Container

Container Name CPU Memory (GB) Kubernetes Init Container
init-service 1 1 Y
common_config_hook 1 1 N
mediation_hook 2 2 N
  • Init Container service: Ingress or Egress Gateway services use this container to get the OCSEPP private key or certificate and the CA root certificate for TLS during startup.
  • Common Configuration Hook: It is used for creating the database for common service configuration.
2.1.3.4 Service Mesh Sidecar

SEPP leverages the Platform Service Mesh (for example, Aspen Service Mesh) for all internal and external TLS communication. If ASM sidecar injection is enabled during SEPP deployment or upgrade, this container is injected into each pod (or selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists.
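
As an illustration, for an Istio-based mesh such as ASM, sidecar injection is commonly enabled by labeling the target namespace; the namespace name below is a placeholder, and the exact injection mechanism for a given ASM release may differ:

kubectl label namespace ocsepp istio-injection=enabled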

Table 2-7 Service Mesh Sidecar

Service Name  CPU Min  CPU Max  Memory Min (GB)  Memory Max (GB)  Ephemeral Storage Min (Mi)  Ephemeral Storage Max (Gi)
<helm-release-name>-n32-ingress-gateway 1 1 1 1 70 1
<helm-release-name>-n32-egress-gateway 1 1 1 1 70 1
<helm-release-name>-plmn-ingress-gateway 1 1 1 1 70 1
<helm-release-name>-plmn-egress-gateway 1 1 1 1 70 1
<helm-release-name>-pn32f-svc 1 1 1 1 70 1
<helm-release-name>-cn32f-svc 1 1 1 1 70 1
<helm-release-name>-cn32c-svc 1 1 1 1 70 1
<helm-release-name>-pn32c-svc 1 1 1 1 70 1
<helm-release-name>-config-mgr-svc 1 1 1 1 70 1
<helm-release-name>-sepp-nrf-client-nfdiscovery 1 1 1 1 70 1
<helm-release-name>-sepp-nrf-client-nfmanagement 1 1 1 1 70 1
<helm-release-name>-ocpm-config 1 1 1 1 70 1
<helm-release-name>-appinfo 1 1 1 1 70 1
<helm-release-name>-perf-info 1 1 1 1 70 1
<helm-release-name>-nf-mediation 1 1 1 1 70 1
<helm-release-name>-coherence-svc 1 1 1 1 NA NA
<helm-release-name>-alternate-route 1 1 1 1 NA NA
Total 17 17 17 17 1050 Mi 15 Gi

Note:

<helm-release-name> is the Helm release name. For example, if the Helm release name is "ocsepp-release", the cn32f-svc microservice name is "ocsepp-release-cn32f-svc".
2.1.3.5 Debug Tool Container

The Debug Tool provides third-party troubleshooting tools for debugging runtime issues in a lab environment. If Debug Tool Container injection is enabled during SEPP deployment or upgrade, this container is injected into each SEPP pod (or selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists. For more information about configuring the Debug Tool, see Oracle Communications Cloud Native Core, Security Edge Protection Proxy Troubleshooting Guide.

Table 2-8 Debug Tool Container

Service Name  CPU Min  CPU Max  Memory Min (Gi)  Memory Max (Gi)  Ephemeral Storage Min (Mi)  Ephemeral Storage Max (Mi)
<helm-release-name>-n32-ingress-gateway 0.5 1 4 4 512 512
<helm-release-name>-n32-egress-gateway 0.5 1 4 4 512 512
<helm-release-name>-plmn-ingress-gateway 0.5 1 4 4 512 512
<helm-release-name>-plmn-egress-gateway 0.5 1 4 4 512 512
<helm-release-name>-pn32f-svc 0.5 1 4 4 512 512
<helm-release-name>-cn32f-svc 0.5 1 4 4 512 512
<helm-release-name>-cn32c-svc 0.5 1 4 4 512 512
<helm-release-name>-pn32c-svc 0.5 1 4 4 512 512
<helm-release-name>-config-mgr-svc 0.5 1 4 4 512 512
<helm-release-name>-sepp-nrf-client-nfdiscovery 0.5 1 4 4 512 512
<helm-release-name>-sepp-nrf-client-nfmanagement 0.5 1 4 4 512 512
<helm-release-name>-ocpm-config 0.5 1 4 4 512 512
<helm-release-name>-appinfo 0.5 1 4 4 512 512
<helm-release-name>-perf-info 0.5 1 4 4 512 512
<helm-release-name>-nf-mediation 0.5 1 4 4 512 512
<helm-release-name>-coherence-svc NA NA NA NA NA NA
<helm-release-name>-alternate-route 0.5 1 4 4 NA NA
Total 8 16 64 64 7680 Mi 7680 Mi

Note:

<helm-release-name> is the Helm release name. For example, if the Helm release name is "ocsepp-release", the plmn-egress-gateway microservice name is "ocsepp-release-plmn-egress-gateway".

2.1.3.6 SEPP Hooks

The following table lists the resource requirements for SEPP Hooks:

Table 2-9 SEPP Hooks

Hook Name  CPU Min  CPU Max  Memory Min (GB)  Memory Max (GB)
<helm-release-name>-update-db-pre-install 1 2 1 2
<helm-release-name>-update-db-post-install 1 2 1 2
<helm-release-name>-update-db-pre-upgrade 1 2 1 2
<helm-release-name>-update-db-post-upgrade 1 2 1 2
<helm-release-name>-update-db-pre-rollback 1 2 1 2
<helm-release-name>-update-db-post-rollback 1 2 1 2
<helm-release-name>-pn32f-svc-pre-install 1 2 1 2
<helm-release-name>-pn32f-svc-post-install 1 2 1 2
<helm-release-name>-pn32f-svc-pre-upgrade 1 2 1 2
<helm-release-name>-pn32f-svc-post-upgrade 1 2 1 2
<helm-release-name>-pn32f-svc-pre-rollback 1 2 1 2
<helm-release-name>-pn32f-svc-post-rollback 1 2 1 2
<helm-release-name>-cn32f-svc-pre-install 1 2 1 2
<helm-release-name>-cn32f-svc-post-install 1 2 1 2
<helm-release-name>-cn32f-svc-pre-upgrade 1 2 1 2
<helm-release-name>-cn32f-svc-post-upgrade 1 2 1 2
<helm-release-name>-cn32f-svc-pre-rollback 1 2 1 2
<helm-release-name>-cn32f-svc-post-rollback 1 2 1 2
<helm-release-name>-cn32c-svc-pre-install 1 2 1 2
<helm-release-name>-cn32c-svc-post-install 1 2 1 2
<helm-release-name>-cn32c-svc-pre-upgrade 1 2 1 2
<helm-release-name>-cn32c-svc-post-upgrade 1 2 1 2
<helm-release-name>-cn32c-svc-pre-rollback 1 2 1 2
<helm-release-name>-cn32c-svc-post-rollback 1 2 1 2
<helm-release-name>-pn32c-svc-pre-install 1 2 1 2
<helm-release-name>-pn32c-svc-post-install 1 2 1 2
<helm-release-name>-pn32c-svc-pre-upgrade 1 2 1 2
<helm-release-name>-pn32c-svc-post-upgrade 1 2 1 2
<helm-release-name>-pn32c-svc-pre-rollback 1 2 1 2
<helm-release-name>-pn32c-svc-post-rollback 1 2 1 2
<helm-release-name>-config-mgr-svc-pre-install 1 2 1 2
<helm-release-name>-config-mgr-svc-post-install 1 2 1 2
<helm-release-name>-config-mgr-svc-pre-upgrade 1 2 1 2
<helm-release-name>-config-mgr-svc-post-upgrade 1 2 1 2
<helm-release-name>-config-mgr-svc-pre-rollback 1 2 1 2
<helm-release-name>-config-mgr-svc-post-rollback 1 2 1 2

Note:

<helm-release-name> is the Helm release name.
2.1.3.7 CNC Console

Oracle Communications Cloud Native Configuration Console (CNC Console) is a Graphical User Interface (GUI) for NFs and Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) common services.

For information about CNC Console resources required by SEPP, see "CNC Console Resource Requirement" section in Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.

2.1.3.8 cnDBTier

cnDBTier is the geodiverse database layer provided as part of the Cloud Native Environment. It provides persistent storage for the state data and subscriber data in a cloud environment. Any Kubernetes environment with dynamic Kubernetes storage supports cnDBTier installation.

Table 2-10 cnDBTier

Detailed cnDBTier Resource  vCPU per Pod  Memory per Pod (GB)  Max Replicas  Total vCPU  Total Memory (GB)  PVC Storage (GB)  Ephemeral Storage (GB)
SQL - Replication (ndbmysqld) StatefulSet 3 4 4 12 16 60 0.1
MGMT (ndbmgmd) StatefulSet 3 4 2 6 8 15 0.1
DB (ndbmtd) StatefulSet 4 12 4 16 48 60 0.1
db-backup-manager-svc 1 1 1 1 1 0 0.1
db-replication-svc 1 2 1 1 2 60 0.1
db-monitor-svc 4 4 1 4 4 0 0.1
db-connectivity-service 0 0 0 0 0 0 0
SQL - Access (ndbappmysqld) StatefulSet 5 10 2 10 20 20 0.1
grrecoveryresources 2 12 2 4 24 0 0
Total - - 17 54 123 215 0.7

Note:

  • Node profiles in the above table are for two-site replication cnDBTier clusters. Modify the ndbmysqld and Replication Service pods based on the number of georeplication sites.
  • If any service requires vertical scaling of its PVC, see the respective subsection in the "Vertical Scaling" section of Oracle Communications Cloud Native Core, cnDBTier User Guide.
  • PVC shrinking (downsizing) is not supported. It is recommended to retain the existing vertically scaled-up PVC sizes, even though cnDBTier is rolled back to previous releases.

For information about cnDBTier resources required by SEPP, see the "Resource Requirement" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

2.1.3.8.1 cnDBTier Sidecars

The following table indicates the sidecars for cnDBTier services.

Table 2-11 Sidecars per cnDBTier Service

Service Name init-sidecar db-executor-svc init-discover-sql-ips db-infra-monitor-svc
MGMT (ndbmgmd) No No No Yes
DB (ndbmtd) No Yes No Yes
SQL (ndbmysqld) Yes No No Yes
SQL (ndbappmysqld) Yes No No Yes
Monitor Service (db-monitor-svc) No No No No
Backup Manager Service (db-backup-manager-svc) No No No No
Replication Service No No Yes No

Table 2-12 cnDBTier Additional Containers

Sidecar  CPU/Pod Min  CPU/Pod Max  Memory/Pod Min (Gi)  Memory/Pod Max (Gi)  PVC1 Size (Gi)  PVC2 Size (Gi)  Ephemeral Storage Min (Mi)  Ephemeral Storage Max (Gi)
db-executor-svc 1 1 2 2 NA NA 90 1
init-sidecar 0.1 0.1 0.25 0.25 NA NA 90 1
init-discover-sql-ips 0.2 0.2 0.5 0.5 NA NA 90 1
db-infra-monitor-svc 0.1 0.1 0.25 0.25 NA NA 90 1
2.1.3.8.2 Service Mesh Sidecar

If SEPP is deployed with ASM, the user must add the following annotation in the ocsepp_dbtier_<cnDBTier_version>_custom_values_<SEPP_version>.yaml file.


Table 2-13 Default Values for Service Mesh Specific Annotations

Parameter Name Annotations
db-monitor-svc.podAnnotations traffic.sidecar.istio.io/excludeInboundPorts: "8081,8080"
Example:

db-monitor-svc:
  podAnnotations:
    traffic.sidecar.istio.io/excludeInboundPorts: "8081,8080"

2.1.4 Roaming Hub or Hosted SEPP Resource Requirements

This section lists the resource requirements to install and run Roaming Hub or Hosted SEPP.

2.1.4.1 Roaming Hub or Hosted SEPP Services

The following table lists the resource requirements for SEPP services for Roaming Hub or Hosted SEPP:

Table 2-14 SEPP Services for Roaming Hub or Hosted SEPP

Service Name  CPU Min  CPU Max  Memory Min (GB)  Memory Max (GB)  Pod Min  Pod Max  Ephemeral Storage Min (Mi)  Ephemeral Storage Max (Gi)
Helm Test 1 1 1 1 1 1 70 1
<helm-release-name>-n32-ingress-gateway 6 6 5 5 2 2 1 2
<helm-release-name>-n32-egress-gateway 5 5 5 5 2 2 1 1
<helm-release-name>-plmn-ingress-gateway 5 5 5 5 2 2 1 2
<helm-release-name>-plmn-egress-gateway 5 5 5 5 2 2 2 2
<helm-release-name>-pn32f-svc 5 5 8 8 2 2 2 2
<helm-release-name>-cn32f-svc 5 5 8 8 2 2 1 1
<helm-release-name>-cn32c-svc 2 2 2 2 2 2 1 1
<helm-release-name>-pn32c-svc 2 2 2 2 2 2 1 1
<helm-release-name>-config-mgr-svc 2 2 2 2 1 1 1 1
<helm-release-name>-perf-info 2 2 200Mi 4 2 2 1 1
<helm-release-name>-nf-mediation 8 8 8 8 2 2 NA NA
Total 48 48 51.2 55 22 22 12.70 15 Gi

Note:

  • <helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is "ocsepp-release", the cn32f-svc microservice name is "ocsepp-release-cn32f-svc".
  • The init-service container's and Common Configuration Client Hook's resources are not counted because these containers are terminated after initialization completes.
  • Helm Hooks Jobs: These are pre and post jobs that are invoked during installation, upgrade, rollback, and uninstallation of the deployment. They are short-lived jobs that are terminated when the deployment operation completes.
  • Helm Test Job: This job runs on demand when the helm test command is initiated and stops after completion. It is not part of the active deployment resources and is considered only during Helm test procedures.
2.1.4.2 Upgrade

The following table lists the resource requirements for upgrading Roaming Hub or Hosted SEPP:

Table 2-15 Upgrade

Service Name  CPU Min  CPU Max  Memory Min (GB)  Memory Max (GB)  Pod Min  Pod Max  Ephemeral Storage Min (Mi)  Ephemeral Storage Max (Gi)
Helm test 0 0 0 0 0 0 0 0
Helm Hook 0 0 0 0 0 0 0 0
<helm-release-name>-n32-ingress-gateway 6 6 5 5 1 2 70 1
<helm-release-name>-n32-egress-gateway 5 5 5 5 1 2 70 1
<helm-release-name>-plmn-ingress-gateway 5 5 5 5 1 2 70 1
<helm-release-name>-plmn-egress-gateway 5 5 5 5 1 2 70 1
<helm-release-name>-pn32f-svc 5 5 8 8 1 2 70 1
<helm-release-name>-cn32f-svc 5 5 8 8 1 3 70 1
<helm-release-name>-cn32c-svc 2 2 2 2 1 1 70 1
<helm-release-name>-pn32c-svc 2 2 2 2 1 1 70 1
<helm-release-name>-config-mgr-svc 2 2 2 2 1 1 70 1
<helm-release-name>-perf-info 2 2 200Mi 4 1 1 70 1
<helm-release-name>-nf-mediation 8 8 8 8 1 1 70 1
Total 47 47 50.2 54 11 18 770 Mi 11 Gi

Note:

<helm-release-name> is the Helm release name. For example, if the Helm release name is "ocsepp-release", the cn32f-svc microservice name is "ocsepp-release-cn32f-svc".
2.1.4.3 Common Services Container

The following table lists the resource requirements for the Common Services Container:

Table 2-16 Common Services Container

Container Name CPU Memory (GB) Kubernetes Init Container
init-service 1 1 Y
common_config_hook 1 1 N

Note:

  • Init Container service: Ingress or Egress Gateway services use this container to get the SEPP private key or certificate and the CA root certificate for TLS during startup.
  • Common Configuration Hook: It is used for creating the database for common service configuration.
2.1.4.4 ASM Sidecar

Note:

In Roaming Hub or Hosted SEPP mode, ASM is not supported.
2.1.4.5 Debug Tool Container

The Debug Tool provides third-party troubleshooting tools for debugging runtime issues in a lab environment. If Debug Tool Container injection is enabled during Roaming Hub/Hosted SEPP deployment or upgrade, this container is injected into each Roaming Hub/Hosted SEPP pod (or selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists. For more information about configuring the Debug Tool, see Oracle Communications Cloud Native Core, Security Edge Protection Proxy Troubleshooting Guide.

Table 2-17 Debug Tool Container

Service Name  CPU Min  CPU Max  Memory Min (Gi)  Memory Max (Gi)  Ephemeral Storage Min (Mi)  Ephemeral Storage Max (Mi)
Helm Test 0 0 0 0 512 512
Helm Hook 0 0 0 0 512 512
<helm-release-name>-n32-ingress-gateway 0.5 1 4 4 512 512
<helm-release-name>-n32-egress-gateway 0.5 1 4 4 512 512
<helm-release-name>-plmn-ingress-gateway 0.5 1 4 4 512 512
<helm-release-name>-plmn-egress-gateway 0.5 1 4 4 512 512
<helm-release-name>-pn32f-svc 0.5 1 4 4 512 512
<helm-release-name>-cn32f-svc 0.5 1 4 4 512 512
<helm-release-name>-cn32c-svc 0.5 1 4 4 512 512
<helm-release-name>-pn32c-svc 0.5 1 4 4 512 512
<helm-release-name>-config-mgr-svc 0.5 1 4 4 512 512
<helm-release-name>-perf-info 0.5 1 4 4 512 512
<helm-release-name>-nf-mediation 0.5 1 4 4 512 512
Total 5.5 11 44 44 6656 Mi 6656 Mi

Note:

<helm-release-name> is the Helm release name. For example, if the Helm release name is "ocsepp", the cn32f-svc microservice name is "ocsepp-cn32f-svc".
2.1.4.6 SEPP Hooks

The following table lists the resource requirements for SEPP Hooks.

Table 2-18 SEPP Hooks

Hook Name  CPU Min  CPU Max  Memory Min (GB)  Memory Max (GB)
<helm-release-name>-update-db-pre-install 1 1 1 1
<helm-release-name>-update-db-post-install 1 1 1 1
<helm-release-name>-update-db-pre-upgrade 1 1 1 1
<helm-release-name>-update-db-post-upgrade 1 1 1 1
<helm-release-name>-update-db-pre-rollback 1 1 1 1
<helm-release-name>-update-db-post-rollback 1 1 1 1
<helm-release-name>-pn32f-svc-pre-install 1 1 1 1
<helm-release-name>-pn32f-svc-post-install 1 1 1 1
<helm-release-name>-pn32f-svc-pre-upgrade 1 1 1 1
<helm-release-name>-pn32f-svc-post-upgrade 1 1 1 1
<helm-release-name>-pn32f-svc-pre-rollback 1 1 1 1
<helm-release-name>-pn32f-svc-post-rollback 1 1 1 1
<helm-release-name>-cn32f-svc-pre-install 1 1 1 1
<helm-release-name>-cn32f-svc-post-install 1 1 1 1
<helm-release-name>-cn32f-svc-pre-upgrade 1 1 1 1
<helm-release-name>-cn32f-svc-post-upgrade 1 1 1 1
<helm-release-name>-cn32f-svc-pre-rollback 1 1 1 1
<helm-release-name>-cn32f-svc-post-rollback 1 1 1 1
<helm-release-name>-cn32c-svc-pre-install 1 1 1 1
<helm-release-name>-cn32c-svc-post-install 1 1 1 1
<helm-release-name>-cn32c-svc-pre-upgrade 1 1 1 1
<helm-release-name>-cn32c-svc-post-upgrade 1 1 1 1
<helm-release-name>-cn32c-svc-pre-rollback 1 1 1 1
<helm-release-name>-cn32c-svc-post-rollback 1 1 1 1
<helm-release-name>-pn32c-svc-pre-install 1 1 1 1
<helm-release-name>-pn32c-svc-post-install 1 1 1 1
<helm-release-name>-pn32c-svc-pre-upgrade 1 1 1 1
<helm-release-name>-pn32c-svc-post-upgrade 1 1 1 1
<helm-release-name>-pn32c-svc-pre-rollback 1 1 1 1
<helm-release-name>-pn32c-svc-post-rollback 1 1 1 1
<helm-release-name>-config-mgr-svc-pre-install 1 1 1 1
<helm-release-name>-config-mgr-svc-post-install 1 1 1 1
<helm-release-name>-config-mgr-svc-pre-upgrade 1 1 1 1
<helm-release-name>-config-mgr-svc-post-upgrade 1 1 1 1
<helm-release-name>-config-mgr-svc-pre-rollback 1 1 1 1
<helm-release-name>-config-mgr-svc-post-rollback 1 1 1 1

Note:

<helm-release-name> is the Helm release name.

2.2 Installation Sequence

This section describes preinstallation, installation, and postinstallation tasks for SEPP (SEPP, Roaming Hub, or Hosted SEPP).

2.2.1 Preinstallation Tasks

Before installing SEPP, perform the tasks described in this section.

2.2.1.1 Downloading SEPP package

Perform the following procedure to download the Oracle Communications Cloud Native Core, Security Edge Protection Proxy (SEPP) release package from My Oracle Support:

  1. Log in to My Oracle Support using the appropriate credentials.
  2. Click the Patches & Updates tab to locate the patch.
  3. In the Patch Search console, click the Product or Family (Advanced) tab.
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field and select the product from the Product drop-down list.
  5. From the Release drop-down, select "Oracle Communications Cloud Native Core Security Edge Protection Proxy <release_number>".

    Where, <release_number> indicates the required release number of Cloud Native Core, Security Edge Protection Proxy.
  6. Click Search. The Patch Advanced Search Results list appears.
  7. Select the required patch from the list. The Patch Details window appears.
  8. Click Download. The File Download window appears.
  9. Click the p********_<release_number>_Tekelec.zip file to download the package, where p******** is the MOS patch number and <release_number> is the release number of SEPP.
2.2.1.2 Pushing the SEPP Images to Customer Docker Registry

The SEPP deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.

The following table lists the docker images for SEPP:

Table 2-19 SEPP Images

Services Image Tag
<helm-release-name>-alternate_route alternate_route 25.2.104
<helm-release-name>-common_config_hook common_config_hook 25.2.100
<helm-release-name>-configurationinit configurationinit 25.2.100
<helm-release-name>-mediation/ocmed-nfmediation mediation/ocmed-nfmediation 25.1.108
<helm-release-name>-nf_test nf_test 25.2.102
<helm-release-name>-nrf-client nrf-client 25.2.102
<helm-release-name>-occnp/oc-app-info occnp/oc-app-info 25.2.102
<helm-release-name>-occnp/oc-config-server occnp/oc-config-server 25.2.102
<helm-release-name>-performance occnp/oc-perf-info 25.2.102
<helm-release-name>-ocdebugtool/ocdebug-tools ocdebugtool/ocdebug-tools 25.2.102
<helm-release-name>-ocegress_gateway ocegress_gateway 25.2.104
<helm-release-name>-ocingress_gateway ocingress_gateway 25.2.104
<helm-release-name>-ocsepp-cn32c-svc ocsepp-cn32c-svc 25.2.100
<helm-release-name>-ocsepp-cn32f-svc ocsepp-cn32f-svc 25.2.100
<helm-release-name>-ocsepp-coherence-svc ocsepp-coherence-svc 25.2.100
<helm-release-name>-ocsepp-config-mgr-svc ocsepp-config-mgr-svc 25.2.100
<helm-release-name>-ocsepp-pn32c-svc ocsepp-pn32c-svc 25.2.100
<helm-release-name>-ocsepp-pn32f-svc ocsepp-pn32f-svc 25.2.100
<helm-release-name>-ocsepp-pre-install-hook ocsepp-pre-install-hook 25.2.100
<helm-release-name>-ocsepp-update-db ocsepp-update-db 25.2.100
<helm-release-name>-ocsepp-configurationupdate configurationupdate 25.2.100

To push the images to the registry:

  1. Navigate to the location where you want to install SEPP. Unzip the SEPP release package to retrieve the following CSAR package.

    The SEPP package is as follows:

    ReleaseName_csar_Releasenumber.zip

    Where:
    • ReleaseName is a name that is used to track this installation instance.
    • Releasenumber is the release number.

      For example:
    ocsepp_csar_25_2_100_0_0.zip
  2. Unzip the SEPP package file to get SEPP docker image tar file:
    unzip ReleaseName_csar_Releasenumber.zip
    For example:
    unzip ocsepp_csar_25_2_100_0_0.zip
  3. The extracted ocsepp_csar_25_2_100_0_0 package consists of the following:
    ├── Definitions
    │   ├── ocsepp_cne_compatibility.yaml
    │   └── ocsepp.yaml
    ├── Files
    │   ├── alternate_route-25.2.104.tar
    │   ├── ChangeLog.txt
    │   ├── common_config_hook-25.2.100.tar
    │   ├── configurationinit-25.2.100.tar
    │   ├── Helm
    │   │   ├── ocsepp-25.2.100.tgz
    │   │   ├── ocsepp-network-policy-25.2.100.tgz
    │   │   └── ocsepp-servicemesh-config-25.2.100.tgz
    │   ├── Licenses
    │   ├── mediation-ocmed-nfmediation-25.1.108.tar
    │   ├── nf_test-25.2.102.tar
    │   ├── nrf-client-25.2.102.tar
    │   ├── occnp-oc-app-info-25.2.102.tar
    │   ├── occnp-oc-config-server-25.2.102.tar
    │   ├── occnp-oc-perf-info-25.2.102.tar
    │   ├── ocdebugtool-ocdebug-tools-25.2.102.tar
    │   ├── ocegress_gateway-25.2.104.tar
    │   ├── ocingress_gateway-25.2.104.tar
    │   ├── ocsepp-cn32c-svc-25.2.100.tar
    │   ├── ocsepp-cn32f-svc-25.2.100.tar
    │   ├── ocsepp-coherence-svc-25.2.100.tar
    │   ├── ocsepp-config-mgr-svc-25.2.100.tar
    │   ├── ocsepp-pn32c-svc-25.2.100.tar
    │   ├── ocsepp-pn32f-svc-25.2.100.tar
    │   ├── ocsepp-pre-install-hook-25.2.100.tar
    │   ├── ocsepp-update-db-25.2.100.tar
    │   ├── Oracle.cert
    │   └── Tests
    ├── ocsepp.mf
    ├── Scripts
    │   ├── ocsepp_alertrules_promha_25.2.100.yaml
    │   ├── ocsepp_configuration_openapi_25.2.100.yaml
    │   ├── ocsepp_custom_values_25.2.100.yaml
    │   ├── ocsepp_custom_values_roaming_hub_25.2.100.yaml
    │   ├── ocsepp_dashboard_25.2.100.json
    │   ├── ocsepp_dashboard_promha_25.2.100.json
    │   ├── ocsepp_network_policies_custom_values_25.2.100.yaml
    │   ├── ocsepp-servicemesh-config-25.2.100.tgz
    │   ├── ocsepp_alertrules_25.2.100.yaml
    │   ├── ocsepp_mib_25.2.100.mib
    │   ├── ocsepp_mib_tc_25.2.100.mib
    │   ├── toplevel.mib
    │   ├── ocsepp_oci_alertrules_25.2.100.zip
    │   ├── ocsepp_oci_dashboard_25.2.100.json
    │   ├── ocsepp_rollback_schema_25.2.100.sql
    │   ├── ocsepp_dbtier_25.2.100_custom_values_25.2.100.yaml
    │   └── ocsepp_single_service_account_config_25.2.100.yaml
    └── TOSCA-Metadata
        └── TOSCA.meta
  4. Open the Files folder and, based on the container engine installed, run one of the following commands to load the SEPP images. Load all the images listed in Table 2-19 SEPP Images:
    podman load --input /IMAGE_PATH/<microservicename-releasenumber>.tar
    docker load --input /IMAGE_PATH/<microservicename-releasenumber>.tar
    Where, IMAGE_PATH is the location where the SEPP docker image tar file is archived.

    Sample command:
    podman load --input /IMAGE_PATH/ocsepp-pn32c-svc-25.2.100.tar

    Note:

    The docker or podman load command must be run separately for each tar file (image archive).
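
    Because each archive must be loaded individually, a small shell loop can process every tar file in one pass. This sketch assumes that all image archives sit directly under the extracted Files directory:

    # Load every image archive under Files/ with Podman
    for tarfile in Files/*.tar; do
      podman load --input "$tarfile"
    done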
  5. Run one of the following commands to verify that the image is loaded:
    docker images | grep ocsepp
    podman images | grep ocsepp

    Note:

    Verify the list of images in the output against the list in Table 2-19 SEPP Images. If the lists do not match, reload the image tar files.
  6. Run one of the following commands to tag the images to the registry:
    docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    Sample Tag commands:
    
    podman tag alternate_route:25.2.104 <customer repo>/alternate_route:25.2.104
    podman tag common_config_hook:25.2.100 <customer repo>/common_config_hook:25.2.100
    podman tag configurationinit:25.2.100 <customer repo>/configurationinit:25.2.100
    podman tag configurationupdate:25.2.100 <customer repo>/configurationupdate:25.2.100
    podman tag mediation/ocmed-nfmediation:25.1.108 <customer repo>/mediation/ocmed-nfmediation:25.1.108
    podman tag nf_test:25.2.102 <customer repo>/nf_test:25.2.102
    podman tag nrf-client:25.2.102 <customer repo>/nrf-client:25.2.102
    podman tag occnp/oc-app-info:25.2.102 <customer repo>/occnp/oc-app-info:25.2.102
    podman tag occnp/oc-config-server:25.2.102 <customer repo>/occnp/oc-config-server:25.2.102
    podman tag occnp/oc-perf-info:25.2.102 <customer repo>/occnp/oc-perf-info:25.2.102
    podman tag ocdebugtool/ocdebug-tools:25.2.102 <customer repo>/ocdebugtool/ocdebug-tools:25.2.102
    podman tag ocegress_gateway:25.2.104 <customer repo>/ocegress_gateway:25.2.104
    podman tag ocingress_gateway:25.2.104 <customer repo>/ocingress_gateway:25.2.104
    podman tag ocsepp-cn32c-svc:25.2.100 <customer repo>/ocsepp-cn32c-svc:25.2.100
    podman tag ocsepp-cn32f-svc:25.2.100 <customer repo>/ocsepp-cn32f-svc:25.2.100
    podman tag ocsepp-coherence-svc:25.2.100 <customer repo>/ocsepp-coherence-svc:25.2.100
    podman tag ocsepp-config-mgr-svc:25.2.100 <customer repo>/ocsepp-config-mgr-svc:25.2.100
    podman tag ocsepp-pn32c-svc:25.2.100 <customer repo>/ocsepp-pn32c-svc:25.2.100
    podman tag ocsepp-pn32f-svc:25.2.100 <customer repo>/ocsepp-pn32f-svc:25.2.100
    podman tag ocsepp-pre-install-hook:25.2.100 <customer repo>/ocsepp-pre-install-hook:25.2.100
    podman tag ocsepp-update-db:25.2.100 <customer repo>/ocsepp-update-db:25.2.100
  7. Run one of the following commands to push the image to the registry:
    docker push <docker-repo>/<image-name>:<image-tag> 
    podman push <docker-repo>/<image-name>:<image-tag>
Sample push commands:

podman push occne-repo-host:5000/alternate_route:25.2.104
podman push occne-repo-host:5000/common_config_hook:25.2.100
podman push occne-repo-host:5000/configurationinit:25.2.100
podman push occne-repo-host:5000/configurationupdate:25.2.100
podman push occne-repo-host:5000/mediation/ocmed-nfmediation:25.1.108
podman push occne-repo-host:5000/nf_test:25.2.102
podman push occne-repo-host:5000/nrf-client:25.2.102
podman push occne-repo-host:5000/occnp/oc-app-info:25.2.102
podman push occne-repo-host:5000/occnp/oc-config-server:25.2.102
podman push occne-repo-host:5000/occnp/oc-perf-info:25.2.102
podman push occne-repo-host:5000/ocdebugtool/ocdebug-tools:25.2.102
podman push occne-repo-host:5000/ocegress_gateway:25.2.104
podman push occne-repo-host:5000/ocingress_gateway:25.2.104
podman push occne-repo-host:5000/ocsepp-cn32c-svc:25.2.100
podman push occne-repo-host:5000/ocsepp-cn32f-svc:25.2.100
podman push occne-repo-host:5000/ocsepp-coherence-svc:25.2.100
podman push occne-repo-host:5000/ocsepp-config-mgr-svc:25.2.100
podman push occne-repo-host:5000/ocsepp-pn32c-svc:25.2.100
podman push occne-repo-host:5000/ocsepp-pn32f-svc:25.2.100
podman push occne-repo-host:5000/ocsepp-pre-install-hook:25.2.100
podman push occne-repo-host:5000/ocsepp-update-db:25.2.100

Note:

It is recommended to configure the Docker certificate before running the push command so that the customer registry can be accessed over HTTPS; otherwise, the docker push command may fail.
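
For example, with the default configuration, Docker and Podman look up a registry's CA certificate in the following locations; the registry host, port, and certificate file name below are illustrative placeholders:

# Podman
mkdir -p /etc/containers/certs.d/occne-repo-host:5000
cp <registry-ca>.crt /etc/containers/certs.d/occne-repo-host:5000/ca.crt

# Docker
mkdir -p /etc/docker/certs.d/occne-repo-host:5000
cp <registry-ca>.crt /etc/docker/certs.d/occne-repo-host:5000/ca.crt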
2.2.1.3 Pushing the Roaming Hub or Hosted SEPP Images to Customer Docker Registry

The Roaming Hub or Hosted SEPP deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.

The following table lists the docker images for Roaming Hub or Hosted SEPP:

Table 2-20 Roaming Hub or Hosted SEPP

Services Image Tag
<helm-release-name>-alternate_route alternate_route 25.2.104
<helm-release-name>-common_config_hook common_config_hook 25.2.100
<helm-release-name>-configurationinit configurationinit 25.2.100
<helm-release-name>-mediation/ocmed-nfmediation mediation/ocmed-nfmediation 25.1.108
<helm-release-name>-nf_test nf_test 25.2.102
<helm-release-name>-performance occnp/oc-perf-info 25.2.102
<helm-release-name>-ocdebugtool/ocdebug-tools ocdebugtool/ocdebug-tools 25.2.102
<helm-release-name>-ocegress_gateway ocegress_gateway 25.2.104
<helm-release-name>-ocingress_gateway ocingress_gateway 25.2.104
<helm-release-name>-ocsepp-cn32c-svc ocsepp-cn32c-svc 25.2.100
<helm-release-name>-ocsepp-cn32f-svc ocsepp-cn32f-svc 25.2.100
<helm-release-name>-ocsepp-coherence-svc ocsepp-coherence-svc 25.2.100
<helm-release-name>-ocsepp-config-mgr-svc ocsepp-config-mgr-svc 25.2.100
<helm-release-name>-ocsepp-pn32c-svc ocsepp-pn32c-svc 25.2.100
<helm-release-name>-ocsepp-pn32f-svc ocsepp-pn32f-svc 25.2.100
<helm-release-name>-ocsepp-pre-install-hook ocsepp-pre-install-hook 25.2.100
<helm-release-name>-ocsepp-update-db ocsepp-update-db 25.2.100
<helm-release-name>-ocsepp-configurationupdate configurationupdate 25.2.100

Note:

<helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is "ocsepp", the cn32f-svc microservice name is "ocsepp-cn32f-svc".

To push the images to the registry:

  1. Navigate to the location where you want to install SEPP. Unzip the SEPP release package (<p********>_<release_number>_Tekelec.zip) to retrieve the following CSAR package.

    The SEPP package is as follows:

    ReleaseName_csar_Releasenumber.zip

    Where:
    • ReleaseName is a name that is used to track this installation instance.
    • Releasenumber is the release number.

      For example:
    ocsepp_csar_25_2_100_0_0.zip
  2. Unzip the SEPP package file to get SEPP docker image tar file:
    unzip ReleaseName_csar_Releasenumber.zip
    For example:
    unzip ocsepp_csar_25_2_100_0_0.zip
  3. The extracted ocsepp_csar_25_2_100_0_0 package consists of the following:
    ├── Definitions
    │   ├── ocsepp_cne_compatibility.yaml
    │   └── ocsepp.yaml
    ├── Files
    │   ├── alternate_route-25.2.104.tar
    │   ├── ChangeLog.txt
    │   ├── common_config_hook-25.2.100.tar
    │   ├── configurationinit-25.2.100.tar
    │   ├── Helm
    │   │   ├── ocsepp-25.2.100.tgz
    │   │   ├── ocsepp-network-policy-25.2.100.tgz
    │   │   └── ocsepp-servicemesh-config-25.2.100.tgz
    │   ├── Licenses
    │   ├── mediation-ocmed-nfmediation-25.1.108.tar
    │   ├── nf_test-25.2.102.tar
    │   ├── nrf-client-25.2.102.tar
    │   ├── occnp-oc-app-info-25.2.102.tar
    │   ├── occnp-oc-config-server-25.2.102.tar
    │   ├── occnp-oc-perf-info-25.2.102.tar
    │   ├── ocdebugtool-ocdebug-tools-25.2.102.tar
    │   ├── ocegress_gateway-25.2.104.tar
    │   ├── ocingress_gateway-25.2.104.tar
    │   ├── ocsepp-cn32c-svc-25.2.100.tar
    │   ├── ocsepp-cn32f-svc-25.2.100.tar
    │   ├── ocsepp-coherence-svc-25.2.100.tar
    │   ├── ocsepp-config-mgr-svc-25.2.100.tar
    │   ├── ocsepp-pn32c-svc-25.2.100.tar
    │   ├── ocsepp-pn32f-svc-25.2.100.tar
    │   ├── ocsepp-pre-install-hook-25.2.100.tar
    │   ├── ocsepp-update-db-25.2.100.tar
    │   ├── Oracle.cert
    │   └── Tests
    ├── ocsepp.mf
    ├── Scripts
    │   ├── ocsepp_alertrules_promha_25.2.100.yaml
    │   ├── ocsepp_configuration_openapi_25.2.100.yaml
    │   ├── ocsepp_custom_values_25.2.100.yaml
    │   ├── ocsepp_custom_values_roaming_hub_25.2.100.yaml
    │   ├── ocsepp_dashboard_25.2.100.json
    │   ├── ocsepp_dashboard_promha_25.2.100.json
    │   ├── ocsepp_network_policies_custom_values_25.2.100.yaml
    │   ├── ocsepp-servicemesh-config-25.2.100.tgz
    │   ├── ocsepp_alertrules_25.2.100.yaml
    │   ├── ocsepp_mib_25.2.100.mib
    │   ├── ocsepp_mib_tc_25.2.100.mib
    │   ├── toplevel.mib
    │   ├── ocsepp_oci_alertrules_25.2.100.zip
    │   ├── ocsepp_oci_dashboard_25.2.100.json
    │   ├── ocsepp_rollback_schema_25.2.100.sql
    │   ├── ocsepp_dbtier_25.2.100_custom_values_25.2.100.yaml
    │   └── ocsepp_single_service_account_config_25.2.100.yaml
    └── TOSCA-Metadata
        └── TOSCA.meta
  4. Open the Files folder and, based on the container engine installed, run one of the following commands to load the SEPP images. Load all the images listed in Table 2-20 Roaming Hub or Hosted SEPP:
    podman load --input /IMAGE_PATH/<microservicename-releasenumber>.tar
    docker load --input /IMAGE_PATH/<microservicename-releasenumber>.tar
    Where, IMAGE_PATH is the location where the SEPP docker image tar file is archived.

    Sample command:
    podman load --input /IMAGE_PATH/ocsepp-pn32c-svc-25.2.100.tar

    Note:

    The docker or podman load command must be run separately for each tar file (image archive).
  5. Run one of the following commands to verify that the image is loaded:
    docker images | grep ocsepp
    podman images | grep ocsepp

    Note:

    Verify the list of images in the output against the list in Table 2-20 Roaming Hub or Hosted SEPP. If the lists do not match, reload the image tar files.
  6. Run one of the following commands to tag the images to the registry:
    docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    Sample Tag commands:
    
    podman tag alternate_route:25.2.104 <customer repo>/alternate_route:25.2.104
    podman tag common_config_hook:25.2.100 <customer repo>/common_config_hook:25.2.100
    podman tag configurationinit:25.2.100 <customer repo>/configurationinit:25.2.100
    podman tag configurationupdate:25.2.100 <customer repo>/configurationupdate:25.2.100
    podman tag mediation/ocmed-nfmediation:25.1.108 <customer repo>/mediation/ocmed-nfmediation:25.1.108
    podman tag nf_test:25.2.102 <customer repo>/nf_test:25.2.102
    podman tag nrf-client:25.2.102 <customer repo>/nrf-client:25.2.102
    podman tag occnp/oc-app-info:25.2.102 <customer repo>/occnp/oc-app-info:25.2.102
    podman tag occnp/oc-config-server:25.2.102 <customer repo>/occnp/oc-config-server:25.2.102
    podman tag occnp/oc-perf-info:25.2.102 <customer repo>/occnp/oc-perf-info:25.2.102
    podman tag ocdebugtool/ocdebug-tools:25.2.102 <customer repo>/ocdebugtool/ocdebug-tools:25.2.102
    podman tag ocegress_gateway:25.2.104 <customer repo>/ocegress_gateway:25.2.104
    podman tag ocingress_gateway:25.2.104 <customer repo>/ocingress_gateway:25.2.104
    podman tag ocsepp-cn32c-svc:25.2.100 <customer repo>/ocsepp-cn32c-svc:25.2.100
    podman tag ocsepp-cn32f-svc:25.2.100 <customer repo>/ocsepp-cn32f-svc:25.2.100
    podman tag ocsepp-coherence-svc:25.2.100 <customer repo>/ocsepp-coherence-svc:25.2.100
    podman tag ocsepp-config-mgr-svc:25.2.100 <customer repo>/ocsepp-config-mgr-svc:25.2.100
    podman tag ocsepp-pn32c-svc:25.2.100 <customer repo>/ocsepp-pn32c-svc:25.2.100
    podman tag ocsepp-pn32f-svc:25.2.100 <customer repo>/ocsepp-pn32f-svc:25.2.100
    podman tag ocsepp-pre-install-hook:25.2.100 <customer repo>/ocsepp-pre-install-hook:25.2.100
    podman tag ocsepp-update-db:25.2.100 <customer repo>/ocsepp-update-db:25.2.100
  7. Run one of the following commands to push the image to the registry:
    docker push <docker-repo>/<image-name>:<image-tag> 
    podman push <docker-repo>/<image-name>:<image-tag>
Sample push commands:

podman push occne-repo-host:5000/alternate_route:25.2.104
podman push occne-repo-host:5000/common_config_hook:25.2.100
podman push occne-repo-host:5000/configurationinit:25.2.100
podman push occne-repo-host:5000/configurationupdate:25.2.100
podman push occne-repo-host:5000/mediation/ocmed-nfmediation:25.1.108
podman push occne-repo-host:5000/nf_test:25.2.102
podman push occne-repo-host:5000/nrf-client:25.2.102
podman push occne-repo-host:5000/occnp/oc-app-info:25.2.102
podman push occne-repo-host:5000/occnp/oc-config-server:25.2.102
podman push occne-repo-host:5000/occnp/oc-perf-info:25.2.102
podman push occne-repo-host:5000/ocdebugtool/ocdebug-tools:25.2.102
podman push occne-repo-host:5000/ocegress_gateway:25.2.104
podman push occne-repo-host:5000/ocingress_gateway:25.2.104
podman push occne-repo-host:5000/ocsepp-cn32c-svc:25.2.100
podman push occne-repo-host:5000/ocsepp-cn32f-svc:25.2.100
podman push occne-repo-host:5000/ocsepp-coherence-svc:25.2.100
podman push occne-repo-host:5000/ocsepp-config-mgr-svc:25.2.100
podman push occne-repo-host:5000/ocsepp-pn32c-svc:25.2.100
podman push occne-repo-host:5000/ocsepp-pn32f-svc:25.2.100
podman push occne-repo-host:5000/ocsepp-pre-install-hook:25.2.100
podman push occne-repo-host:5000/ocsepp-update-db:25.2.100

Note:

  • It is recommended to configure the Docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the docker push command may fail.
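  • For example, Docker reads a registry CA certificate from /etc/docker/certs.d/<registry>/ca.crt, and podman reads from /etc/containers/certs.d/<registry>/ca.crt. The following is a minimal sketch, assuming the registry occne-repo-host:5000 and a hypothetical CA file named registry-ca.crt:

    sudo mkdir -p /etc/docker/certs.d/occne-repo-host:5000
    sudo cp registry-ca.crt /etc/docker/certs.d/occne-repo-host:5000/ca.crt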
2.2.1.4 Pushing the SEPP Images to OCI Docker Registry

The SEPP deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.

The following table lists the Docker images for SEPP:

Table 2-21 SEPP Images

Services Image Tag
<helm-release-name>-alternate_route alternate_route 25.2.104
<helm-release-name>-common_config_hook common_config_hook 25.2.100
<helm-release-name>-configurationinit configurationinit 25.2.100
<helm-release-name>-mediation/ocmed-nfmediation mediation/ocmed-nfmediation 25.1.108
<helm-release-name>-nf_test nf_test 25.2.102
<helm-release-name>-nrf-client nrf-client 25.2.102
<helm-release-name>-occnp/oc-app-info occnp/oc-app-info 25.2.102
<helm-release-name>-occnp/oc-config-server occnp/oc-config-server 25.2.102
<helm-release-name>-performance occnp/oc-perf-info 25.2.102
<helm-release-name>-ocdebugtool/ocdebug-tools ocdebugtool/ocdebug-tools 25.2.102
<helm-release-name>-ocegress_gateway ocegress_gateway 25.2.104
<helm-release-name>-ocingress_gateway ocingress_gateway 25.2.104
<helm-release-name>-ocsepp-cn32c-svc ocsepp-cn32c-svc 25.2.100
<helm-release-name>-ocsepp-cn32f-svc ocsepp-cn32f-svc 25.2.100
<helm-release-name>-ocsepp-coherence-svc ocsepp-coherence-svc 25.2.100
<helm-release-name>-ocsepp-config-mgr-svc ocsepp-config-mgr-svc 25.2.100
<helm-release-name>-ocsepp-pn32c-svc ocsepp-pn32c-svc 25.2.100
<helm-release-name>-ocsepp-pn32f-svc ocsepp-pn32f-svc 25.2.100
<helm-release-name>-ocsepp-pre-install-hook ocsepp-pre-install-hook 25.2.100
<helm-release-name>-ocsepp-update-db ocsepp-update-db 25.2.100
<helm-release-name>-ocsepp-configurationupdate configurationupdate 25.2.100

To push the images to the registry:

  1. Navigate to the location where you want to install SEPP. Unzip the SEPP release package (<p********>_<release_number>_Tekelec.zip) to retrieve the following CSAR package.

    The SEPP package is as follows:

    ReleaseName_csar_Releasenumber.zip

    Where,
    • ReleaseName is a name that is used to track this installation instance.
    • Releasenumber is the release number.

      For example:
    ocsepp_csar_25_2_100_0_0.zip
  2. Unzip the SEPP package file to get SEPP docker image tar file:
    unzip ocsepp_csar_Releasenumber.zip
    For example:
    unzip ocsepp_csar_25_2_100_0_0.zip
  3. Extracting ocsepp_csar_25_2_100_0_0.zip produces a directory with the following contents:
    ├── Definitions
    │   ├── ocsepp_cne_compatibility.yaml
    │   └── ocsepp.yaml
    ├── Files
    │   ├── alternate_route-25.2.104.tar
    │   ├── ChangeLog.txt
    │   ├── common_config_hook-25.2.100.tar
    │   ├── configurationinit-25.2.100.tar
    │   ├── Helm
    │   │   ├── ocsepp-25.2.100.tgz
    │   │   ├── ocsepp-network-policy-25.2.100.tgz
    │   │   └── ocsepp-servicemesh-config-25.2.100.tgz
    │   ├── Licenses
    │   ├── mediation-ocmed-nfmediation-25.1.108.tar
    │   ├── nf_test-25.2.102.tar
    │   ├── nrf-client-25.2.102.tar
    │   ├── occnp-oc-app-info-25.2.102.tar
    │   ├── occnp-oc-config-server-25.2.102.tar
    │   ├── occnp-oc-perf-info-25.2.102.tar
    │   ├── ocdebugtool-ocdebug-tools-25.2.102.tar
    │   ├── ocegress_gateway-25.2.104.tar
    │   ├── ocingress_gateway-25.2.104.tar
    │   ├── ocsepp-cn32c-svc-25.2.100.tar
    │   ├── ocsepp-cn32f-svc-25.2.100.tar
    │   ├── ocsepp-coherence-svc-25.2.100.tar
    │   ├── ocsepp-config-mgr-svc-25.2.100.tar
    │   ├── ocsepp-pn32c-svc-25.2.100.tar
    │   ├── ocsepp-pn32f-svc-25.2.100.tar
    │   ├── ocsepp-pre-install-hook-25.2.100.tar
    │   ├── ocsepp-update-db-25.2.100.tar
    │   ├── Oracle.cert
    │   └── Tests
    ├── ocsepp.mf
    ├── Scripts
    │   ├── ocsepp_alertrules_promha_25.2.100.yaml
    │   ├── ocsepp_configuration_openapi_25.2.100.yaml
    │   ├── ocsepp_custom_values_25.2.100.yaml
    │   ├── ocsepp_custom_values_roaming_hub_25.2.100.yaml
    │   ├── ocsepp_dashboard_25.2.100.json
    │   ├── ocsepp_dashboard_promha_25.2.100.json
    │   ├── ocsepp_network_policies_custom_values_25.2.100.yaml
    │   ├── ocsepp-servicemesh-config-25.2.100.tgz
    │   ├── ocsepp_alertrules_25.2.100.yaml
    │   ├── ocsepp_mib_25.2.100.mib
    │   ├── ocsepp_mib_tc_25.2.100.mib
    │   ├── toplevel.mib
    │   ├── ocsepp_oci_alertrules_25.2.100.zip
    │   ├── ocsepp_oci_dashboard_25.2.100.json
    │   ├── ocsepp_rollback_schema_25.2.100.sql
    │   ├── ocsepp_dbtier_25.2.100_custom_values_25.2.100.yaml
    │   └── ocsepp_single_service_account_config_25.2.100.yaml
    └── TOSCA-Metadata
        └── TOSCA.meta
  4. Open the Files folder and, based on the container engine installed, run one of the following commands to load the SEPP images. Load all the images listed in the SEPP Images table:
    podman load --input /IMAGE_PATH/<microservicename-releasenumber>.tar
    docker load --input /IMAGE_PATH/<microservicename-releasenumber>.tar
    Where, IMAGE_PATH is the location where the SEPP docker image tar file is archived.

    Sample command:
    podman load --input /IMAGE_PATH/ocsepp-pn32c-svc-25.2.100.tar

    Note:

    The docker or podman load command must be run separately for each image tar file.
  5. Run one of the following commands to verify that the image is loaded:
    docker images | grep ocsepp
    podman images | grep ocsepp

    Note:

    Verify the list of images shown in the output with the list of images shown in the table SEPP Images. If the list does not match, reload the image tar file.
  6. Run the following commands to log in to the OCI Docker registry:
    podman login -u <REGISTRY_USERNAME> -p <REGISTRY_PASSWORD> <REGISTRY_NAME>
    docker login -u <REGISTRY_USERNAME> -p <REGISTRY_PASSWORD> <REGISTRY_NAME>
    where,
    • REGISTRY_NAME is <Region_Key>.ocir.io. In OCI, each region is associated with a key. For details about the <Region_Key>, refer to Regions and Availability Domains.
    • REGISTRY_USERNAME is <Object Storage Namespace>/<identity_domain>/email_id.
    • REGISTRY_PASSWORD is the Auth token generated by the user.
    • <Object Storage Namespace> is configured in the OCI Console. To access it, navigate to OCI Console > Governance & Administration > Account Management > Tenancy Details > Object Storage Namespace.
    • <Identity Domain> is the domain in which the user is currently present.
    For more information about OCIR configuration and creating an auth token, see Cloud Native Core OCI Adaptor, NF Deployment in OCI Guide.
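    For illustration, a login with hypothetical values (region key fra, tenancy namespace mytenancy, identity domain Default) might look as follows; substitute your own values:

    podman login -u 'mytenancy/Default/jane.doe@example.com' -p '<auth_token>' fra.ocir.io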
  7. Run one of the following commands to tag the images to the registry:
    docker tag <image-name>:<image-tag> <REGISTRY_NAME>/<Object Storage Namespace>/<image-name>:<image-tag>
    podman tag <image-name>:<image-tag> <REGISTRY_NAME>/<Object Storage Namespace>/<image-name>:<image-tag>
    Sample Tag commands:
    
    podman tag alternate_route:25.2.104 <REGISTRY_NAME>/<Object Storage Namespace>/alternate_route:25.2.104
    podman tag common_config_hook:25.2.100 <REGISTRY_NAME>/<Object Storage Namespace>/common_config_hook:25.2.100
    podman tag configurationinit:25.2.100 <REGISTRY_NAME>/<Object Storage Namespace>/configurationinit:25.2.100
    podman tag configurationupdate:25.2.100 <REGISTRY_NAME>/<Object Storage Namespace>/configurationupdate:25.2.100
    podman tag mediation/ocmed-nfmediation:25.1.108 <REGISTRY_NAME>/<Object Storage Namespace>/mediation/ocmed-nfmediation:25.1.108
    podman tag nf_test:25.2.102 <REGISTRY_NAME>/<Object Storage Namespace>/nf_test:25.2.102
    podman tag nrf-client:25.2.102 <REGISTRY_NAME>/<Object Storage Namespace>/nrf-client:25.2.102
    podman tag occnp/oc-app-info:25.2.102 <REGISTRY_NAME>/<Object Storage Namespace>/occnp/oc-app-info:25.2.102
    podman tag occnp/oc-config-server:25.2.102 <REGISTRY_NAME>/<Object Storage Namespace>/occnp/oc-config-server:25.2.102
    podman tag occnp/oc-perf-info:25.2.102 <REGISTRY_NAME>/<Object Storage Namespace>/occnp/oc-perf-info:25.2.102
    podman tag ocdebugtool/ocdebug-tools:25.2.102 <REGISTRY_NAME>/<Object Storage Namespace>/ocdebugtool/ocdebug-tools:25.2.102
    podman tag ocegress_gateway:25.2.104 <REGISTRY_NAME>/<Object Storage Namespace>/ocegress_gateway:25.2.104
    podman tag ocingress_gateway:25.2.104 <REGISTRY_NAME>/<Object Storage Namespace>/ocingress_gateway:25.2.104
    podman tag ocsepp-cn32c-svc:25.2.100 <REGISTRY_NAME>/<Object Storage Namespace>/ocsepp-cn32c-svc:25.2.100
    podman tag ocsepp-cn32f-svc:25.2.100 <REGISTRY_NAME>/<Object Storage Namespace>/ocsepp-cn32f-svc:25.2.100
    podman tag ocsepp-coherence-svc:25.2.100 <REGISTRY_NAME>/<Object Storage Namespace>/ocsepp-coherence-svc:25.2.100
    podman tag ocsepp-config-mgr-svc:25.2.100 <REGISTRY_NAME>/<Object Storage Namespace>/ocsepp-config-mgr-svc:25.2.100
    podman tag ocsepp-pn32c-svc:25.2.100 <REGISTRY_NAME>/<Object Storage Namespace>/ocsepp-pn32c-svc:25.2.100
    podman tag ocsepp-pn32f-svc:25.2.100 <REGISTRY_NAME>/<Object Storage Namespace>/ocsepp-pn32f-svc:25.2.100
    podman tag ocsepp-pre-install-hook:25.2.100 <REGISTRY_NAME>/<Object Storage Namespace>/ocsepp-pre-install-hook:25.2.100
    podman tag ocsepp-update-db:25.2.100 <REGISTRY_NAME>/<Object Storage Namespace>/ocsepp-update-db:25.2.100
  8. Run one of the following commands to push the image to the registry:
    docker push <REGISTRY_NAME>/<Object Storage Namespace>/<image-name>:<image-tag> 
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/<image-name>:<image-tag>
    Sample push commands:
    
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/alternate_route:25.2.104
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/common_config_hook:25.2.100
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/configurationinit:25.2.100
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/configurationupdate:25.2.100
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/mediation/ocmed-nfmediation:25.1.108
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/nf_test:25.2.102
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/nrf-client:25.2.102
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/occnp/oc-app-info:25.2.102
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/occnp/oc-config-server:25.2.102
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/occnp/oc-perf-info:25.2.102
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/ocdebugtool/ocdebug-tools:25.2.102
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/ocegress_gateway:25.2.104
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/ocingress_gateway:25.2.104
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/ocsepp-cn32c-svc:25.2.100
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/ocsepp-cn32f-svc:25.2.100
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/ocsepp-coherence-svc:25.2.100
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/ocsepp-config-mgr-svc:25.2.100
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/ocsepp-pn32c-svc:25.2.100
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/ocsepp-pn32f-svc:25.2.100
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/ocsepp-pre-install-hook:25.2.100
    podman push <REGISTRY_NAME>/<Object Storage Namespace>/ocsepp-update-db:25.2.100
  9. All the image repositories must be public. Perform the following steps to make all image repositories public:
    1. Log in to the OCI Console and navigate to OCI Console > Developer Services > Containers & Artifacts > Container Registry.
    2. Select the root Compartment.
    3. In the Repositories and Images Search option, the images are listed. Select each image and click Change to Public. This step must be performed for all the images sequentially.
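    Alternatively, if the OCI CLI is installed and configured, a repository can be made public from the command line. The following is a sketch with hypothetical OCIDs:

    # List container repositories in the compartment to obtain their OCIDs
    oci artifacts container repository list --compartment-id <compartment_OCID>
    # Mark a repository as public
    oci artifacts container repository update --repository-id <repository_OCID> --is-public true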

2.2.1.5 Verifying and Creating Namespace
This section explains how to verify and create a namespace in the system.

Note:

This is a mandatory procedure; run it before proceeding further with the installation. The namespace created or verified in this procedure is an input for the subsequent procedures.
To verify and create a namespace:
  1. Run the following command to verify whether the required namespace already exists in the system:
    kubectl get namespaces 

    In the output of the above command, if the namespace exists, continue with the Creating Service Account, Role and RoleBinding section.

  2. If the required namespace is not available, create the namespace using the following command:
    kubectl create namespace <required namespace>
    Example:
    kubectl create namespace seppsvc
    Sample output:
    namespace/seppsvc created
  3. Update the namespace in the ocsepp_custom_values_<version>.yaml file with the namespace created in the previous step.

    Example:

    If the namespace seppsvc was created with the following command:
    kubectl create namespace seppsvc
    update the parameters as follows:
    global:
      nameSpace: seppsvc # Namespace where the secret is deployed

Naming Convention for Namespace

The namespace should:

  • start and end with an alphanumeric character.
  • contain 63 characters or less.
  • contain only alphanumeric characters or '-'.
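
A quick shell check of these rules (a sketch; the pattern mirrors the Kubernetes RFC 1123 label convention, which additionally requires lowercase characters):

  echo "seppsvc" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$' && echo "valid" || echo "invalid"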

Note:

It is recommended to avoid using the prefix kube- when creating a namespace. The prefix is reserved for Kubernetes system namespaces.
2.2.1.6 Creating Service Account, Role, and RoleBinding

This section explains how to create a single service account, role, and rolebinding that can be used by all the microservices of SEPP. This procedure is optional and is needed only if the user wants all SEPP microservices to share a single service account, role, and rolebinding.

  1. Create an OCSEPP resource file:
    vi <ocsepp-resource-file>

    Example:

    vi ocsepp_single_service_account_config_<version>.yaml
  2. Update the ocsepp_single_service_account_config_<version>.yaml file with the correct namespace by replacing seppsvc with the user-defined SEPP namespace:

    Note:

    The user has the option to update the names of the service account, role, and rolebinding.

    Note:

    If SEPP is deployed with ASM, the user must add the following annotation in the single service account: certificate.aspenmesh.io/customFields: '{ "SAN": { "DNS": [<SEPP inter PLMN FQDN>, <SEPP intra PLMN FQDN>], "URI": [<SEPP inter PLMN FQDN>, <SEPP intra PLMN FQDN>] } }'
    Example:
    
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      annotations:
        certificate.aspenmesh.io/customFields: '{ "SAN": { "DNS": ["sepp1.inter.oracle.com", "sepp1.intra.oracle.com"], "URI": ["sepp1.inter.oracle.com", "sepp1.intra.oracle.com"] } }'
      name: sepp-sa
      namespace: seppsvc
    
    
    #sa-role-rolebinding.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      annotations: {}
      labels: {}
      name: sepp-sa
      namespace: seppsvc
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      annotations: {}
      labels: {}
      name: sepp-role
      namespace: seppsvc
    rules:
    - apiGroups:
      - ""
      resources:
      - services
      - configmaps
      - pods
      - secrets
      - endpoints
      - persistentvolumeclaims
      - pods/exec
      - serviceaccounts
      verbs:
      - get
      - watch
      - list
      - update
      - delete
      - deletecollection
      - create
      - patch
    # for cnDBtier
    - apiGroups:
      - apps
      resources:
      - deployments
      - statefulsets
      - replicasets
      verbs:
      - get
      - watch
      - list
      - update
      - delete
      - create
      - patch
    - apiGroups:
      - autoscaling
      resources:
      - horizontalpodautoscalers
      verbs:
      - get
      - watch
      - list
      - update
    # for job deletion
    - apiGroups:
      - batch
      resources:
      - jobs
      verbs:
      - get
      - delete
    - apiGroups:
      - ""
      resources:
      - events
      - pods/log
      verbs:
      - get
      - watch
      - list
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      annotations: {}
      labels: {}
      name: sepp-rolebinding
      namespace: seppsvc
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: sepp-role
    subjects:
    - kind: ServiceAccount
      name: sepp-sa
      namespace: seppsvc
    
    
  3. Run the following command to create service account, role, and role binding:
    $ kubectl -n <SEPP namespace> create -f ocsepp_single_service_account_config_<version>.yaml
    For example:
    $ kubectl -n seppsvc create -f ocsepp_single_service_account_config_<version>.yaml
  4. Update the serviceAccountName parameter in the ocsepp_custom_values_<version>.yaml file.
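    For example, if the service account created above is named sepp-sa, the parameter might be set as follows (a minimal sketch; the exact location of serviceAccountName within the file depends on the release):

    global:
      serviceAccountName: sepp-sa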
2.2.1.7 Configuring Database, Creating Users, and Granting Permissions

This section explains how database administrators can create users and databases in single-site and multisite deployments.

SEPP supports a single database (the provisional database) and a single type of user.

Note:

  • Before running the procedure for georedundant sites, ensure that the cnDBTier for georedundant sites is up and replication channels are enabled.
  • While performing a fresh installation, if SEPP is already deployed, purge the deployment and remove the database and users that were used for the previous deployment. For uninstallation procedure, see Uninstalling SEPP.
  • To install cnDBTier, refer to Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

cnDBTier Parameter Values

For Single Cluster, Single Instance (Single SEPP Instances on Dedicated cnDBTier Cluster) deployment model, the cnDBTier resources can be taken from ocsepp_dbtier_<cnDBTier_version>_custom_values_<SEPP_version>.yaml file.

Example: ocsepp_dbtier_25.2.100_custom_values_25.2.100.yaml

For Single Cluster, Multiple Instance (multiple SEPP instances on shared cnDBTier cluster), the cnDBTier parameters from cnDBTier Parameter Values Table should be updated in ocsepp_dbtier_<cnDBTier_version>_custom_values_<SEPP_version>.yaml file.

Note:

Verify the values of the following parameters before deploying SEPP in a 1+1 site georedundant setup with the single cluster, multiple instance model.

Table 2-22 cnDBTier Parameter Values

Parameter Default Values (in CV file) New Values (to be Updated in CV file)
MaxNoOfTables 1024 3000
MaxNoOfAttributes 5000 24000
MaxNoOfOrderedIndexes 1024 3700

SEPP Users

SEPP supports a single type of user:

This user has a complete set of permissions and can perform create, alter, or drop operations on tables as part of install, upgrade, rollback, or delete operations.

SEPP Database

SEPP Database contains configuration information. The operator must apply the same configuration on each site. In multisite georedundant setups, each site must have a unique SEPP database, which is replicated to the other sites. Each SEPP site can access only the information in its own provisional database.

For example:

  • For Site 1: seppdb_site_1
  • For Site 2: seppdb_site_2
  • For Site 3: seppdb_site_3
2.2.1.7.1 Single Site

This section explains how a database administrator can create the database and users, and grant permissions to the users for a single SEPP site.

Follow the steps below to manually create the SEPP database and MySQL user required for the deployment:

  1. Log in to the machine that has permission to access the SQL nodes of NDB cluster.
  2. Run the following command to log in to one of the ndbappmysqld node pods of the primary NDB cluster:

    Connect to the SQL nodes.
    kubectl exec -it ndbappmysqld-0 -n <cndb-namespace> -- bash
    where, cndb-namespace is the namespace in which cnDBTier is installed.
  3. Log in to the MySQL prompt using the root user or a user that has the permissions required to create users and databases as described in the following steps:

    Example:
    mysql -h127.0.0.1 -uroot -p<rootPassword>

    Note:

    This command may vary from system to system depending on the path to the MySQL binary and the root user credentials. After running this command, enter the password of the user specified in the command.
  4. Check whether OCSEPP database user already exists. If the user does not exist, create an OCSEPP database user by running the following queries:
    1. Run the following command to list the users:
      $ SELECT User FROM mysql.user;
    2. If the SEPP user does not exist, run the following command to create the new user:
      $ CREATE USER IF NOT EXISTS '<OCSEPP User Name>'@'%' IDENTIFIED BY '<OCSEPP User Password>';
      Example:
      $ CREATE USER IF NOT EXISTS 'seppusr'@'%' IDENTIFIED BY 'sepppasswd';

    Note:

    You must create the user on all the SQL nodes for all georedundant sites.
  5. Check if OCSEPP database already exists. If it does not exist, run the following commands to create an OCSEPP database and provide permissions to OCSEPP username created in the previous step:

    Note:

    Naming Convention for SEPP Database

    As the SEPP instances cannot share the same database, the user must provide a unique name for the SEPP database in the cnDBTier. The recommended format for the SEPP database and SEPP backup database names is as follows:

    <database-name>_<site-name>_<NF_INSTANCE_ID> where "-" in NF_INSTANCE_ID is replaced by "_".

    Example: seppdb_site1_9faf1bbc_6e4a_4454_a507_aef01a101a06

    The name of the database must:

    • start and end with an alphanumeric character
    • contain a maximum of 63 characters
    • contain only alphanumeric characters or '_'
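
    For example, the recommended name can be derived from the site name and the NF instance ID with a small shell sketch (hypothetical values):

    SITE_NAME="site1"
    NF_INSTANCE_ID="9faf1bbc-6e4a-4454-a507-aef01a101a06"
    echo "seppdb_${SITE_NAME}_${NF_INSTANCE_ID//-/_}"
    # prints: seppdb_site1_9faf1bbc_6e4a_4454_a507_aef01a101a06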
    1. Run the following command to check if database exists:
      $ show databases; 
    2. If database does not exist, run the following command for database creation:
      $ CREATE DATABASE IF NOT EXISTS <OCSEPP Database>;
      Example:
      $ CREATE DATABASE IF NOT EXISTS seppdb; 
    3. If backup database does not exist, run the following command for database creation:
      $ CREATE DATABASE IF NOT EXISTS <OCSEPP Backup Database>;
      Example:
      CREATE DATABASE IF NOT EXISTS seppbackupdb;

    Note:

    Ensure that you use the same database names while creating the database that you have used in the global parameters of the ocsepp_custom_values_<version>.yaml file. The following is an example of the database names configured in the ocsepp_custom_values_<version>.yaml file:
    
    
    global:seppDbName: "seppdb"
    global:leaderPodDbName: "seppdb"
    global:networkDbName: "seppdb"
    global:nrfClientDbName: "seppdb"
    
    Backup Database
      global:seppBackupDbName: "seppbackupdb"
    1. Run the following command to grant permissions to the user on the SEPP database:
      $ GRANT SELECT,INSERT,CREATE,ALTER,DROP,LOCK TABLES,CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON <OCSEPP Database>.* TO '<OCSEPP User Name>'@'%';
      Example:
      GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON seppdb.* TO 'seppusr'@'%';
    2. Run the following command to grant permissions to the user on the SEPP backup database:
      $ GRANT SELECT,INSERT,CREATE,ALTER,DROP,LOCK TABLES,CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON <OCSEPP backup Database>.* TO '<OCSEPP User Name>'@'%';
      Example:
      GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON seppbackupdb.* TO 'seppusr'@'%';
    3. Run the following command to grant permissions on the MySQL database:
      $ GRANT SELECT,INSERT,CREATE,ALTER,DROP,LOCK TABLES,CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON mysql.* TO '<OCSEPP User Name>'@'%';
      Example:
      GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON mysql.* TO 'seppusr'@'%';
  6. Run the following command to grant the NDB_STORED_USER permission:
    GRANT NDB_STORED_USER ON *.* TO '<OCSEPP User Name>'@'%' WITH GRANT OPTION;
    Example:
    GRANT NDB_STORED_USER ON *.* TO 'seppusr'@'%' WITH GRANT OPTION;
  7. Run the following command to flush the privileges:
    flush privileges;
  8. Run the following command to verify that the SEPP database grants were created correctly:

    show grants for '<sepp user>'@'%'; 
    Example:
    show grants for 'seppusr'@'%';
    
    +-----------------------------------------------------------------------------------------------------------------------------------------------------------+ 
    | Grants for seppusr@% | 
    +-----------------------------------------------------------------------------------------------------------------------------------------------------------+ 
    | GRANT USAGE ON *.* TO `seppusr`@`%` | 
    | GRANT NDB_STORED_USER ON *.* TO `seppusr`@`%` WITH GRANT OPTION | 
    | GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON `mysql`.* TO `seppusr`@`%` | 
    | GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON `seppbackupdb`.* TO `seppusr`@`%` | 
    | GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON `seppdb`.* TO `seppusr`@`%` |
    +-----------------------------------------------------------------------------------------------------------------------------------------------------------+
  9. Exit from MySQL prompt and SQL nodes.
2.2.1.7.2 Multisite

This section explains how a database administrator can create the database and users, and grant permissions to the users for a multisite deployment.

Follow the steps below to manually create the SEPP database and MySQL user required for the deployment:
  1. Log in to the machine that has permission to access the SQL nodes of NDB cluster.
  2. Run the following command to log in to one of the ndbappmysqld node pods of the primary NDB cluster:

    Connect to the SQL nodes.
    kubectl exec -it ndbappmysqld-0 -n <cndb-namespace> -- bash
    where, cndb-namespace is the namespace in which cnDBTier is installed.
  3. Log in to the MySQL prompt using the root user or a user that has the permissions required to create users and databases as described in the following steps:

    Example:
    mysql -h127.0.0.1 -uroot -p<rootPassword>

    Note:

    This command may vary from system to system depending on the path to the MySQL binary and the root user credentials. After running this command, enter the password of the user specified in the command.
  4. Check whether SEPP database user already exists. If the user does not exist, create an SEPP database user by running the following queries:
    1. Run the following command to list the users:
      $ SELECT User FROM mysql.user;
    2. If the SEPP user does not exist, run the following command to create the new user:
      $ CREATE USER IF NOT EXISTS '<OCSEPP User Name>'@'%' IDENTIFIED BY '<OCSEPP User Password>';
      Example:
      $ CREATE USER IF NOT EXISTS 'seppusr'@'%' IDENTIFIED BY 'sepppasswd';

    Note:

    You must create the user on all the SQL nodes for all georedundant sites.
  5. Check if SEPP database already exists. If it does not exist, run the following commands to create an SEPP database and provide permissions to SEPP username created in the previous step:

    Note:

    Naming Convention for SEPP Database

    As the SEPP instances cannot share the same database, the user must provide a unique name for the SEPP database in the cnDBTier. The recommended format for the SEPP database and SEPP backup database names is as follows:

    <database-name>_<site-name>_<NF_INSTANCE_ID> where "-" in NF_INSTANCE_ID is replaced by "_".

    Example: seppdb_site1_9faf1bbc_6e4a_4454_a507_aef01a101a06

    The name of the database must:

    • start and end with an alphanumeric character
    • contain a maximum of 63 characters
    • contain only alphanumeric characters or '_'

    Note:

    Create database for each site. For Site-2 or Site-3, ensure that the database name is different from the previous site names.
    1. Run the following command to check if database exists:
      $ show databases; 
    2. If database does not exist, run the following command for database creation:
      $ CREATE DATABASE IF NOT EXISTS <OCSEPP Database>;
      Example:
      $ CREATE DATABASE IF NOT EXISTS seppdb; 
    3. If backup database does not exist, run the following command for database creation:
      $ CREATE DATABASE IF NOT EXISTS <SEPP Backup Database>;
      Example:
      CREATE DATABASE IF NOT EXISTS seppbackupdb;

    Note:

    Ensure that you use the same database names while creating the database that you have used in the global parameters of the ocsepp_custom_values_<version>.yaml file. The following is an example of the database names configured in the ocsepp_custom_values_<version>.yaml file:
    
    
    global:seppDbName: "seppdb"
    global:leaderPodDbName: "seppdb"
    global:networkDbName: "seppdb"
    global:nrfClientDbName: "seppdb"
    
    Backup Database
      global:seppBackupDbName: "seppbackupdb"
    1. Run the following command to grant permissions to the user on the SEPP database:
      $ GRANT SELECT,INSERT,CREATE,ALTER,DROP,LOCK TABLES,CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON <OCSEPP Database>.* TO '<OCSEPP User Name>'@'%';
      Example:
      GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON seppdb.* TO 'seppusr'@'%';
    2. Run the following command to grant permissions to the user on the SEPP backup database:
      $ GRANT SELECT,INSERT,CREATE,ALTER,DROP,LOCK TABLES,CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON <OCSEPP backup Database>.* TO '<OCSEPP User Name>'@'%';
      Example:
      GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON seppbackupdb.* TO 'seppusr'@'%';
    3. Run the following command to grant permissions on the MySQL database:
      $ GRANT SELECT,INSERT,CREATE,ALTER,DROP,LOCK TABLES,CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON mysql.* TO '<OCSEPP User Name>'@'%';
      Example:
      GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON mysql.* TO 'seppusr'@'%';
  6. Run the following command to grant the NDB_STORED_USER permission:
    GRANT NDB_STORED_USER ON *.* TO '<OCSEPP User Name>'@'%' WITH GRANT OPTION;
    Example:
    GRANT NDB_STORED_USER ON *.* TO 'seppusr'@'%' WITH GRANT OPTION;
  7. Run the following command to flush the privileges:
    flush privileges;
  8. Run the following command to verify that the SEPP database grants were created correctly:

    show grants for '<sepp user>'@'%'; 
    Example:
    show grants for 'seppusr'@'%';
    
    +-----------------------------------------------------------------------------------------------------------------------------------------------------------+ 
    | Grants for seppusr@% | 
    +-----------------------------------------------------------------------------------------------------------------------------------------------------------+ 
    | GRANT USAGE ON *.* TO `seppusr`@`%` | 
    | GRANT NDB_STORED_USER ON *.* TO `seppusr`@`%` WITH GRANT OPTION | 
    | GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON `mysql`.* TO `seppusr`@`%` | 
    | GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON `seppbackupdb`.* TO `seppusr`@`%` | 
    | GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON `seppdb`.* TO `seppusr`@`%` |
    +-----------------------------------------------------------------------------------------------------------------------------------------------------------+
  9. Exit from MySQL prompt and SQL nodes.
2.2.1.8 Configuring Kubernetes Secrets for Accessing Database

This section explains how to configure Kubernetes secrets for accessing SEPP database.

  1. Run the following command to create a Kubernetes secret for the SEPP users:
    kubectl create secret generic <OCSEPP User secret name> --from-literal=mysql-username=<OCSEPP MySQL Database User Name> --from-literal=mysql-password=<OCSEPP MySQL User Password> -n <Namespace>
    Where,
    • <OCSEPP User secret name> is the secret name of the user.
    • <OCSEPP MySQL Database User Name> is the username of the ocsepp MySQL user.
    • <OCSEPP MySQL User Password> is the password of the ocsepp MySQL user.
    • <Namespace> is the namespace of SEPP deployment.

    Note:

    Note down the command used during the creation of the Kubernetes secret. This command is used for updating the secret in the future.

    Example:

    $ kubectl create secret generic ocsepp-mysql-cred  --from-literal=mysql-username=seppusr --from-literal=mysql-password=sepppasswd  -n seppsvc
  2. Run the following command to verify the secret created:
    
    $ kubectl describe secret <OCSEPP User secret name> -n <Namespace>
    
    
    Where,
    • <OCSEPP User secret name> is the secret name of the user.
    • <Namespace> is the namespace of SEPP deployment.

      For example:
    
    $ kubectl describe secret ocsepp-mysql-cred -n seppsvc
    Sample output:
    
    Name:  ocsepp-mysql-cred
    Namespace: seppsvc
    Labels: <none>
    Annotations: <none>
    Type: Opaque
    Data
    ====
    mysql-password: 10 bytes
    mysql-username: 7 bytes
    
    
    

    Note:

    If the secret name is anything other than ocsepp-mysql-cred, update the following parameter values in the ocsepp_custom_values_<version>.yaml file before deploying:
    • global:dbCredSecretName: &dbCredSecretNameRef 'ocsepp-mysql-cred'
    • global:privilegedDbCredSecretName: &privDbCredSecretNameRef 'ocsepp-mysql-cred'
  3. To update the Kubernetes secret, modify the command used in step 1 by adding the options "--dry-run=client -o yaml" and piping the output to "kubectl replace -f - -n <Namespace>", as follows:

    $ kubectl create secret generic <OCSEPP User secret name> --from-literal=mysql-username=<OCSEPP MySQL Database User Name> --from-literal=mysql-password=<OCSEPP MySQL User Password> --dry-run=client -o yaml -n <Namespace> | kubectl replace -f - -n <Namespace>
    
    Where,
    • <OCSEPP User secret name> is the secret name of the user.
    • <OCSEPP MySQL Database User Name> is the username of the ocsepp MySQL user.
    • <OCSEPP MySQL User Password> is the password of the ocsepp MySQL user.
    • <Namespace> is the namespace of SEPP deployment.
  4. Run the updated command. The following message is displayed:
    secret/<OCSEPP User secret name> replaced
    Where, <OCSEPP User secret name> is the updated secret name of the application user.

    Example:
    secret/ocsepp-mysql-cred replaced
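    For example, to rotate the password of the ocsepp-mysql-cred secret using the sample values from step 1 (a sketch; substitute your own credentials):

    $ kubectl create secret generic ocsepp-mysql-cred --from-literal=mysql-username=seppusr --from-literal=mysql-password=sepppasswd --dry-run=client -o yaml -n seppsvc | kubectl replace -f - -n seppsvc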
2.2.1.9 Configuring Kubernetes Secret for Enabling HTTPS

This section explains the steps to configure HTTPS at Ingress and Egress Gateways.

2.2.1.9.1 Configuring Secrets at N32 Gateway (n32-egress-gateway and n32-ingress-gateway)

This section explains the steps to configure secrets for enabling HTTP over TLS in N32 Ingress and Egress Gateways. This procedure must be performed before deploying SEPP.

Note:

The passwords for TrustStore and KeyStore are stored in respective password files.
To create Kubernetes secret for HTTP over TLS, the following files are required:
  • ECDSA private key and CA signed certificate of SEPP (if initialAlgorithm is ES256), or
  • RSA private key and CA signed certificate of SEPP (if initialAlgorithm is RS256)
  • TrustStore password file
  • KeyStore password file
  • CA certificate
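
The TrustStore and KeyStore password files can be created as plain-text files; a minimal sketch, assuming each file contains only the password string and the file names key.txt and trust.txt used elsewhere in this section:

  printf '%s' '<keystore_password>' > key.txt
  printf '%s' '<truststore_password>' > trust.txt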

Note:

  • The creation process for private keys, certificates, and passwords is at the discretion of the user or operator.
  • If the certificates are not available, create them by following the instructions given in the 'Creating Private Keys and Certificates for Gateways' section.
You can manage Kubernetes secrets for enabling HTTPS in SEPP using one of the following methods:
  • Managing secrets through OCCM
  • Managing secrets manually

Managing Secrets Through OCCM

To create the secrets using OCCM, see "Managing Certificates" in Oracle Communications Cloud Native Core, Certificate Management User Guide.

The secrets created by OCCM are then patched to add keyStore password and trustStore password files by running the following commands:
  1. To patch the secrets created with the keyStore password file:
    TLS_CRT=$(base64 < "key.txt" | tr -d '\n')
    kubectl patch secret server-primary-ocsepp-secret-occm -n seppsvc -p "{\"data\":{\"key.txt\":\"${TLS_CRT}\"}}"
    Where,
    • key.txt is the password file that contains KeyStore password.
    • server-primary-ocsepp-secret-occm is the secret created by OCCM.
  2. To patch the secrets created with the trustStore password file:
    TLS_CRT=$(base64 < "trust.txt" | tr -d '\n')
    kubectl patch secret server-primary-ocsepp-secret-occm -n seppsvc -p "{\"data\":{\"trust.txt\":\"${TLS_CRT}\"}}"
    Where,
    • trust.txt is the password file that contains TrustStore password.
    • server-primary-ocsepp-secret-occm is the secret created by OCCM.
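    After patching, you can optionally confirm that both password files are present in the secret; a sketch, assuming the secret and namespace names used above:

    kubectl get secret server-primary-ocsepp-secret-occm -n seppsvc -o jsonpath='{.data.key\.txt}' | base64 -d
    kubectl get secret server-primary-ocsepp-secret-occm -n seppsvc -o jsonpath='{.data.trust\.txt}' | base64 -d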

Note:

To monitor the lifecycle management of the certificates through OCCM, do not patch the Kubernetes secrets manually to update the TLS certificate or keys. It must be done through the OCCM GUI.

Managing Secrets Manually

  1. Run the following command to create secret:
    $ kubectl create secret generic <ocsepp-n32-secret> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<trust.txt> --from-file=<key.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace of SEPP deployment>
    Where,
    
    <ocsepp-n32-secret> is the secret name for n32-egress-gateway and n32-ingress-gateway.
    <ssl_ecdsa_private_key.pem> is the ECDSA private key.
    <rsa_private_key_pkcs1.pem> is the RSA private key.
    <trust.txt> is the SSL Truststore file.
    <key.txt> is the SSL Keystore file.
    <caroot.cer> is the CA root certificate.
    <ssl_rsa_certificate.crt> is the SSL RSA certificate.
    <ssl_ecdsa_certificate.crt> is the SSL ECDSA certificate.
    <Namespace> is the namespace of the SEPP deployment.
    

    Note:

    • Note down the command used during the creation of the Kubernetes secret; this command will be used for updates in the future.
    • It is recommended to use the same secret name as mentioned in the example. In case you change <ocsepp-n32-secret>, then update the k8SecretName parameter under n32-ingress-gateway and n32-egress-gateway section in the ocsepp_custom_values_<version>.yaml. For more information, see the n32-ingress-gateway and n32-egress-gateway section.
    Example:
    
    kubectl create secret generic ocsepp-n32-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=trust.txt --from-file=key.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n seppsvc
  2. Run the following command to verify the secret:
    $ kubectl describe secret <ocsepp-n32-secret> -n <Namespace>
    Where, <ocsepp-n32-secret> is the secret name for n32-egress-gateway and n32-ingress-gateway, and <Namespace> is the namespace of the SEPP deployment.
    Example:
    $ kubectl describe secret ocsepp-n32-secret -n seppsvc

Note:

If the certificates are not available, then create them following the instructions given in the 'Creating Private Keys and Certificates for Gateways' section.
2.2.1.9.2 Configuring Secrets at PLMN SEPP Egress and Ingress Gateway

This section explains the steps to configure secrets for enabling HTTPS/HTTP over TLS in Public Land Mobile Network (PLMN) Ingress and Egress Gateways. This procedure must be performed before deploying SEPP.

Note:

The passwords for TrustStore and KeyStore are stored in respective password files.
To create kubernetes secret for HTTPS/HTTP over TLS, the following files are required:
  • ECDSA private key and CA signed certificate of SEPP, if initialAlgorithm is ES256
  • RSA private key and CA signed certificate of SEPP, if initialAlgorithm is RS256
  • TrustStore password file
  • KeyStore password file
  • Certificate chain for trust store
  • Signed server certificate or Signed client certificate

Note:

The creation process for private keys, certificates, and passwords are at the discretion of the user or operator.
You can manage Kubernetes secrets for enabling HTTPS in SEPP using one of the following methods:
  • Managing secrets through OCCM
  • Managing secrets manually

Managing Secrets Through OCCM

To create the secrets using OCCM, see "Managing Certificates" in Oracle Communications Cloud Native Core, Certificate Management User Guide.

The secrets created by OCCM are then patched to add keyStore password and trustStore password files by running the following commands:
  1. To patch the secrets created with the keyStore password file:
    TLS_CRT=$(base64 < "key.txt" | tr -d '\n')
    kubectl patch secret server-primary-ocsepp-secret-occm -n seppsvc -p "{\"data\":{\"key.txt\":\"${TLS_CRT}\"}}"
    Where,
    • key.txt is the password file that contains KeyStore password.
    • server-primary-ocsepp-secret-occm is the secret created by OCCM.
  2. To patch the secrets created with the trustStore password file:
    TLS_CRT=$(base64 < "trust.txt" | tr -d '\n')
    kubectl patch secret server-primary-ocsepp-secret-occm -n seppsvc -p "{\"data\":{\"trust.txt\":\"${TLS_CRT}\"}}"
    Where,
    • trust.txt is the password file that contains TrustStore password.
    • server-primary-ocsepp-secret-occm is the secret created by OCCM.

Note:

To monitor the lifecycle management of the certificates through OCCM, do not patch the Kubernetes secrets manually to update the TLS certificate or keys. It must be done through the OCCM GUI.

Managing Secrets Manually

  1. Run the following command to create secret:
    $ kubectl create secret generic <ocsepp-plmn-secret> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<trust.txt> --from-file=<key.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace of SEPP deployment>
    Where,
    
    <ocsepp-plmn-secret> is the secret name for plmn-egress-gateway and plmn-ingress-gateway.
    <ssl_ecdsa_private_key.pem> is the ECDSA private key.
    <rsa_private_key_pkcs1.pem> is the RSA private key.
    <trust.txt> is the SSL Truststore file.
    <key.txt> is the SSL Keystore file.
    <caroot.cer> is the CA root certificate.
    <ssl_rsa_certificate.crt> is the SSL RSA certificate.
    <ssl_ecdsa_certificate.crt> is the SSL ECDSA certificate.
    <Namespace> is the namespace of the SEPP deployment.

    Note:

    • Note down the command used during the creation of the Kubernetes secret; this command will be used for updates in the future.
    • It is recommended to use the same secret name as mentioned in the example. In case you change <ocsepp-plmn-secret>, then update the k8SecretName parameter under plmn-ingress-gateway and plmn-egress-gateway section in the ocsepp-custom-values-<version>.yaml file. For more information, see the plmn-ingress-gateway and plmn-egress-gateway section.
    • For multiple CA root partners, the SEPP CA certificate must contain the CA information in a specific format: the CAs of the roaming partners are concatenated in a single file, separated by eight hyphens, as shown below (a concatenation sketch follows the example after this list):
      CA1 content
      --------
      CA2 content
      --------
      CA3 content
    Example:
    kubectl create secret generic ocsepp-plmn-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=trust.txt --from-file=key.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n seppsvc
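    The concatenated caroot.cer for multiple roaming-partner CAs (see the note above) can be assembled as follows; a sketch with hypothetical file names ca1.pem, ca2.pem, and ca3.pem:

    cat ca1.pem > caroot.cer
    printf '%s\n' '--------' >> caroot.cer
    cat ca2.pem >> caroot.cer
    printf '%s\n' '--------' >> caroot.cer
    cat ca3.pem >> caroot.cer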
  2. Run the following command to verify the secret:
    $ kubectl describe secret <ocsepp-plmn-secret> -n <Namespace>
    Where,
    
    <ocsepp-plmn-secret> is the secret name for plmn-egress-gateway and plmn-ingress-gateway.
    <Namespace> is the namespace of SEPP deployment.
    Example:
    $ kubectl describe secret  ocsepp-plmn-secret -n seppsvc
2.2.1.10 SEPP Compatibility with Kubernetes, CNE, and Kyverno Policies
This section explains SEPP compatibility with Kubernetes, CNE, and Kyverno policies.
  1. If Istio or Aspen Service Mesh (ASM) is installed on CNE, run the following command to patch the "disallow-capabilities" clusterpolicy of CNE and exclude the NF namespace before the NF deployment:
    
    kubectl patch clusterpolicy disallow-capabilities --type "json" -p '[{"op":"add","path":"/spec/rules/0/exclude/any/0/resources/namespaces/-","value":"<namespace of NF>"}]'
    where, namespace of NF is the SEPP namespace used for deployment.

    Example:

    kubectl patch clusterpolicy disallow-capabilities --type "json" -p '[{"op":"add","path":"/spec/rules/0/exclude/any/0/resources/namespaces/-","value":"seppsvc"}]'
  2. Run the following command to verify that the cluster policies are updated with the SEPP namespace in the exclude list:
    kubectl get clusterpolicies disallow-capabilities -oyaml
    Example:
    
    spec:
      background: true
      failurePolicy: Ignore
      rules:
      - exclude:
          any:
          - resources:
              kinds:
              - Pod
              - DaemonSet
              namespaces:
              - kube-system
              - occne-infra
              - rook-ceph
              - seppsvc
2.2.1.11 Configuring SEPP to Support Aspen Service Mesh

SEPP leverages Aspen Service Mesh (ASM) for all the internal and external TLS communication. The service mesh integration provides inter-NF communication and allows API gateway co-working with service mesh. The service mesh integration supports the services by deploying a special sidecar proxy in each pod to intercept all the network communication between microservices.

Supported ASM versions: 1.21.6 and 1.14.6

For ASM installation and configuration details, refer to the official Aspen Service Mesh website.

Aspen Service Mesh (ASM) configurations are categorized as follows:

  • Control Plane: It involves adding labels or annotations to inject sidecar. The control plane configurations are part of the NF Helm chart.
  • Data Plane: It helps in traffic management, such as handling NF call flows by adding Service Entries (SE), Destination Rules (DR), Envoy Filters (EF), and other resource changes such as apiVersion change between different versions. This configuration is done manually by considering each NF requirement and ASM deployment. This configuration can be done using ocsepp_servicemesh_config_custom_values_<version>.yaml file.

Data Plane Configuration

Data Plane configuration consists of following Custom Resource Definitions (CRDs):

  • Service Entry (SE)
  • Destination Rule (DR)
  • Envoy Filter (EF)
  • Peer Authentication (PA)
  • Virtual Service (VS)

Note:

Use ocsepp_servicemesh_config_custom_values_<version>.yaml file and Helm charts to add or remove CRDs that you may require due to ASM upgrades to configure features across different releases.

The data plane configuration is applicable in the following scenarios:

  • Service Entry: Enables adding additional entries into Sidecar's internal service registry, so that auto-discovered services in the mesh can access or route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints).
  • Destination Rule: Defines policies that apply to traffic intended for service after routing has occurred. These rules specify configuration for load balancing, connection pool size from the sidecar, and outlier detection settings to detect and evict unhealthy hosts from the load balancing pool.
  • Envoy Filters: Sidecars rewrite the header with their own default value, so the headers from back-end services are lost. Envoy Filters help pass the headers from back-end services so that they can be used as is, for example, the server header.
  • Peer Authentication: Used for service-to-service authentication to verify the client making the connection. This template can be used to change the default mTLS mode on the deployment. It allows values such as STRICT, PERMISSIVE, and DISABLE.
  • Virtual Service: Defines a set of traffic routing rules to apply when a host is addressed. Each routing rule defines matching criteria for the traffic of a specific protocol. If the traffic is matched, then it is sent to a named destination service (or subset or version of it) defined in the registry.

Service Mesh Configuration File

The following are supported fields in CRD:
  • Service Entry
    • hosts
    • exportTo
    • location
    • addresses
    • ports.name
    • ports.number
    • ports.protocol
  • Destination Rule
    • host
    • mode
    • name
    • exportTo
  • Envoy Filters
    • labelselector
    • applyTo
    • filtername
    • operation
    • typeconfig
    • configkey
    • configvalue
  • Peer Authentication
    • name
    • labelselector
    • tlsmode
  • Virtual Service
    • name
    • prefix
    • host
    • destinationhost
    • port
    • exportTo
    • attempts

For more information about the CRDs and the parameters, see Aspen Service Mesh.

A sample ocsepp_servicemesh_config_custom_values_<version>.yaml is available in the Custom_Templates file. To download the file, see Customizing SEPP.

Note:

To connect to vDBTier, create a Service Entry (SE) and Destination Rule (DR) for the MySQL connectivity service if the database is in a different cluster. Otherwise, the sidecar rejects the request, as vDBTier does not support sidecars.
2.2.1.11.1 Predeployment Configurations

This section explains the predeployment configuration procedure to install SEPP with Service Mesh support.

Note:

From Release 25.1.100 onwards, SEPP ASM supports mediation.

Prerequisites

Ensure that ASM is deployed on the cluster. Validate the following parameters:
  1. Run the following command to verify the value of the certificateCustomFields parameter in the namespace in which ASM is deployed. This parameter should be set to true.
    kubectl describe  cm istio-sidecar-injector -n <namespace in which ASM is deployed>| grep "certificateCustomFields"
    
    Example:
    
    
    kubectl describe  cm istio-sidecar-injector -nistio-system | grep "certificateCustomFields"
    
           "certificateCustomFields": true,
  2. If this parameter is set to false, update the ASM charts to set it to true and perform an upgrade.
    ./manifests/charts/istio-control/istio-discovery/values.yaml
             certificateCustomFields: true
  3. Run the following command to verify that istio-base and istiod are installed in the cluster.
    helm ls -nistio-system 
    Example:
    
    helm ls -nistio-system 
    NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
    istio-base      istio-system    1               2024-12-20 10:31:27.240210738 +0000 UTC deployed        base-1.14.6-am1         1.14.6-am1 
    istiod          istio-system    1               2024-12-20 10:32:45.905279498 +0000 UTC deployed        istiod-1.14.6-am1       1.14.6-am1

Predeployment Configuration

Follow the predeployment configuration procedure as mentioned below:

  1. Creating SEPP namespace
    1. Run the following command to verify if the required namespace already exists in the system:
      kubectl get namespaces
    2. In the output of the above command, check whether the required namespace is available. If it is not available, create the namespace using the following command:
      kubectl create namespace <namespace>

      Where,

      <Namespace> is the SEPP namespace.

      Example:
      kubectl create namespace seppsvc

SEPP Specific Changes

In the ocsepp_custom_values_<version>.yaml file, make the following changes:

  1. Modify the serviceMeshCheck flag from false to true in all the sections.
  2. Modify all occurrences of the serviceMeshEnabled flag from false to true.
  3. In the PLMN Ingress Gateway section, do the following:
    1. change initssl to false.
    2. change enableIncomingHttp to true.
    3. change enableIncomingHttps to false.
  4. In the N32 Egress Gateway section, do the following:
    1. Update the sanValues parameter to the SEPP Inter PLMN FQDN.

      Example : sanValues: ["sepp2.inter.oracle.com"]

      Note:

      This value should match the FQDN in the ssl.conf file, which is used for creating the TLS certificate.
    2. change initssl to false.
    3. change enableOutgoingHttps to false.
    4. change the following parameter to false:
      sepp:
         tlsConnectionMode: false
  5. In the N32 Ingress Gateway section, do the following:
    1. change initssl to false.
    2. change enableIncomingHttp to true.
    3. change enableIncomingHttps to false.
    4. Do the following to extract the SAN from the DNS header received from the N32 Egress Gateway (skip this step if deploying SEPP only for ATS):
      xfccHeaderValidation:
       validation:
         enabled: false
       extract:
         enabled: true
         certextractindex: 0
         extractfield: DNS
         extractindex: -99
      
      
  6. In the PLMN Egress Gateway section, do the following:
    1. change initssl to false.
    2. change enableOutgoingHttps to false.
  7. In the PN32C microservice, make the following changes for the SAN header name, regex, and delimiter (skip this step if deploying SEPP only for ATS):

    Note:

    If SEPP is deployed with a single service account, the default value of the extractSANDelimiter parameter (",") should not be modified.
    sanHeaderName: "oc-xfcc-dns"
    extractSANRegex: "(.*)"
    extractSANDelimiter: " "
    
  8. In the PN32F microservice, make the following changes for the SAN header name, regex, and delimiter (skip this step if deploying SEPP only for ATS):

    Note:

    If SEPP is deployed with a single service account, the default value of the extractSANDelimiter parameter (",") should not be modified.
    sanHeaderName: "oc-xfcc-dns"
    extractSANRegex: "(.*)"
    extractSANDelimiter: " "
    
2.2.1.11.2 Installation of ASM Configuration Charts
This section explains the installation of ASM configuration charts.

In the ocsepp_servicemesh_config_custom_values_<version>.yaml file, make the following changes:

  1. Create a destination rule to establish a connection between OCSEPP and cnDBTier. Sample template is given below:

    Note:

    If the cnDBTier does not have Istio sidecar injection, create the destination rule. Otherwise, skip this step.
    Destination Rules:
    
    - host: "<db-service-fqdn>.<db-namespace>.svc.<domain>"
      mode: DISABLE
      name: ocsepp-db-service-dr
      exportTo: |-
        [ "." ]
      namespace: seppsvc 
    
    where,
    • host is the complete hostname of cnDBTier.

      For example, mysql-connectivity-service.cndb-ankit.svc.occne-24-2-cluster-user1
    • namespace is the namespace in which SEPP will be deployed.
    DestinationRule section:
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: ocsepp-db-service-dr
      namespace: <ocsepp-namespace>
    spec:
      exportTo:
      - "."
      host: "<db-service-fqdn>.<db-namespace>.svc.<domain>"          # Example: mysql-connectivity-service.seppsvc.svc.cluster.local
      trafficPolicy:
        tls:
          mode: DISABLE
  2. Modify the service entry in pod networking so that the pods can access the Kubernetes API server. Update the values of the following parameters (commands to extract them are sketched after this procedure):
    • hosts
    • addresses
    Sample kube-api-server service entry:
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: kube-api-server
      namespace: <ocsepp-namespace>
    spec:
      hosts:
      - kubernetes.default.svc.<domain> # domain can be extracted using kubectl -n kube-system get configmap kubeadm-config -o yaml | grep -i dnsDomain
      exportTo:
      - "."
      addresses:
      - <20.96.0.1> # cluster IP of kubernetes api server, can be extracted using  this command --  kubectl get svc -n default
      location: MESH_INTERNAL
      ports:
      - number: 443
        name: https
        protocol: HTTPS
      resolution: NONE
  3. PeerAuthentication is created for the namespace with the default mTLS mode set to PERMISSIVE. To change the mode, set the following parameter to STRICT or DISABLE:
     tlsmode: STRICT
    A sample template is as follows:


    
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
     name: ocsepp-peerauthentication
    spec:
     selector:
      matchLabels:
       app.kubernetes.io/part-of: ocsepp
     mtls:
      mode: PERMISSIVE

    Note:

    • After a successful deployment, you can change the PeerAuthentication mtls mode from PERMISSIVE to STRICT and perform a Helm upgrade.
    • Ensure that spec.selector.matchLabels is set to app.kubernetes.io/part-of: ocsepp. Do not change this value.
  4. Optional: Uncomment the SE section below, if deploying OCSEPP in ASM mode (only for ATS):
    apiVersion: networking.istio.io/v1beta1
    kind: ServiceEntry
    metadata:
      name: stub-serviceentry
      namespace: seppsvc
    spec:
      exportTo:
      - '*'
      hosts:
      - '*.svc.cluster.local'
      - '*.3gppnetwork.org'
      location: MESH_INTERNAL
      ports:
      - name: http2
        number: 8080
        protocol: HTTP2
      resolution: NONE
  5. The Envoy filter is deployed with the following default configuration:
    
    envoyFilters_v_19x_111x:
      - name: serverheaderfilter
        configpatch:
          - applyTo: NETWORK_FILTER
            filtername: envoy.filters.network.http_connection_manager
            operation: MERGE
            typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
            configkey: server_header_transformation
            configvalue: PASS_THROUGH
  6. Run the following command to verify if all CRDs are installed:
    kubectl get <CRD-Name> -n <Namespace>
    Where,
    • <CRD-Name> is the resource name
    • <Namespace> is the namespace in which SEPP is deployed
    Example:
    
    kubectl get se,dr,peerauthentication,envoyfilter,vs -n seppsvc

    Note:

    Any modification to the existing CRDs or adding CRDs can be done by updating the ocsepp_servicemesh_config_custom_values_<version>.yaml file and running Helm upgrade.
  7. Run the following command to install ASM specific resources in your namespace:
    helm install -f ocsepp_servicemesh_config_custom_values_<version>.yaml <release-name> ocsepp-servicemesh-config-<version>.tgz --namespace <sepp namespace>

    Example:

    helm install -f ocsepp_servicemesh_config_custom_values_25.2.100.yaml ocsepp-servicemesh ocsepp-servicemesh-config-25.2.100.tgz --namespace seppsvc
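The domain and cluster IP referenced in the kube-api-server service entry (step 2) can be extracted with the commands below. The first is taken from the inline comment in the sample; the jsonpath form in the second is an assumed convenience over the plain kubectl get svc shown in the comment:

    # Cluster DNS domain (for the hosts parameter):
    kubectl -n kube-system get configmap kubeadm-config -o yaml | grep -i dnsDomain

    # Cluster IP of the Kubernetes API server (for the addresses parameter):
    kubectl get svc kubernetes -n default -o jsonpath='{.spec.clusterIP}'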
2.2.1.11.3 Deploying SEPP with ASM
This section explains how to deploy SEPP using ASM.
  1. Create namespace label for auto sidecar injection to automatically add the sidecars in all of the pods spawned in SEPP namespace:
    kubectl label ns <ocsepp-namespace> istio-injection=enabled
    Example:
    kubectl label ns seppsvc istio-injection=enabled
  2. Run the following command to verify that label is applied on the namespace:
    
    kubectl describe ns seppsvc
    
    Output:
    
    Name:         seppsvc
    Labels:       istio-injection=enabled
                  kubernetes.io/metadata.name=seppsvc
    Annotations:  <none>
    Status:       Active
  3. Update ocsepp_custom_values_<version>.yaml with the following:
    1. Update the following sidecar resource configuration in allResources section of customExtension in global section:
      • sidecar.istio.io/proxyCPULimit: "2"
      • sidecar.istio.io/proxyMemoryLimit: 1Gi
      • sidecar.istio.io/proxyCPU: 200m
      • sidecar.istio.io/proxyMemory: 128Mi
        
        customExtension:
          allResources:
            labels: {}
            annotations:
              sidecar.istio.io/proxyCPULimit: "2"
              sidecar.istio.io/proxyMemoryLimit: 1Gi
              sidecar.istio.io/proxyCPU: 200m
              sidecar.istio.io/proxyMemory: 128Mi
          lbServices:
            labels: {}
            annotations: {}
          lbDeployments:
            labels: {}
            annotations: {}
          nonlbServices:
            labels: {}
            annotations: {}
          nonlbDeployments:
            labels: {}
            annotations: {}
        
    2. To scrape metrics from SEPP pods, add oracle.com/cnc: "true" annotation under lbDeployments and nonlbDeployments section of customExtension in global section:

      Note:

      This step is required only if OSO is deployed.
      
      customExtension:
        allResources:
          labels: {}
          annotations: {}
        lbServices:
          labels: {}
          annotations: {}
        lbDeployments:
          labels: {}
          annotations:
            oracle.com/cnc: "true"
        nonlbServices:
          labels: {}
          annotations: {}
        nonlbDeployments:
          labels: {}
          annotations:
            oracle.com/cnc: "true"
      
      
    3. To enable Prometheus to scrape metrics from the app-info, perf-info, and ocpm-config pods, add the following annotation in the service-specific customExtension section: traffic.sidecar.istio.io/excludeInboundPorts: "9000".

      To exclude additional inbound and outbound ports, the same procedure can be applied to these microservices.

      Note:

      The addition of excludeInboundPorts and excludeOutboundPorts in global sections like customExtension.lbDeployments or customExtension.nonlbDeployments is not allowed, as it can override essential ports and disrupt inter-microservice communication.
      
      nrfclient:
        perf-info:
          deployment:
            customExtension:
              labels: {}
              annotations:
                traffic.sidecar.istio.io/excludeInboundPorts: "9000"
        config-server:
          deployment:
            customExtension:
              labels: {}
              annotations:
                traffic.sidecar.istio.io/excludeInboundPorts: "9000"
        appinfo:
          deployment:
            customExtension:
              labels: {}
              annotations:
                traffic.sidecar.istio.io/excludeInboundPorts: "9000"

      Note:

      This step is required when OSO is deployed with ASM sidecar.
    4. Add the following sample configurations to exclude Istio inbound and outbound ports in the nfmanagement and nfdiscovery microservices:
      
      cacheServicePortStart: 8095
      cacheServicePortEnd: 8096
      istioExcludePorts: true
      cacheServiceLivenessPort: 7
      
    5. In the nrf-client-nfmanagement and nrf-client-nfdiscovery sections, verify that the istioExcludePorts parameter includes the value 9091:
      istioExcludePorts: 53,9091
  4. To deploy and use SEPP with ASM, ensure that you use the ocsepp_custom_values_<version>.yaml file while performing helm install or upgrade, as shown in the example after this list.

    The file must have all the necessary changes mentioned in the Predeployment Configurations section for deploying SEPP with ASM.
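    For example, using the release name and namespace used elsewhere in this guide:

    helm install ocsepp-release ocsepp-25.2.100.tgz --namespace seppsvc -f ocsepp_custom_values_25.2.100.yaml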

2.2.1.11.4 Postdeployment Configuration

This section explains the postdeployment configurations.

Note:

Steps 1 to 3 are not required if SEPP is deployed only in ATS mode.
  1. In the ocsepp_servicemesh_config_custom_values_<version>.yaml file, do the following:
    1. Uncomment the section and add the following values for Service entry:
      1. In the hosts parameter, add the FQDN of the Remote SEPP. If there are different N32C and N32F endpoints, create two separate service entries.
      2. In the addresses parameter, add the IP address of the N32 Ingress Gateway of the Remote SEPP in the <IP/32> format.
      3. In the endPointAddress parameter, add the same IP address as in step 2.
      4. Ensure that ports.number parameter is updated to the port number of N32 Ingress Gateway of Remote SEPP.

        Sample service entry:

        - hosts: |-
            [ "prod.sepp.inter.oracle.com" ]         # FQDN of Remote SEPP
          exportTo: |-
            [ "." ]
          location: MESH_INTERNAL
          addresses: |-
            [ "10.233.12.232/32" ]
          ports:
          - number: 80
            name: http2
            protocol: HTTP2
          resolution: STATIC                  # set to NONE or DNS as needed; if wildcard characters are used in the FQDN, it must be NONE
          endPointAddress: 10.233.12.232
          name: seppendpoint
    2. Uncomment the section and add values for Virtual service as follows:
      1. In the host parameter, add the FQDN of the Remote SEPP. If there are different N32C and N32F endpoints, create two separate virtual services.
      2. In the destinationhost parameter, add the Remote SEPP FQDN, Remote SEPP namespace, and Remote SEPP domain name.
      3. Ensure that port parameter is updated to the port number of N32 Ingress Gateway of Remote SEPP.

        Sample virtual service:

        
        virtualService:
          - name: remote-sepp-vs
            prefix: "/"
            host: prod.sepp.inter.oracle.com
            destinationhost: <remote-sepp-fqdn>.<remote-sepp-namespace>.svc.<domain>
            port: 80 
            attempts: "0"
            exportTo: |- 
              [ "." ]
        
      4. Uncomment the following sections to disable Istio retries on the 503 response code for each microservice, as needed:
        #Uncomment the virtualservices below to disable istio retries when 503 is received. Services can be added as per the templates below.
        #NOTE: Replace <ocsepp-release-name> with the OCSEPP release name
          - name: no-istio-retries-for-plmn-ingress-gateway
            prefix: "/"
            host: <ocsepp-release-name>-plmn-ingress-gateway
            destinationhost: <ocsepp-release-name>-plmn-ingress-gateway
            port: 80 
            attempts: "0"
            exportTo: |- 
              [ "." ]
        
          - name: no-istio-retries-for-n32-ingress-gateway
            prefix: "/"
            host: <ocsepp-release-name>-n32-ingress-gateway
            destinationhost: <ocsepp-release-name>-n32-ingress-gateway
            port: 80 
            attempts: "0"
            exportTo: |- 
              [ "." ]
        
          - name: no-istio-retries-for-n32-egress-gateway
            host: <ocsepp-release-name>-n32-egress-gateway
            destinationhost: <ocsepp-release-name>-n32-egress-gateway
            attempts: "0"
            exportTo: |- 
              [ "." ]
        
          - name: no-istio-retries-for-plmn-egress-gateway
            prefix: "/"
            host: <ocsepp-release-name>-plmn-egress-gateway
            destinationhost: <ocsepp-release-name>-plmn-egress-gateway
            port: 8080
            attempts: "0"
            exportTo: |- 
              [ "." ]
        
          - name: no-istio-retries-for-pn32f
            host: <ocsepp-release-name>-pn32f-svc
            destinationhost: <ocsepp-release-name>-pn32f-svc
            attempts: "0"
            exportTo: |- 
              [ "." ]
        
          - name: no-istio-retries-for-cn32f
            host: <ocsepp-release-name>-cn32f-svc
            destinationhost: <ocsepp-release-name>-cn32f-svc
            attempts: "0"
            exportTo: |- 
              [ "." ]
        
          - name: no-istio-retries-for-config-mgr-svc
            host: <ocsepp-release-name>-config-mgr-svc
            destinationhost: <ocsepp-release-name>-config-mgr-svc
            attempts: "0"
            exportTo: |- 
              [ "." ]
        
          - name: no-istio-retries-for-nrf-client-nfdiscovery
            host: <ocsepp-release-name>-sepp-nrf-client-nfdiscovery
            destinationhost: <ocsepp-release-name>-sepp-nrf-client-nfdiscovery
            attempts: "0"
            exportTo: |- 
              [ "." ]
        
          - name: no-istio-retries-for-nrf-client-nfmanagement
            host: <ocsepp-release-name>-sepp-nrf-client-nfmanagement
            destinationhost: <ocsepp-release-name>-sepp-nrf-client-nfmanagement
            attempts: "0"
            exportTo: |- 
              [ "." ]
        
          - name: no-istio-retries-for-perf-info
            host: <ocsepp-release-name>-sepp-perf-info
            destinationhost: <ocsepp-release-name>-sepp-perf-info
            attempts: "0"
            exportTo: |-
              [ "." ]
        
          - name: no-istio-retries-for-cn32c-svc 
            host: <ocsepp-release-name>-cn32c-svc 
            destinationhost: <ocsepp-release-name>-cn32c-svc 
            attempts: "0"
            exportTo: |-
              [ "." ]
        
          - name: no-istio-retries-for-pn32c-svc 
            host: <ocsepp-release-name>-pn32c-svc 
            destinationhost: <ocsepp-release-name>-pn32c-svc 
            attempts: "0"
            exportTo: |-
              [ "." ]
        
          - name: no-istio-retries-for-alternate-route
            host: <ocsepp-release-name>-alternate-route
            destinationhost: <ocsepp-release-name>-alternate-route
            attempts: "0"
            exportTo: |-
              [ "." ]
        
          - name: no-istio-retries-for-nf-mediation
            host: <ocsepp-release-name>-nf-mediation
            destinationhost: <ocsepp-release-name>-nf-mediation
            attempts: "0"
            exportTo: |-
              [ "." ]
        
  2. The above changes must be made on both C-SEPP and P-SEPP. Ensure that correct values are populated on both sides.
  3. Run the following command to add these changes:
    helm upgrade -f ocsepp_servicemesh_config_custom_values_<version>.yaml <release-name> ocsepp-servicemesh-config-<version>.tgz --namespace <ns>
    Example:
    helm upgrade -f ocsepp_servicemesh_config_custom_values_25.2.100.yaml ocsepp-servicemesh ocsepp-servicemesh-config-25.2.100.tgz --namespace seppsvc

    Enable Inter-NF communication

    For every new NF participating in call flows where SEPP acts as a client, a DestinationRule and a ServiceEntry must be created in the SEPP namespace to enable communication. The following is an example of inter-NF communication with SEPP:
    • SEPP to NRF communication (for registration and heartbeat): create the CRDs as described in the steps above; a hedged sketch follows.
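      A hedged sketch of such a DestinationRule entry, following the template from Installation of ASM Configuration Charts; the NRF service FQDN shown is an assumed example, not a value shipped with SEPP:

      - host: "ocnrf-ingressgateway.nrfns.svc.cluster.local"   # assumed NRF service FQDN
        mode: DISABLE
        name: ocsepp-nrf-dr
        exportTo: |-
          [ "." ]
        namespace: seppsvc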
    OSO deployment
  1. If OSO is deployed with service mesh, add the following annotation in the OSO ocoso_csar_vzw_<release-number>_prom_custom_values.yaml file to exclude the outbound ports of all the SEPP services.

    Example:

    traffic.sidecar.istio.io/excludeOutboundPorts: 9090, 9093, 9094, 8085, 9091, 8091, 9000, 8081
    

    Note:

    This is applicable only when OSO is deployed with an Istio sidecar.
For more information on OSO deployment, see Oracle Communications Operations Services Overlay Installation and Upgrade Guide and Oracle Communications Operations Services Overlay User Guide.
2.2.1.11.5 Deleting Service Mesh

This section describes the steps to disable or delete the service mesh.

To disable service mesh, run the following command:

kubectl label --overwrite namespace seppsvc istio-injection=disabled

To verify if service mesh is disabled, run the following command:

kubectl get se,dr,peerauthentication,envoyfilter,vs -n seppsvc
To delete service mesh, run the following command:
helm delete <helm-release-name> -n <namespace>
Where,
  • <helm-release-name> is the release name used by the helm command. This release name must be the same as the release name used for ServiceMesh.
  • <namespace> is the deployment namespace used by the Helm command.
For example:
helm delete ocsepp-servicemesh -n seppsvc

Note:

The changes resulting from disabling the service mesh are reflected only after SEPP is redeployed.
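For example, after disabling injection and deleting the service mesh release as shown above, redeploy SEPP so that the pods are recreated without sidecars; whether this is done through a Helm upgrade (sketched below with the names from this guide) or an uninstall and reinstall depends on your operational policy:

helm upgrade ocsepp-release ocsepp-25.2.100.tgz --namespace seppsvc -f ocsepp_custom_values_25.2.100.yaml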
2.2.1.12 Configuring Network Policies

Kubernetes network policies allow you to define ingress or egress rules based on Kubernetes resources such as Pod, Namespace, IP, and Port. These rules are selected based on Kubernetes labels in the application. These network policies enforce access restrictions for all the applicable data flows except communication from Kubernetes node to pod for invoking container probe.

Note:

Configuring network policies is an optional step. Based on the security requirements, network policies may or may not be configured.

For more information on network policies, see https://kubernetes.io/docs/concepts/services-networking/network-policies/.
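As an illustration only (this policy is not shipped with SEPP), a minimal NetworkPolicy that admits ingress traffic to SEPP pods from within the same namespace could look as follows; the pod label is taken from the sample output later in this section, and the policy name is hypothetical:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-allow-intra-namespace
  namespace: seppsvc
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/part-of: ocsepp
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}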

Note:

  • If traffic between the pods is unexpectedly blocked or allowed even after applying network policies, check whether any existing policy impacts the same pod or set of pods and alters the overall cumulative behavior.
  • If the default ports of services such as Prometheus, Database, or Jaeger are changed, or if Ingress or Egress Gateway names are overridden, update them in the corresponding network policies.

Configuring Network Policies

The following are the various operations that can be performed for network policies:

2.2.1.12.1 Installing Network Policies

Prerequisite

Network Policies are implemented by using the network plug-in. To use network policies, you must use a networking solution that supports Network Policy.

Note:

For a fresh installation, it is recommended to install Network Policies before installing SEPP. However, if SEPP is already installed, you can still install the Network Policies.

To install network policy:

  1. Open the ocsepp_network_policies_custom_values_<version>.yaml file provided in the release package zip file. For downloading the file, see Downloading SEPP Package, Pushing the SEPP Images to Customer Docker Registry, and Pushing the Roaming Hub or Hosted SEPP Images to Customer Docker Registry.
  2. The file is provided with the default network policies. If required, update the ocsepp_network_policies_custom_values_<version>.yaml file. For more information on the parameters, see the Configuration Parameters for network policy parameter table.

    Note:

    • To run ATS, uncomment the following policies from ocsepp_network_policies_custom_values_<version>.yaml file:
      • allow-ingress-traffic-to-notification
      • allow-egress-ats
      • allow-ingress-ats
    • To connect with CNC Console, update the below parameter in the allow-ingress-from-console policy in the ocsepp_network_policies_custom_values_<version>.yaml file:
      • kubernetes.io/metadata.name: <namespace in which CNCC is deployed>
    • To copy messages from plmn-ingress-gateway and n32-ingress-gateway to kafka broker in Data Director, update the below parameter in the allow-egress-to-data-director-from-igw policy in the ocsepp_network_policies_custom_values_<version>.yaml file:
      • kubernetes.io/metadata.name: <namespace in which kafka broker is present>
    • In allow-ingress-prometheus and allow-egress-to-prometheus policies, kubernetes.io/metadata.name parameter must contain the value for the namespace where Prometheus is deployed, and app.kubernetes.io/name parameter value should match the label from Prometheus pod.
    • The following network policies require modification for ASM deployment. The required modifications are mentioned in the comments in the ocsepp_network_policies_custom_values_<version>.yaml file. Update the policies as per the comments.
      • allow-ingress-sbi-n32-igw
      • allow-ingress-sbi-plmn-igw
  3. Run the following command to install the network policies:
    helm install <helm-release-name> <network-policy>/ -n <namespace> -f <custom-value-file>

    For Example:

    helm install ocsepp-network-policy ocsepp-network-policy-25.2.100/ -n seppsvc -f ocsepp_network_policies_custom_values_25.2.100.yaml

    Where,

    • helm-release-name: ocsepp-network-policy helm release name.
    • custom-value-file: ocsepp-network-policy custom value file.
    • namespace: SEPP namespace.
    • network-policy: location where the network-policy package is stored.

Note:

  • Connections that were created before installing the network policies and that still persist are not impacted by the new network policies; only new connections are impacted.
  • If you are using the ATS suite along with network policies, SEPP and ATS must be installed in the same namespace.
2.2.1.12.2 Upgrading Network Policies

To add, delete, or update network policy:

  1. Modify the ocsepp_network_policies_custom_values_<version>.yaml file to update, add, and delete the network policy.
  2. Run the following command to upgrade the network policies:
helm upgrade <helm-release-name> <network-policy>/ -n <namespace> -f <custom-value-file>

For Example:

helm upgrade ocsepp-network-policy ocsepp-network-policy-<version>/ -n seppsvc -f ocsepp_network_policies_custom_values_<version>.yaml
Where,
  • helm-release-name: ocsepp-network-policy Helm release name.
  • custom-value-file: ocsepp-network-policy custom value file.
  • namespace: SEPP namespace.
  • network-policy: location where the network-policy package is stored.
2.2.1.12.3 Verifying Network Policies

Run the following command to verify that the network policies have been applied successfully:

kubectl get networkpolicy -n <namespace>

For Example:

kubectl get networkpolicy -n seppsvc

Where,

  • namespace: SEPP namespace
Sample output:
NAME                         POD-SELECTOR                                             AGE
allow-egress-database          app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway)   2m35s
allow-egress-dns               app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway)   2m35s
allow-egress-jaeger            app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway)   2m35s
allow-egress-k8-api            app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway)   2m35s
allow-egress-to-prometheus     app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway)   7s
allow-egress-to-sepp-pods      app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway)   2m35s
allow-ingress-from-console     app.kubernetes.io/name=config-mgr-svc                                   2m35s
allow-ingress-from-sepp-pods   app.kubernetes.io/part-of=ocsepp                                        2m35s
allow-ingress-prometheus       app.kubernetes.io/part-of=ocsepp                                        2m35s
allow-ingress-sbi-n32-igw      app.kubernetes.io/name=n32-ingress-gateway                              2m35s
allow-ingress-sbi-plmn-igw     app.kubernetes.io/name=plmn-ingress-gateway                             2m35s
deny-egress-all-except-egw     app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway)   2m35s
deny-ingress-all               app.kubernetes.io/part-of=ocsepp                                        2m35s
2.2.1.12.4 Uninstalling Network Policies
  1. Run the following command to uninstall the network policies:
helm uninstall <helm-release-name> -n <namespace>

For Example:

helm uninstall ocsepp-network-policy -n seppsvc

Note:

While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.
2.2.1.12.5 Configuration Parameters for Network Policies

This section includes information about the supported Kubernetes resource and configuration parameters for configuring Network Policies.

Table 2-23 Supported Kubernetes Resource for Configuring Network Policies

Parameter Description Details
apiVersion This is a mandatory parameter.

Specifies the Kubernetes version for access control.

Note: This is the supported API version for network policy. This is a read-only parameter.

DataType: String

Default Value: networking.k8s.io/v1
kind This is a mandatory parameter.

Represents the kind of REST resource this object represents.

Note: This is a read-only parameter.

DataType: String

Default Value: NetworkPolicy

Table 2-24 Configuration Parameters for Network Policies

Parameter Description Details
metadata.name This is a mandatory parameter.

Specifies a unique name for the network policy.
DataType: String

Default Value: {{ .metadata.name }}
spec.{} This is a mandatory parameter.

This consists of all the information needed to define a particular network policy in the given namespace.

Note: SEPP supports the spec parameters defined in Kubernetes Resource Category.

DataType: Object

Default Value: NA

For more information about this functionality, see "Network Policies" in the Cloud Native Core, Security Edge Protection Proxy User Guide.

2.2.1.13 Configuring Traffic Segregation

This section provides information on how to configure Traffic Segregation in SEPP. For a description of the "Traffic Segregation" feature, see the "Traffic Segregation" section in the "SEPP Supported Features" chapter of Oracle Communications Cloud Native Core, Security Edge Protection Proxy User Guide.

Various networks can be created at the time of CNE cluster installation. The following can be customized at that time using the cnlb.ini file provided as part of the CNE installation:

  1. Number of network pools
  2. Number of Egress IPs
  3. Number of Service IPs/Ingress IPs
  4. External IPs/subnet

For more information, see Oracle Communications Cloud Native Core, Cloud Native Environment User Guide.

To use one or multiple interfaces, you must configure annotations in the deployment.customExtension.annotations parameter of the ocsepp_custom_values_<version>.yaml file.

Configuration at Ingress Gateway

Use the following annotation to configure network segregation at ingress-side in ocsepp_custom_values_<version>.yaml:

Annotation for a single interface

k8s.v1.cni.cncf.io/networks: default/<network interface>@<network interface>
oracle.com.cnc/cnlb: '[{"backendPortName": "<igw port name>", "cnlbIp": "<external IP>","cnlbPort":"<port number>"}]'

Here,

  • k8s.v1.cni.cncf.io/networks: Contains all the network attachment information the pod uses for network segregation.
  • oracle.com.cnc/cnlb: Defines the service IP and port configurations that the deployment uses for ingress load balancing.

    Where,

    • cnlbIp is the front-end IP utilized by the application.
    • cnlbPort is the front-end port used in conjunction with the CNLB IP for load balancing.
    • backendPortName is the backend port name of the container that needs load balancing, retrievable from the deployment or pod spec of the application.

Sample annotation for a single interface:


k8s.v1.cni.cncf.io/networks: default/nf-sig1-int8@nf-sig1-int8
oracle.com.cnc/cnlb: '[{"backendPortName": "igw-port", "cnlbIp": "10.123.155.16","cnlbPort":"80"}]' 
Sample annotation for multiport:
        k8s.v1.cni.cncf.io/networks: default/nf-oam-int5@nf-oam-int5
        oracle.com.cnc/cnlb: '[{"backendPortName": "query", "cnlbIp": "10.75.180.128","cnlbPort": "80"}, {"backendPortName": "admin", "cnlbIp": "10.75.180.128", "cnlbPort":"16687"}]'

In the above example, each item in the list refers to a different backend port name with the same CNLB IP, but the ports for the front end are distinct.

Ensure that the backend port name aligns with the container port name specified in the deployment's specification, which needs to be load balanced from the port list. The CNLB IP represents the external IP of the service, and cnlbPort is the external-facing port:
ports:
- containerPort: 16686
  name: query
  protocol: TCP
- containerPort: 16687
  name: admin
  protocol: TCP

Configuration at Egress Gateway

Use the following annotation to configure network segregation at egress-side in ocsepp_custom_values_<version>.yaml:

Sample annotation for a single interface:
 k8s.v1.cni.cncf.io/networks: default/nf-sig-egr1@nf-sig-egr1
Sample annotation for multiple interfaces:
k8s.v1.cni.cncf.io/networks: default/nf-oam-egr1@nf-oam-egr1,default/nf-sig-egr1@nf-sig-egr1

Note:

  • The network attachments will be deployed as a part of cluster installation only.
  • The network attachment name should be unique for all the pods.

For information about the above mentioned annotations, see "Configuring Cloud Native Load Balancer (CNLB)" in Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

2.2.2 Installation Tasks

This section provides installation procedures to install Security Edge Protection Proxy (SEPP) using Command Line Interface (CLI).

Before installing SEPP, you must complete Prerequisites and Preinstallation Tasks for both the deployment methods.

2.2.2.1 Installing SEPP Package

To install the SEPP package:

  1. Navigate to the Helm directory, which is a part of the Files directory of the unzipped CSAR package. Run the following command:
    cd Files/Helm
  2. Run the following command to verify the SEPP Helm charts in the Helm directory:
    ls

    The output must be:

    • ocsepp-25.2.100.tgz
    • ocsepp-network-policy-25.2.100.tgz
    • ocsepp-servicemesh-config-25.2.100.tgz

  3. Customize the ocsepp_custom_values_25.2.100.yaml file with the required deployment parameters. See Customizing SEPP section to customize the file. For more information about predeployment parameter configurations, see Predeployment Configuration tasks.

    Note:

    Customize the ocsepp_custom_values_25.2.100.yaml file for SEPP deployment. See Customizing SEPP section for the details of the parameters. Some of the mandatory parameters are:
    • dockerRegistry
    • namespace
    • mysql.primary.host
    • SEPP inter and intra FQDN (nfFqdnRef, viaHeaderSeppViaInterFqdn, viaHeaderSeppViaIntraFqdn, intraPlmnFqdn, sanValues).

    Note:

    • In case of multisite georedundant setups, configure nfInstanceId uniquely for each SEPP site.
    • Ensure that the nfInstanceId configuration in the global section is the same as that in the appProfile section of the NRF client.
    • For example, dockerRegistry: occne-repo-host:5000
  4. Run the following command to install SEPP:
    helm install <helm-release> ocsepp-25.2.100.tgz --namespace <k8s namespace> -f <path to ocsepp_customized_values.yaml>
    Example:
    helm install ocsepp-release ocsepp-25.2.100.tgz --namespace seppsvc -f ocsepp_custom_values_25.2.100.yaml

    Note:

    • Ensure the following:

      <helm-release> must not exceed 20 characters.

      namespace is the deployment namespace used by the helm command.

      custom_values.yaml file name is the name of the custom values yaml file (including its location).

    • Timeout duration: Timeout duration is an optional parameter that can be used in the Helm install command. If it is not specified, the default value is 5m (5 minutes) in Helm3. It sets the time to wait for any individual Kubernetes operation (such as Jobs for hooks). If the helm install command fails at any point to create a Kubernetes object, it internally calls a purge to delete the objects after the timeout value (default: 300s). The timeout value applies not to the overall installation but to the automatic purge on installation failure. See the example after the caution below.

    • In Georedundant deployment, if you want to add or remove a site, refer to Adding a Site to an Existing SEPP Georedundant Site and Removing a Site from an Existing Georedundant Deployment.

    Caution:

    Do not exit the helm install command manually. After running the helm install command, it takes some time to install all the services. In the meantime, do not press "Ctrl+C" to exit the helm install command, as this can lead to anomalous behavior.
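    For example, to extend the timeout to 10 minutes using the standard Helm 3 --timeout flag:

    helm install ocsepp-release ocsepp-25.2.100.tgz --namespace seppsvc -f ocsepp_custom_values_25.2.100.yaml --timeout 10m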

2.2.2.2 Installing Roaming Hub or Hosted SEPP

This section describes how to install Roaming Hub or Hosted SEPP in the Cloud Native Environment.

Note:

This is applicable only for Roaming Hub or Hosted SEPP installation.
  1. Navigate to the Helm directory, which is a part of the Files directory of the unzipped CSAR package. Run the following command:
    cd Files/Helm
  2. Run the following command to verify the SEPP Helm charts in the Helm directory:
    ls

    The output must be:

    • ocsepp-25.2.100.tgz
    • ocsepp-network-policy-25.2.100.tgz
    • ocsepp-servicemesh-config-25.2.100.tgz

  3. Customize the ocsepp_custom_values_25.2.100.yaml file with the required deployment parameters. See Customizing SEPP section to customize the file. For more information about predeployment parameter configurations, see Predeployment Configuration tasks.
  4. Run the following command to install SEPP:
    helm install <helm-release> ocsepp-25.2.100.tgz --namespace <k8s namespace> -f <path to ocsepp_custom_values_roaming_hub_25.2.100.yaml>
    Example:
    helm install ocsepp-release ocsepp-25.2.100.tgz --namespace seppsvc -f ocsepp_custom_values_roaming_hub_25.2.100.yaml

    Note:

    • Ensure the following:

      <helm-release> must not exceed 20 characters.

      namespace is the deployment namespace used by the helm command.

      custom_values.yaml file name is the name of the custom values yaml file (including its location).

    • Timeout duration: Timeout duration is an optional parameter that can be used in the Helm install command. If it is not specified, the default value is 5m (5 minutes) in Helm3. It sets the time to wait for any individual Kubernetes operation (such as Jobs for hooks). If the helm install command fails at any point to create a Kubernetes object, it internally calls a purge to delete the objects after the timeout value (default: 300s). The timeout value applies not to the overall installation but to the automatic purge on installation failure.

    • In Georedundant deployment, if you want to add or remove a site, refer to Adding a Site to an Existing SEPP Georedundant Site and Removing a Site from an Existing Georedundant Deployment.

    Caution:

    Do not exit the helm install command manually. After running the helm install command, it takes some time to install all the services. In the meantime, do not press "Ctrl+C" to exit the helm install command, as this can lead to anomalous behavior.

2.2.3 Postinstallation Tasks

This section explains the postinstallation tasks for SEPP.

2.2.3.1 Verifying Installation

To verify the installation:

  1. To verify the deployment status, open a new terminal and run the following command:
    $ watch kubectl get pods -n <SEPP namespace>

    The pod status gets updated at regular intervals.

  2. Run the following command to verify the installation status:
    helm status <helm-release> -n <SEPP namespace>
    Example:
    helm status ocsepp-release -n seppsvc

    Where,

    • <helm-release> is the Helm release name of SEPP.
    • <SEPP namespace> is the namespace of SEPP deployment.

    If the deployment is successful, then the status is displayed as deployed.

    Sample output:
    
    NAME: ocsepp-release
    LAST DEPLOYED: Sat Jan 11 20:08:03 2025
    NAMESPACE: seppsvc
    STATUS: deployed
    REVISION: 1
  3. Run the following command to check the status of the services:
    kubectl -n <SEPP namespace> get services 
    Example:
      kubectl -n seppsvc get services 
  4. Run the following command to check the status of the pods:
    $ kubectl get pods -n <SEPP namespace> 

    The value in the STATUS column of all the pods must be Running.

    The value in the READY column of all the pods must be n/n, where n is the number of containers in the pod.

    Example:

    $ kubectl get pods -n seppsvc 
    NAME                                                          READY  STATUS   RESTARTS  AGE
    ocsepp-release-appinfo-55b8d4f687-wqtgj                      1/1   Running   0     141m
    ocsepp-release-cn32c-svc-64cd9c555c-ftd8z                    1/1   Running   0     113m
    ocsepp-release-cn32f-svc-dd886fbcc-xr2z8                     1/1   Running   0     4m4s
    ocsepp-release-config-mgr-svc-6c8ddf4c4f-lb4zj               1/1   Running   0     141m
    ocsepp-release-n32-egress-gateway-5b575bbf5f-z5bbx           2/2   Running   0     131m
    ocsepp-release-n32-ingress-gateway-76874c967b-btp46          2/2   Running   0     131m
    ocsepp-release-ocpm-config-65978858dc-t4t5k                  1/1   Running   0     141m
    ocsepp-release-performance-67d76d9d58-llwmt                  1/1   Running   0     141m
    ocsepp-release-plmn-egress-gateway-6dc4759cc7-wn6r8          2/2   Running   0     31m
    ocsepp-release-plmn-ingress-gateway-56c9b45658-hfcxx         2/2   Running   0     131m
    ocsepp-release-pn32c-svc-57774fdc4-2qpvx                     1/1   Running   0     141m
    ocsepp-release-pn32f-svc-586cd87c7b-pxk6m                    1/1   Running   0     3m47s
    ocsepp-release-sepp-nrf-client-nfdiscovery-65747884cd-qblqn  1/1   Running   0     141m
    ocsepp-release-sepp-nrf-client-nfmanagement-5dd6ff98d6-cr7s7 1/1   Running   0     141m
    ocsepp-release-nf-mediation-74bd4dc799-d9ks2                 1/1   Running   0     141m
    ocsepp-release-coherence-svc-54f7987c4b-wv4h7                1/1   Running   0     141m
    
    

Note:

  • Take a backup of the following files that are required during fault recovery:
    • Updated ocsepp_custom_values_<version>.yaml file
    • Updated Helm charts
    • Secrets, certificates, and keys that are used during installation
  • If the installation is not successful or you do not see the status as Running for all the pods, perform the troubleshooting steps provided in Oracle Communications Cloud Native Core, Security Edge Protection Proxy Troubleshooting Guide.
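As an optional convenience (not part of the official procedure), kubectl wait can block until all pods in the namespace report Ready, instead of watching the pod status manually:

kubectl wait --for=condition=Ready pods --all -n seppsvc --timeout=300s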
2.2.3.2 Performing Helm Test

This section describes how to perform a sanity check for the SEPP installation through Helm test. The pods to be checked are selected based on the namespace and label selector configured for the Helm test.

Helm Test is a feature that validates the successful installation of SEPP and determines whether the NF is ready to take traffic.

This test also checks that all the PVCs are in the Bound state under the release namespace and the configured label selector.

Note:

Helm Test can be performed only on Helm3.

Perform the following Helm test procedure:

  1. Configure the Helm test configurations under the global parameters section of the ocsepp_custom_values_<version>.yaml file as follows:
    #helm test configuration
    test:
      imageRepository: occne-repo-host:5000
      nfName: ocsepp
      image:
        name: nf_test
        tag: 25.2.100
        pullPolicy: Always
      config:
        logLevel: INFO
        # Configure timeout in SECONDS.
        # Estimated total time required for SEPP deployment and helm test command completion
        timeout: 240
      resources:
        requests:
          cpu: 1
          memory: 1Gi
          #ephemeral-storage: 70Mi
        limits:
          cpu: 1
          memory: 1Gi
          #ephemeral-storage: 1Gi
      complianceEnable: true
      k8resources:
        - horizontalpodautoscalers/v1
        - deployments/v1
        - configmaps/v1
        - prometheusrules/v1
        - serviceaccounts/v1
        - poddisruptionbudgets/v1
        - roles/v1
        - services/v1
        - rolebindings/v1
    
  2. Run the following Helm test command:
    helm test <helm-release> -n <namespace>

    Where,

    <helm-release> is the release name.

    <namespace> is the deployment namespace where SEPP is installed.

    Example:
    helm test ocsepp-release -n seppsvc
    Sample Output:
    [admusr@cnejac0101-bastion-2 ocsepp-22.4.0-0]$  helm test ocsepp-release -n seppsvc
    NAME: ocsepp-release
    LAST DEPLOYED: Fri Aug 19 04:56:36 2022
    NAMESPACE: seppsvc
    STATUS: deployed
    REVISION: 1
    TEST SUITE:     ocsepp-release
    Last Started:   Fri Aug 19 05:02:03 2022
    Last Completed: Fri Aug 19 05:02:26 2022
    Phase:          Succeeded
       

If the Helm test fails, see Oracle Communications Cloud Native Core, Security Edge Protection Proxy Troubleshooting Guide.

2.2.3.3 Taking the Backup
Take a backup of the following files, which are required during fault recovery:
  • Current custom-values.yaml file from which you are upgrading
  • Updated ocsepp_custom_values_<version>.yaml file
  • Updated Helm charts
  • Secrets, certificates, and keys that are used during installation
  • Updated ocsepp_servicemesh_config_custom_values_<version>.yaml file
2.2.3.4 Alert Configuration

This section describes the measurement-based alert rules configuration for SEPP. The Alert Manager evaluates the Prometheus measurement values reported by microservices against the conditions in the alert rules to trigger alerts.

Note:

The alert file is packaged with the SEPP custom templates. Perform the following steps before configuring the alert file:

  1. Download the SEPP CSAR package from MOS. For more information, see the Downloading SEPP Package section.
  2. Unzip the SEPP CSAR package file to get the ocsepp_alertrules_promha_<version>.yaml and ocsepp_alertrules_<version>.yaml files.
  3. By default, kubernetes_namespace or namespace is configured as the Kubernetes namespace in which SEPP is deployed. The default value is "sepp-namespace". Update it to the namespace in which SEPP is deployed.
  4. Set the namespace parameter in the ocsepp_alertrules_promha_<release version>.yaml file to the SEPP namespace.

    That is, set namespace as <SEPP Namespace>.
    Example:
    namespace="sepp-namespace", where the namespace name is 'sepp-namespace'.
  5. Set the kubernetes_namespace parameter in the ocsepp_alertrules_<release version>.yaml file to the SEPP namespace.

    That is, set kubernetes_namespace as <SEPP Namespace>.
    Example:
    kubernetes_namespace="sepp-namespace", where the kubernetes_namespace name is 'sepp-namespace'.
  6. Set the deployment parameter in the ocsepp_alertrules_promha_<release version>.yaml and ocsepp_alertrules_<release version>.yaml files.

    That is, set app_kubernetes_io_part_of as "<deployment name>".
    Example:
    app_kubernetes_io_part_of="ocsepp", where the deployment name is 'ocsepp'.
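The namespace substitutions in steps 3 to 5 can also be scripted. The following is a hedged sed sketch, assuming the default placeholder value "sepp-namespace" and a deployment namespace of seppsvc:

sed -i 's/sepp-namespace/seppsvc/g' ocsepp_alertrules_promha_<version>.yaml
sed -i 's/sepp-namespace/seppsvc/g' ocsepp_alertrules_<version>.yaml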

2.2.3.4.1 Configuring Alerts for CNE 1.8.x and Previous Versions

The following procedure describes how to configure the SEPP alerts for CNE version 1.8.x and previous versions:

  1. Run the following command to find the config map to configure alerts in the Prometheus server:
    kubectl get configmap -n <Namespace>

    where, <Namespace> is the Prometheus server namespace used in the helm install command.

  2. Run the following command to take a backup of the current config map of the Prometheus server:
    kubectl get configmaps <NAME>-server -o yaml -n <Namespace> > /tmp/tempConfig.yaml
    where, <Namespace> is the Prometheus server namespace used in the helm install command.
    For example, if the chart name is "prometheus-alert", then "<NAME>-server" becomes "prometheus-alert-server". Run the following command to take the backup:
    kubectl get configmaps prometheus-alert-server -o yaml -n prometheus-alert2 > /tmp/tempConfig.yaml
  3. Run the following command to check if alertssepp is present in the tempConfig.yaml file:
    cat /tmp/tempConfig.yaml | grep alertssepp
  4. Run the following command to delete the alertssepp entry from the tempConfig.yaml file, if alertssepp is present:
    sed -i '/etc\/config\/alertssepp/d' /tmp/tempConfig.yaml
    
  5. Run the following command to add the alertssepp entry in the tempConfig.yaml file, if alertssepp is not present:
    sed -i '/rule_files:/a\    \- /etc/config/alertssepp' /tmp/tempConfig.yaml
  6. Run the following command to reload the config map with the modified file:
    kubectl replace configmap <Name> -f /tmp/tempConfig.yaml
  7. Run the following command to add the seppAlertRules.yaml file into the Prometheus config map under the filename of the SEPP alert file:
    kubectl patch configmap <Name> -n <Namespace> --type merge --patch "$(cat <PATH>/seppAlertRules.yaml)"
  8. Restart the prometheus-server pod.
  9. Verify the alerts in the Prometheus GUI.

Note:

Prometheus takes about 20 seconds to apply the updated config map.
2.2.3.4.2 Configuring Alerts for CNE 1.9.x and Higher Versions

The following procedure describes how to configure the SEPP alerts for CNE 1.9.x and higher versions:

  1. Run the following command to apply the Prometheus rules Custom Resource Definition (CRD):
    kubectl apply -f <file_name> -n <sepp namespace>
    Where,
    • <file_name> is the SEPP alerts file
    • <sepp namespace> is the SEPP namespace
    Example:
    $ kubectl apply -f ocsepp_alerting_rules_promha.yaml -n seppsvc
  2. Run the following command to check if SEPP alert file is added to Prometheus rules:
    $ kubectl get prometheusrules --namespace <namespace> 
    Example:
    $ kubectl get prometheusrules --namespace seppsvc
  3. Log in to Prometheus GUI and verify the alerts section.

    Note:

    The Prometheus server automatically reloads the updated config map after approximately 60 seconds. Refresh the Prometheus GUI to confirm that the SEPP alerts have been reloaded.
2.2.3.4.3 Configuring Alerts in OCI

The following procedure describes how to configure the SEPP alerts for OCI. OCI supports metric expressions written in MQL (Metric Query Language) and therefore requires a separate SEPP alert file for configuring alerts in the OCI observability platform.

The following are the steps:

  1. Run the following command to extract the .zip file:
    unzip ocsepp_oci_alertrules_<version>.zip
    The ocsepp_oci and ocsepp_oci_resources folders are available in the zip file.

    Note:

    The zip file is available in the Scripts folder of CSAR package.
  2. In the ocsepp_oci folder, open the notifications.tf file and update the endpoint parameter with the email ID of the user.
  3. In the ocsepp_oci_resources folder, open the notifications.tf file and update the endpoint parameter with the email ID of the user.
  4. Log in to the OCI Console.

    Note:

    For more details about logging in to the OCI, refer to Signing In to the OCI Console.
  5. Open the navigation menu and select Developer Services. The Developer Services window appears on the right pane.
  6. Under the Developer Services, select Resource Manager.
  7. Under Resource Manager, select Stacks. The Stacks window appears.
  8. Click Create Stack.
  9. Select the default My Configuration radio button.
  10. Under Stack configuration, select the folder radio button and upload the ocsepp_oci folder.
  11. Enter the Name and Description and select the compartment.
  12. Select the latest Terraform version from the Terraform version drop-down.
  13. Click Next. The Edit Stack screen appears.
  14. Enter the required inputs to create the SEPP alerts or alarms and click Save and Run Apply.
  15. Verify that the alarms are created in the Alarm Definitions screen (OCI Console > Observability & Management > Monitoring > Alarm Definitions).

    The required inputs are:

    • Alarms Configuration
      • Compartment Name - Choose name of compartment from the drop-down
      • Metric namespace - Metric namespace that the user provided while deploying OCI Adaptors.
      • Topic Name - Any user configurable name. Must contain fewer than 256 characters. Only alphanumeric characters plus hyphens (-) and underscores (_) are allowed.
      • Message Format - Keep it as ONS_OPTIMIZED. (This is pre-populated)
      • Alarm is_enabled - Keep it as True. (This is pre-populated)
  16. Repeat steps 6 to 15 to upload the ocsepp_oci_resources folder. Here, the Metric namespace is pre-populated.

For more details, see Oracle Communications Cloud Native Core, OCI Adaptor Deployment Guide.