2 Installing SEPP
This chapter provides information about installing SEPP using Command Line Interface (CLI) procedures. CLI provides an interface to run various commands required for SEPP deployment processes.
The SEPP installation is supported over the following platforms:
- Oracle Communications Cloud Native Core, Cloud Native Environment (CNE). For more information about CNE, see Oracle Communications Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
- Oracle Cloud Infrastructure (OCI). For more information, see Oracle Communications Cloud Native Core, OCI Deployment Guide.
- General Kubernetes environment.
Note:
SEPP supports fresh installation and can also be upgraded from a previous release. For more information on how to upgrade SEPP, see the Upgrading SEPP section.
You can install either SEPP, or Roaming Hub and Hosted SEPP. The installation procedure comprises prerequisites, predeployment configuration, installation, and postinstallation tasks. You must perform the installation tasks in the same sequence as outlined in the following table:
Table 2-1 SEPP or Roaming Hub and Hosted SEPP Installation Sequence
| Task | Sub tasks | Applicable for SEPP Installation (CNE Deployment) | Applicable for Roaming Hub and Hosted SEPP Installation (CNE Deployment) | Applicable for OCI Deployment |
|---|---|---|---|---|
| Prerequisites: This section describes how to set up the installation environment. | Prerequisites | Yes | Yes | Yes |
| - | Software Requirements | Yes | Yes | Yes |
| - | Environment Setup Requirements | Yes | Yes | Yes |
| - | Resource Requirements | SEPP Resource Requirements | Roaming Hub or Hosted SEPP Resource Requirements | SEPP Resource Requirements |
| Preinstallation Tasks: This section describes how to create namespace and database and configure Kubernetes secrets. | Preinstallation Tasks | Yes | Yes | Yes |
| - | Downloading SEPP package | Yes | Yes | Yes |
| - | Pushing the SEPP Images to Customer Docker Registry | Yes | No | No |
| - | Pushing the Roaming Hub or Hosted SEPP Images to Customer Docker Registry | No | Yes | Yes |
| - | Pushing the SEPP Images to OCI Docker Registry | No | No | Yes |
| - | Verifying and Creating SEPP Namespace | Yes | Yes | Yes |
| - | Configuring Database, Creating Users, and Granting Permissions | Yes | Yes | Yes |
| - | Configuring Kubernetes Secrets for Accessing SEPP Database | Yes | Yes | Yes |
| - | Configuring Kubernetes Secret for Enabling HTTPS/ HTTP over TLS | Yes | Yes | Yes |
| Installation Tasks: This section describes how to download the SEPP package, install SEPP, and verify the installation. | Installation Tasks | - | - | - |
| Installing SEPP / Roaming Hub | Installing SEPP/Roaming Hub/Hosted SEPP | Installing SEPP | Installing Roaming Hub or Hosted SEPP | Installing SEPP |
| Verifying SEPP Installation | Verifying SEPP Installation | Yes | Yes | Yes |
| PodDisruptionBudget Kubernetes Resource | PodDisruptionBudget Kubernetes Resource | Yes | Yes | Yes |
| Customizing SEPP | Customizing SEPP | Yes | Yes | Yes |
| Upgrading SEPP | Upgrading SEPP | Yes | Yes | Yes |
| Rollback SEPP deployment | Rollback SEPP deployment | Yes | Yes | Yes |
| Uninstalling SEPP | Uninstalling SEPP | Yes | Yes | Yes |
| Fault Recovery | Fault Recovery | Yes | Yes | Yes |
2.1 Prerequisites
Before installing and configuring SEPP, ensure that the following prerequisites are met.
2.1.1 Software Requirements
This section lists the software that must be installed before installing SEPP:
Note:
Table 2-2 and Table 2-3 offer a comprehensive list of software necessary for the proper functioning of SEPP during deployment. However, these tables are indicative, and the software used can vary based on the customer's specific requirements and solution.
The Software Requirement column in Table 2-2 and Table 2-3 indicates one of the following:
- Mandatory: Absolutely essential; the software cannot function without it.
- Recommended: Suggested for optimal performance or best practices but not strictly necessary.
- Conditional: Required only under specific conditions or configurations.
- Optional: Not essential; can be included based on specific use cases or preferences.
Table 2-2 Preinstalled Software Versions
| Software | 25.2.2xx | 25.2.1xx | 25.1.2xx | Software Requirement | Usage Description |
|---|---|---|---|---|---|
| Kubernetes | 1.34.1 | 1.33.1 | 1.32.0 | Mandatory | Kubernetes orchestrates scalable, automated NF deployments for high availability and efficient resource utilization. Impact: Preinstallation is required. Without orchestration capabilities, deploying and managing network functions (NFs) can become complex, leading to inefficient resource utilization and potential downtime. |
| Helm | 3.19.1 | 3.18.x | 3.17.1 | Mandatory | Helm, a package manager, simplifies deploying and managing NFs on Kubernetes with reusable, versioned charts for easy automation and scaling. Impact: Preinstallation is required. Without this capability, management of NF versions and configurations becomes time-consuming and error-prone, impacting deployment consistency. |
| Podman | 5.2.2 | 5.2.2 | 5.2.2 | Recommended | Podman is a part of Oracle Linux. It manages and runs containerized NFs without requiring a daemon, offering flexibility and compatibility with Kubernetes. Impact: Preinstallation is required. Without efficient container management, the development and deployment of NFs could become cumbersome, impacting agility. |
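The minimum versions in Table 2-2 can be checked with a small version comparison. The following is an illustrative sketch only, with the installed versions hardcoded; in a live environment they would come from `echo $OCCNE_VERSION`, `kubectl version`, `helm version`, and `podman version`.

```shell
# Hedged sketch of a preinstallation version check; the minimum versions
# are taken from Table 2-2 for release 25.2.2xx. Installed versions are
# hardcoded here for illustration only.
version_ge() {  # true if $1 >= $2 in version order (GNU sort -V)
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

version_ge "1.34.1" "1.34.1" && echo "Kubernetes version OK"
version_ge "3.19.1" "3.19.1" && echo "Helm version OK"
version_ge "5.2.2"  "5.2.2"  && echo "Podman version OK"
```

The same pattern applies to any of the tools listed above; substitute the installed version string reported by the corresponding version command.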
To verify the versions of the preinstalled software, run the following commands:
echo $OCCNE_VERSION
helm version
kubectl version
podman version
Note:
This guide covers the installation instructions for SEPP when Podman is the container platform and Helm is the package manager. For non-CNE deployments, the operator can use commands based on their deployed container runtime environment. For more information, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
Note:
Run the podman version or docker version command based on the container engine installed.
If you are deploying SEPP in a cloud native environment, the following additional software must be installed before installing SEPP.
Table 2-3 Additional Software Versions
| Software | 25.2.2xx | 25.2.1xx | 25.1.2xx | Software Requirement | Usage Description |
|---|---|---|---|---|---|
| AlertManager | 0.28.0 | 0.28.0 | 0.28.0 | Recommended | Alertmanager is a component that works in conjunction with Prometheus to manage and dispatch alerts. It handles the routing and notification of alerts to various receivers. Impact: Not implementing alerting mechanisms can lead to delayed responses to critical issues, potentially resulting in service outages or degraded performance. |
| Calico | 3.30.3 | 3.29.3 | 3.29.1 | Recommended | Calico provides networking and security for NFs in Kubernetes, ensuring scalable, policy-driven connectivity. Impact: Calico is a popular Container Network Interface (CNI), and a CNI is mandatory for the functioning of 5G NFs. Without a CNI plugin, the network could witness security vulnerabilities and inadequate traffic management, impacting the reliability of NF communications. |
| cinder-csi-plugin | 1.33.0 | 1.32.0 | 1.32.0 | Mandatory | The Cinder CSI (Container Storage Interface) plugin is used for provisioning and managing block storage in Kubernetes. It is often used in OpenStack environments to provide persistent storage for containerized applications. Impact: Without the CSI plugin, provisioning block storage for NFs would be manual and inefficient, complicating storage management. |
| containerd | 2.1.4 | 2.0.5 | 1.7.24 | Recommended | Containerd manages container lifecycles to run NFs efficiently in Kubernetes. Impact: A lack of a reliable container runtime could lead to performance issues and instability in NF operations. |
| CoreDNS | 1.12.0 | 1.12.0 | 1.11.13 | Recommended | CoreDNS is the DNS server in Kubernetes, which provides DNS resolution services within the cluster. Impact: DNS is an essential part of deployment. Without proper service discovery, NFs would struggle to communicate with each other, leading to connectivity issues and operational failures. |
| Fluentd | 1.17.1 | 1.17.1 | 1.17.1 | Recommended | Fluentd is an open source data collector that streamlines data collection and consumption, ensuring improved data utilization and comprehension. Impact: Not utilizing centralized logging can hinder the ability to track NF activity and troubleshoot issues effectively, complicating maintenance and support. |
| Grafana | 7.5.17 (OCI Grafana) | 7.5.17 (OCI Grafana) | 9.5.3 | Recommended | Grafana is a popular open source platform for monitoring and observability. It provides a user-friendly interface for creating and viewing dashboards based on various data sources. Impact: Without visualization tools, interpreting complex metrics and gaining insights into NF performance would be cumbersome, affecting effective management. |
| Jaeger | 1.72.0 | 1.69.0 | 1.65.0 | Recommended | Jaeger provides distributed tracing for 5G NFs, enabling performance monitoring and troubleshooting across microservices. Impact: Not utilizing distributed tracing may hinder the ability to diagnose performance bottlenecks, making it challenging to optimize NF interactions and user experience. |
| Kyverno | 1.15.0 | 1.13.4 | 1.13.4 | Recommended | Kyverno is a Kubernetes policy engine that allows you to manage and enforce policies for resource configurations within a Kubernetes cluster. Impact: Without policy enforcement, there could be misconfigurations, resulting in security risks and instability in NF operations, affecting reliability. |
| MetalLB | 0.15.2 | 0.14.4 | 0.14.4 | Recommended | MetalLB is used as a load balancing solution in CNE, which is mandatory for the solution to work. MetalLB provides load balancing and external IP management for 5G NFs in Kubernetes environments. Impact: Without load balancing, traffic distribution among NFs may be inefficient, leading to potential bottlenecks and service degradation. |
| metrics-server | 0.7.2 | 0.7.2 | 0.7.2 | Recommended | Metrics server is used in Kubernetes for collecting resource usage data from pods and nodes. Impact: Without resource metrics, auto-scaling and resource optimization would be limited, potentially leading to resource contention or underutilization. |
| Multus | 4.2.1-thick | 4.1.3 | 4.1.3 | Recommended | Multus enables multiple network interfaces in Kubernetes pods, allowing custom configurations and isolated paths for advanced use cases such as NF deployments, ultimately supporting traffic segregation. Impact: Without this capability, connecting NFs to multiple networks could be limited, impacting network performance and isolation. |
| OpenSearch | 2.18.0 | 2.19.1 | 2.15.0 | Recommended | OpenSearch provides scalable search and analytics for 5G NFs, enabling efficient data exploration and visualization. Impact: Without a robust analytics solution, there would be difficulties in identifying performance issues and optimizing NF operations, affecting overall service quality. |
| OpenSearch Dashboard | 2.18.0 | 2.19.1 | 2.15.0 | Recommended | OpenSearch Dashboard visualizes and analyzes data for 5G NFs, offering interactive insights and custom reporting. Impact: Without visualization capabilities, understanding NF performance metrics and trends would be difficult, limiting informed decision making. |
| Prometheus | 3.6.0 | 3.4.1 | 3.2.0 | Mandatory | Prometheus is a popular open source monitoring and alerting toolkit. It collects and stores metrics from various sources and allows for alerting and querying. Impact: Not employing this monitoring solution could result in a lack of visibility into NF performance, making it difficult to troubleshoot issues and optimize resource usage. |
| prometheus-kube-state-metric | 2.16.0 | 2.16.0 | 2.15.0 | Recommended | Kube-state-metrics is a service that generates metrics about the state of various resources in a Kubernetes cluster. It is commonly used for monitoring and alerting purposes. Impact: Without these metrics, monitoring the health and performance of NFs could be challenging, making it harder to proactively address issues. |
| prometheus-node-exporter | 1.9.1 | 1.9.1 | 1.8.2 | Recommended | Prometheus Node Exporter collects hardware and OS-level metrics from Linux hosts. Impact: Without node-level metrics, visibility into infrastructure performance would be limited, complicating the identification of resource bottlenecks. |
| Prometheus Operator | 0.83.0 | 0.83.0 | 0.80.1 | Recommended | The Prometheus Operator is used for managing Prometheus monitoring systems in Kubernetes. Prometheus Operator simplifies the configuration and management of Prometheus instances. Impact: Not using this operator could complicate the setup and management of monitoring solutions, increasing the risk of missed performance insights. |
| rook | 1.16.7 | 1.16.7 | 1.16.6 | Mandatory | Rook is the Ceph orchestrator for Kubernetes that provides storage solutions. It is used in the BareMetal CNE solution. Impact: Not utilizing Rook could increase the complexity of deploying and managing Ceph, making it difficult to scale storage solutions in a Kubernetes environment. |
| snmp-notifier | 2.0.0 | 2.0.0 | 1.6.1 | Recommended | snmp-notifier sends SNMP alerts for 5G NFs, providing real-time notifications for network events. Impact: Without SNMP notifications, proactive monitoring of NF health and performance could be compromised, delaying response to critical issues. |
| Velero | 1.13.2 | 1.13.2 | 1.13.2 | Recommended | Velero backs up and restores Kubernetes clusters for 5G NFs, ensuring data protection and disaster recovery. Impact: Without backup and recovery capabilities, customers would witness a risk of data loss and extended downtime, requiring a full cluster reinstall in case of failure or upgrade. |
Important:
If you are using NRF with SEPP, install it before proceeding with the SEPP installation. SEPP 25.2.2xx supports NRF 25.2.2xx.
2.1.2 Environment Setup Requirements
This section describes the environment setup requirements for installing SEPP.
2.1.2.1 Client Machine Requirements
This section describes the requirements for the client machine, that is, the machine used to run deployment commands.
- Helm repository configured:
  - To add a Helm repository, run the following command:
    helm repo add <helm-repo-name> <helm-repo-address>
    Where, <helm-repo-name> is the name of the Helm repository and <helm-repo-address> is the URL of the Helm repository.
  - To verify that the Helm repository has been added successfully, run the following command:
    helm repo list
    The output must show the added Helm repository in the list.
- Network access to the Helm repository and Docker image registry.
- Network access to the Kubernetes cluster.
- Required environment settings to run the kubectl, podman, or docker commands. The environment must have privileges to create a namespace in the Kubernetes cluster.
- Helm client installed with the push plugin. Configure the environment so that the helm install command deploys the software in the Kubernetes cluster.
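A minimal sketch of a client-machine sanity check, assuming a POSIX shell; it only verifies that the command-line tools named above are on the PATH, not that the cluster or repositories are reachable.

```shell
# Illustrative check: confirm the deployment tools are installed on the
# client machine. Tool names are from the requirements above.
check_tools() {
  missing=0
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || { echo "missing: $tool"; missing=1; }
  done
  return $missing
}

check_tools helm kubectl podman && echo "client machine ready" \
  || echo "install the missing tools before proceeding"
```

Cluster access and registry reachability still need to be verified separately, as described in the Network Access Requirement section.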
2.1.2.2 Network Access Requirement
The Kubernetes cluster hosts must have network access to the following repositories:
- Local Helm repository: It contains the SEPP Helm charts.
  To check whether the Kubernetes cluster hosts have network access to the local Helm repository, run the following command:
  helm repo update
- Local Docker image registry: It contains the SEPP Docker images.
  To check whether the Kubernetes cluster hosts can access the local Docker image registry, pull any image with an image tag using either of the following commands:
  docker pull <docker-repo>/<image-name>:<image-tag>
  podman pull <podman-repo>/<image-name>:<image-tag>
  Where:
  - <docker-repo> is the IP address or host name of the Docker registry.
  - <image-name> is the Docker image name.
  - <image-tag> is the tag assigned to the Docker image used for the SEPP pod.
  Example:
  docker pull CUSTOMER_REPO/oc-app-info:25.2.200
  podman pull occne-repo-host:5000/occnp/oc-app-info:25.2.200
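For clarity, the pull reference above is simply the registry path, image name, and tag joined together. The following illustrative snippet assembles it from the example values used in this section; the registry path is an example, not a required setting.

```shell
# Illustrative only: how the image reference in the pull commands above
# is assembled. Values are the examples from this section.
DOCKER_REPO="occne-repo-host:5000/occnp"
IMAGE_NAME="oc-app-info"
IMAGE_TAG="25.2.200"
echo "${DOCKER_REPO}/${IMAGE_NAME}:${IMAGE_TAG}"
```

Substitute your own registry host, image name, and tag when running the pull commands.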
2.1.2.3 Server or Space Requirement
For information about server or space requirements, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
2.1.2.4 CNE Requirement
This section is applicable only if you are installing SEPP on Cloud Native Environment (CNE).
SEPP supports CNE 25.2.2xx, CNE 25.2.1xx, and CNE 25.1.2xx.
To check the CNE version, run the following command:
echo $OCCNE_VERSION
For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
2.1.2.5 cnDBTier Requirement
SEPP supports cnDBTier 25.2.2xx, 25.2.1xx, and 25.1.2xx. cnDBTier must be configured and running before installing SEPP. For more information about cnDBTier installation, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
For more information about the cnDBTier customizations required for SEPP, see the ocsepp_dbtier_<cnDBTier_version>_custom_values_<SEPP_version>.yaml file.
To install or upgrade cnDBTier using the ocsepp_dbtier_<version>_custom_values_<version>.yaml file, run the following command:
helm upgrade <release-name> <chart-path> -f <cndb-custom-values.yaml> -n <namespace>
For example:
helm upgrade mysql-cluster occndbtier/ -f ocsepp_dbtier_25.2.200_custom_values_25.2.200.yaml -n ocsepp-cndb
For more information about cnDBTier installation and upgrade procedure, see Oracle Communications Cloud Native Core, cnDBTier Installation,
Upgrade, and Fault Recovery Guide.
Note:
In a georedundant deployment, a dedicated cnDBTier must be installed and configured for each SEPP site.
Note:
Starting from release 25.1.100, the ndb_allow_copying_alter_table parameter in cnDBTier must be set to OFF.
2.1.2.6 OSO Requirement
SEPP supports Operations Services Overlay (OSO) 25.2.2xx for common operation services (Prometheus and components such as alertmanager, pushgateway) on a Kubernetes cluster, which does not have these common services. For more information about OSO installation, see Oracle Communications Cloud Native Core, Operations Services Overlay Installation and Upgrade Guide.
2.1.2.7 CNC Console Requirements
SEPP supports CNC Console 25.2.2xx to configure and manage Network Functions. For more information, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.
2.1.2.8 OCCM Requirements
SEPP supports OCCM 25.2.2xx. To support automated certificate lifecycle management, SEPP integrates with Oracle Communications Cloud Native Core, Certificate Management (OCCM) in compliance with 3GPP security recommendations. For more information about OCCM in SEPP, see the "Support for Automated Certificate Lifecycle Management" section in Oracle Communications Cloud Native Core, Security Edge Protection Proxy User Guide.
For more information about OCCM, see the following guides:
- Oracle Communications Cloud Native Core, Certificate Manager Installation, Upgrade, and Fault Recovery Guide
- Oracle Communications Cloud Native Core, Certificate Manager User Guide
2.1.3 SEPP Resource Requirements
This section lists the resource requirements to install and run SEPP.
Note:
The performance and capacity of the SEPP system may vary based on the call model, Feature/Interface configuration, and underlying CNE and hardware environment.
2.1.3.1 SEPP Services
The following table lists resource requirement for SEPP Services:
Table 2-4 SEPP Services
| Service Name | CPU | CPU | Memory(GB) | Memory(GB) | POD | POD | Ephemeral Storage | Ephemeral Storage |
|---|---|---|---|---|---|---|---|---|
| Min/Max | Min | Max | Min | Max | Min | Max | Min(Gi) | Max(Gi) |
| Helm Test | 1 | 1 | 1 | 1 | 1 | 1 | 70Mi | 1 |
| <helm-release-name>-n32-ingress-gateway | 6 | 6 | 5 | 5 | 7 | 7 | 1 | 2 |
| <helm-release-name>-n32-egress-gateway | 5 | 5 | 5 | 5 | 7 | 7 | 1 | 1 |
| <helm-release-name>-plmn-ingress-gateway | 5 | 5 | 5 | 5 | 7 | 7 | 1 | 2 |
| <helm-release-name>-plmn-egress-gateway | 5 | 5 | 5 | 5 | 7 | 7 | 1 | 1 |
| <helm-release-name>-pn32f-svc | 5 | 5 | 8 | 8 | 7 | 7 | 2 | 2 |
| <helm-release-name>-cn32f-svc | 5 | 5 | 8 | 8 | 7 | 7 | 2 | 2 |
| <helm-release-name>-cn32c-svc | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 1 |
| <helm-release-name>-pn32c-svc | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 1 |
| <helm-release-name>-config-mgr-svc | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 |
| <helm-release-name>-sepp-nrf-client-nfdiscovery | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 1 |
| <helm-release-name>-sepp-nrf-client-nfmanagement | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 1 |
| <helm-release-name>-ocpm-config | 1 | 1 | 1 | 1 | 2 | 2 | 1 | 1 |
| <helm-release-name>-appinfo | 1 | 1 | 1 | 2 | 2 | 2 | 1 | 1 |
| <helm-release-name>-perf-info | 2 | 2 | 4 | 4 | 2 | 2 | 1 | 1 |
| <helm-release-name>-nf-mediation | 8 | 8 | 8 | 8 | 2 | 2 | NA | NA |
| <helm-release-name>-coherence-svc | 4 | 4 | 4 | 4 | 1 | 1 | 2 | 2 |
| <helm-release-name>-alternate-route | 2 | 2 | 4 | 4 | 2 | 2 | NA | NA |
| Total | 60 | 60 | 70 | 70 | 63 | 63 | 17.7 Gi | 20 |
- #: <helm-release-name> will be prefixed in each microservice name. Example: if Helm release name is "ocsepp-release", then cn32f-svc microservice name will be "ocsepp-release-cn32f-svc".
- Init-service container's and Common Configuration Client Hook's resources are not counted because the container gets terminated after initialization completes.
- Helm Hooks Jobs: These are pre and post jobs that are invoked during installation, upgrade, rollback, and uninstallation of the deployment. These are short span jobs that get terminated after the deployment completion.
- Helm Test Job: This job is run on demand when the Helm test command is initiated. This job runs the helm test and stops after completion. These are short-lived jobs that get terminated after the deployment is done. They are not part of active deployment resource, but are considered only during helm test procedures.
- If you enable Message Feed feature at Ingress Gateway and Egress Gateway, approximately 33% pod capacity is impacted.
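As a quick sanity check (illustrative only), the Total row of Table 2-4 can be recomputed from the per-service CPU (Max) values; the same approach works for the memory and pod columns.

```shell
# Cross-check of Table 2-4: sum the per-service CPU (Max) values.
# Values are copied from the table; Helm Test is included in the total.
total=$(awk -F, '{sum += $2} END {print sum}' <<'EOF'
helm-test,1
n32-ingress-gateway,6
n32-egress-gateway,5
plmn-ingress-gateway,5
plmn-egress-gateway,5
pn32f-svc,5
cn32f-svc,5
cn32c-svc,2
pn32c-svc,2
config-mgr-svc,2
sepp-nrf-client-nfdiscovery,2
sepp-nrf-client-nfmanagement,2
ocpm-config,1
appinfo,1
perf-info,2
nf-mediation,8
coherence-svc,4
alternate-route,2
EOF
)
echo "CPU (Max) total: $total"   # matches the Total row: 60
```

This is useful when adjusting per-service resources in the custom values file, to keep the cluster-level capacity plan consistent.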
2.1.3.2 Upgrade
Following is the resource requirement for upgrading SEPP:
Table 2-5 Upgrade
| Service Name | CPU | CPU | Memory (GB) | Memory (GB) | POD | POD | Ephemeral Storage | Ephemeral Storage |
|---|---|---|---|---|---|---|---|---|
| Min/Max | Min | Max | Min | Max | Min | Max | Min(Gi) | Max(Gi) |
| Helm test | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Helm Hook | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| <helm-release-name>-n32-ingress-gateway | 6 | 6 | 5 | 5 | 0 | 0 | 1 | 1 |
| <helm-release-name>-n32-egress-gateway | 5 | 5 | 5 | 5 | 0 | 0 | 1 | 1 |
| <helm-release-name>-plmn-ingress-gateway | 5 | 5 | 5 | 5 | 0 | 0 | 1 | 1 |
| <helm-release-name>-plmn-egress-gateway | 5 | 5 | 5 | 5 | 0 | 0 | 1 | 1 |
| <helm-release-name>-pn32f-svc | 5 | 5 | 8 | 8 | 0 | 0 | 2 | 2 |
| <helm-release-name>-cn32f-svc | 5 | 5 | 8 | 8 | 0 | 0 | 2 | 2 |
| <helm-release-name>-cn32c-svc | 2 | 2 | 2 | 2 | 0 | 0 | 1 | 1 |
| <helm-release-name>-pn32c-svc | 2 | 2 | 2 | 2 | 0 | 0 | 1 | 1 |
| <helm-release-name>-config-mgr-svc | 2 | 2 | 2 | 2 | 0 | 0 | 1 | 1 |
| <helm-release-name>-sepp-nrf-client-nfdiscovery | 1 | 1 | 2 | 2 | 0 | 0 | 1 | 1 |
| <helm-release-name>-sepp-nrf-client-nfmanagement | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 |
| <helm-release-name>-ocpm-config | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 |
| <helm-release-name>-appinfo | 1 | 1 | 1 | 2 | 0 | 0 | 1 | 1 |
| <helm-release-name>-perf-info | 2 | 2 | 0.2 | 4 | 0 | 0 | 1 | 1 |
| <helm-release-name>-nf-mediation | 8 | 8 | 8 | 8 | 0 | 0 | 1 | 1 |
| <helm-release-name>-coherence-svc | 1 | 1 | 2 | 2 | 0 | 0 | NA | NA |
| <helm-release-name>-alternate-route | 2 | 2 | 4 | 4 | 0 | 0 | NA | NA |
| Total | 54 | 54 | 61.2 | 66 | 0 | 0 | 17 Gi | 17 Gi |
Note:
- MaxSurge is set to 0.
- <helm-release-name> is the Helm release name. For example, if the Helm release name is "ocsepp-release", then the cn32f-svc microservice name is "ocsepp-release-cn32f-svc".
2.1.3.3 Common Services Container
Following is the resource requirement for Common Services Container:
Table 2-6 Common Services Container
| Container Name | CPU | Memory (GB) | Kubernetes Init Container |
|---|---|---|---|
| init-service | 1 | 1 | Y |
| common_config_hook | 1 | 1 | N |
| mediation_hook | 2 | 2 | N |
- Init Container service: Ingress or Egress Gateway services use this container to fetch the OCSEPP private key or certificate and the CA root certificate for TLS during startup.
- Common Configuration Hook: It is used for creating the database for common service configuration.
2.1.3.4 Service Mesh Sidecar
SEPP leverages the Platform Service Mesh (for example, Aspen Service Mesh) for all internal and external TLS communication. If ASM sidecar injection is enabled during SEPP deployment or upgrade, this container is injected into each pod (or into selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists.
Table 2-7 Service Mesh Sidecar
| Service Name | CPU | CPU | Memory (GB) | Memory (GB) | Ephemeral Storage | Ephemeral Storage |
|---|---|---|---|---|---|---|
| Min/Max | Min | Max | Min | Max | Min(Mi) | Max(Gi) |
| <helm-release-name>-n32-ingress-gateway | 1 | 1 | 1 | 1 | 70 | 1 |
| <helm-release-name>-n32-egress-gateway | 1 | 1 | 1 | 1 | 70 | 1 |
| <helm-release-name>-plmn-ingress-gateway | 1 | 1 | 1 | 1 | 70 | 1 |
| <helm-release-name>-plmn-egress-gateway | 1 | 1 | 1 | 1 | 70 | 1 |
| <helm-release-name>-pn32f-svc | 1 | 1 | 1 | 1 | 70 | 1 |
| <helm-release-name>-cn32f-svc | 1 | 1 | 1 | 1 | 70 | 1 |
| <helm-release-name>-cn32c-svc | 1 | 1 | 1 | 1 | 70 | 1 |
| <helm-release-name>-pn32c-svc | 1 | 1 | 1 | 1 | 70 | 1 |
| <helm-release-name>-config-mgr-svc | 1 | 1 | 1 | 1 | 70 | 1 |
| <helm-release-name>-sepp-nrf-client-nfdiscovery | 1 | 1 | 1 | 1 | 70 | 1 |
| <helm-release-name>-sepp-nrf-client-nfmanagement | 1 | 1 | 1 | 1 | 70 | 1 |
| <helm-release-name>-ocpm-config | 1 | 1 | 1 | 1 | 70 | 1 |
| <helm-release-name>-appinfo | 1 | 1 | 1 | 1 | 70 | 1 |
| <helm-release-name>-perf-info | 1 | 1 | 1 | 1 | 70 | 1 |
| <helm-release-name>-nf-mediation | 1 | 1 | 1 | 1 | 70 | 1 |
| <helm-release-name>-coherence-svc | 1 | 1 | 1 | 1 | NA | NA |
| <helm-release-name>-alternate-route | 1 | 1 | 1 | 1 | NA | NA |
| Total | 17 | 17 | 17 | 17 | 1050 Mi | 15 Gi |
Note:
<helm-release-name> is the Helm release name. For example, if the Helm release name is "ocsepp-release", then the cn32f-svc microservice name is "ocsepp-release-cn32f-svc".
2.1.3.5 Debug Tool Container
The Debug Tool provides third-party troubleshooting tools for debugging runtime issues in a lab environment. If Debug Tool Container injection is enabled during SEPP deployment or upgrade, this container is injected into each SEPP pod (or into selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists. For more information about configuring the Debug Tool, see Oracle Communications Cloud Native Core, Security Edge Protection Proxy Troubleshooting Guide.
Table 2-8 Debug Tool Container
| Service Name | CPU | CPU | Memory (GB) | Memory (GB) | Ephemeral Storage | Ephemeral Storage |
|---|---|---|---|---|---|---|
| Min/Max | Min | Max | Min(Gi) | Max(Gi) | Min(Mi) | Max(Mi) |
| <helm-release-name>-n32-ingress-gateway | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-n32-egress-gateway | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-plmn-ingress-gateway | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-plmn-egress-gateway | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-pn32f-svc | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-cn32f-svc | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-cn32c-svc | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-pn32c-svc | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-config-mgr-svc | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-sepp-nrf-client-nfdiscovery | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-sepp-nrf-client-nfmanagement | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-ocpm-config | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-appinfo | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-perf-info | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-nf-mediation | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-coherence-svc | NA | NA | NA | NA | NA | NA |
| <helm-release-name>-alternate-route | 0.5 | 1 | 4 | 4 | NA | NA |
| Total | 8 | 16 | 64 | 64 | 7680 Mi | 7680 Mi |
Note:
<helm-release-name> is the Helm release name. For example, if the Helm release name is "ocsepp-release", then the plmn-egress-gateway microservice name is "ocsepp-release-plmn-egress-gateway".
2.1.3.6 SEPP Hooks
Following is the resource requirement for SEPP Hooks:
Table 2-9 SEPP Hooks
| Hook Name | CPU | CPU | Memory (GB) | Memory (GB) |
|---|---|---|---|---|
| Min/Max | Min | Max | Min | Max |
| <helm-release-name>-update-db-pre-install | 1 | 2 | 1 | 2 |
| <helm-release-name>-update-db-post-install | 1 | 2 | 1 | 2 |
| <helm-release-name>-update-db-pre-upgrade | 1 | 2 | 1 | 2 |
| <helm-release-name>-update-db-post-upgrade | 1 | 2 | 1 | 2 |
| <helm-release-name>-update-db-pre-rollback | 1 | 2 | 1 | 2 |
| <helm-release-name>-update-db-post-rollback | 1 | 2 | 1 | 2 |
| <helm-release-name>-pn32f-svc-pre-install | 1 | 2 | 1 | 2 |
| <helm-release-name>-pn32f-svc-post-install | 1 | 2 | 1 | 2 |
| <helm-release-name>-pn32f-svc-pre-upgrade | 1 | 2 | 1 | 2 |
| <helm-release-name>-pn32f-svc-post-upgrade | 1 | 2 | 1 | 2 |
| <helm-release-name>-pn32f-svc-pre-rollback | 1 | 2 | 1 | 2 |
| <helm-release-name>-pn32f-svc-post-rollback | 1 | 2 | 1 | 2 |
| <helm-release-name>-cn32f-svc-pre-install | 1 | 2 | 1 | 2 |
| <helm-release-name>-cn32f-svc-post-install | 1 | 2 | 1 | 2 |
| <helm-release-name>-cn32f-svc-pre-upgrade | 1 | 2 | 1 | 2 |
| <helm-release-name>-cn32f-svc-post-upgrade | 1 | 2 | 1 | 2 |
| <helm-release-name>-cn32f-svc-pre-rollback | 1 | 2 | 1 | 2 |
| <helm-release-name>-cn32f-svc-post-rollback | 1 | 2 | 1 | 2 |
| <helm-release-name>-cn32c-svc-pre-install | 1 | 2 | 1 | 2 |
| <helm-release-name>-cn32c-svc-post-install | 1 | 2 | 1 | 2 |
| <helm-release-name>-cn32c-svc-pre-upgrade | 1 | 2 | 1 | 2 |
| <helm-release-name>-cn32c-svc-post-upgrade | 1 | 2 | 1 | 2 |
| <helm-release-name>-cn32c-svc-pre-rollback | 1 | 2 | 1 | 2 |
| <helm-release-name>-cn32c-svc-post-rollback | 1 | 2 | 1 | 2 |
| <helm-release-name>-pn32c-svc-pre-install | 1 | 2 | 1 | 2 |
| <helm-release-name>-pn32c-svc-post-install | 1 | 2 | 1 | 2 |
| <helm-release-name>-pn32c-svc-pre-upgrade | 1 | 2 | 1 | 2 |
| <helm-release-name>-pn32c-svc-post-upgrade | 1 | 2 | 1 | 2 |
| <helm-release-name>-pn32c-svc-pre-rollback | 1 | 2 | 1 | 2 |
| <helm-release-name>-pn32c-svc-post-rollback | 1 | 2 | 1 | 2 |
| <helm-release-name>-config-mgr-svc-pre-install | 1 | 2 | 1 | 2 |
| <helm-release-name>-config-mgr-svc-post-install | 1 | 2 | 1 | 2 |
| <helm-release-name>-config-mgr-svc-pre-upgrade | 1 | 2 | 1 | 2 |
| <helm-release-name>-config-mgr-svc-post-upgrade | 1 | 2 | 1 | 2 |
| <helm-release-name>-config-mgr-svc-pre-rollback | 1 | 2 | 1 | 2 |
| <helm-release-name>-config-mgr-svc-post-rollback | 1 | 2 | 1 | 2 |
Note:
<helm-release-name> is the Helm release name.
2.1.3.7 CNC Console
Oracle Communications Cloud Native Configuration Console (CNC Console) is a Graphical User Interface (GUI) for NFs and Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) common services.
For information about CNC Console resources required by SEPP, see "CNC Console Resource Requirement" section in Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.
2.1.3.8 cnDBTier
cnDBTier is the geodiverse database layer provided as part of the Cloud Native Environment. It provides persistent storage for the state data and subscriber data in a cloud environment. Any Kubernetes environment with dynamic Kubernetes storage supports cnDBTier installation.
Table 2-10 cnDBTier
| Detailed DBTier Resources | vCPU | Memory (GB) | Max Replicas | Total vCPU | Total Memory (GB) | PVC Storage (GB) | Ephemeral Storage (GB) |
|---|---|---|---|---|---|---|---|
| SQL - Replication (ndbmysqld) StatefulSet | 3 | 4 | 4 | 12 | 16 | 60 | 0.1 |
| MGMT (ndbmgmd) StatefulSet | 3 | 4 | 2 | 6 | 8 | 15 | 0.1 |
| DB (ndbmtd) StatefulSet | 4 | 12 | 4 | 16 | 48 | 60 | 0.1 |
| db-backup-manager-svc | 1 | 1 | 1 | 1 | 1 | 0 | 0.1 |
| db-replication-svc | 1 | 2 | 1 | 1 | 2 | 60 | 0.1 |
| db-monitor-svc | 4 | 4 | 1 | 4 | 4 | 0 | 0.1 |
| db-connectivity-service | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| SQL - Access (ndbappmysqld) StatefulSet | 5 | 10 | 2 | 10 | 20 | 20 | 0.1 |
| grrecoveryresources | 2 | 12 | 2 | 4 | 24 | 0 | 0 |
| Total | - | - | 17 | 54 | 123 | 215 | 0.7 |
Note:
- Node profiles in the above table are for two-site replication cnDBTier clusters. Modify the ndbmysqld and Replication Service pods based on the number of georeplication sites.
- If any service requires vertical scaling of its PVC, see the respective subsection in the "Vertical Scaling" section in Oracle Communications Cloud Native Core, cnDBTier User Guide.
- PVC shrinking (downsizing) is not supported. It is recommended to retain the existing vertically scaled-up PVC sizes, even though cnDBTier is rolled back to previous releases.
For information about cnDBTier resources required by SEPP, see "Resource Requirement" section in Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
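As a quick cross-check, the Total vCPU and Total Memory columns in Table 2-10 follow from the per-pod figures (total = per-pod value × max replicas). A minimal sketch, using only the values from the table:

```shell
# Cross-check of Table 2-10: Total vCPU / Total Memory = per-pod value x max replicas.
# Each entry is "vcpu:memory_gb:max_replicas", taken row by row from the table.
PROFILES="3:4:4 3:4:2 4:12:4 1:1:1 1:2:1 4:4:1 0:0:0 5:10:2 2:12:2"

total_vcpu=0
total_mem=0
for p in $PROFILES; do
  vcpu="${p%%:*}"        # per-pod vCPU
  rest="${p#*:}"
  mem="${rest%%:*}"      # per-pod memory (GB)
  replicas="${rest#*:}"  # max replicas
  total_vcpu=$((total_vcpu + vcpu * replicas))
  total_mem=$((total_mem + mem * replicas))
done

echo "$total_vcpu $total_mem"   # 54 123, matching the Total row
```

This kind of check is useful when adjusting replica counts for additional georeplication sites, since the totals must be recomputed accordingly.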
2.1.3.8.1 cnDBTier Sidecars
The following table indicates the sidecars for cnDBTier services.
Table 2-11 Sidecars per cnDBTier Service
| Service Name | init-sidecar | db-executor-svc | init-discover-sql-ips | db-infra-monitor-svc |
|---|---|---|---|---|
| MGMT (ndbmgmd) | No | No | No | Yes |
| DB (ndbmtd) | No | Yes | No | Yes |
| SQL (ndbmysqld) | Yes | No | No | Yes |
| SQL (ndbappmysqld) | Yes | No | No | Yes |
| Monitor Service (db-monitor-svc) | No | No | No | No |
| Backup Manager Service (db-backup-manager-svc) | No | No | No | No |
| Replication Service | No | No | Yes | No |
Table 2-12 cnDBTier Additional Containers
| Sidecar | CPU/Pod | CPU/Pod | Memory/Pod (in Gi) | Memory/Pod (in Gi) | PVC Size (in Gi) | PVC Size (in Gi) | Ephemeral Storage | Ephemeral Storage |
|---|---|---|---|---|---|---|---|---|
| Min/Max | Min | Max | Min | Max | PVC1 | PVC2 | Min (Mi) | Max (Gi) |
| db-executor-svc | 1 | 1 | 2 | 2 | NA | NA | 90 | 1 |
| init-sidecar | 0.1 | 0.1 | 0.25 | 0.25 | NA | NA | 90 | 1 |
| init-discover-sql-ips | 0.2 | 0.2 | 0.5 | 0.5 | NA | NA | 90 | 1 |
| db-infra-monitor-svc | 0.1 | 0.1 | 0.25 | 0.25 | NA | NA | 90 | 1 |
2.1.3.8.2 Service Mesh Sidecar
If SEPP is deployed with ASM, the user must add the following annotation in the ocsepp_dbtier_CNDBTIER_VERSION_custom_values_SEPP_VERSION.yaml file.
Table 2-13 Default Values for Service Mesh Specific Annotations
| Parameter Name | Annotations |
|---|---|
| db-monitor-svc.podAnnotations | traffic.sidecar.istio.io/excludeInboundPorts: "8081,8080" |
For example:
db-monitor-svc:
  podAnnotations:
    traffic.sidecar.istio.io/excludeInboundPorts: "8081,8080"
2.1.4 Roaming Hub Resource Requirements
2.1.4.1 Roaming Hub SEPP Services
The following table lists the resource requirements for SEPP services for Roaming Hub:
Table 2-14 SEPP Services for Roaming Hub
| Service Name | CPU | CPU | Memory (GB) | Memory (GB) | POD | POD | Ephemeral Storage | Ephemeral Storage |
|---|---|---|---|---|---|---|---|---|
| Min/Max | Min | Max | Min | Max | Min | Max | Min(Mi) | Max(Gi) |
| Helm Test | 1 | 1 | 1 | 1 | 1 | 1 | 70 | 1 |
| <helm-release-name>-n32-ingress-gateway | 6 | 6 | 5 | 5 | 2 | 2 | 1 | 2 |
| <helm-release-name>-n32-egress-gateway | 5 | 5 | 5 | 5 | 2 | 2 | 1 | 1 |
| <helm-release-name>-plmn-ingress-gateway | 5 | 5 | 5 | 5 | 2 | 2 | 1 | 2 |
| <helm-release-name>-plmn-egress-gateway | 5 | 5 | 5 | 5 | 2 | 2 | 2 | 2 |
| <helm-release-name>-pn32f-svc | 5 | 5 | 8 | 8 | 2 | 2 | 2 | 2 |
| <helm-release-name>-cn32f-svc | 5 | 5 | 8 | 8 | 2 | 2 | 1 | 1 |
| <helm-release-name>-cn32c-svc | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 1 |
| <helm-release-name>-pn32c-svc | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 1 |
| <helm-release-name>-config-mgr-svc | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 |
| <helm-release-name>-perf-info | 2 | 2 | 0.2 | 4 | 2 | 2 | 1 | 1 |
| <helm-release-name>-nf-mediation | 8 | 8 | 8 | 8 | 2 | 2 | NA | NA |
| <helm-release-name>-alternate-route | 2 | 2 | 4 | 4 | 2 | 2 | NA | NA |
| Total | 50 | 50 | 55.2 | 59 | 24 | 24 | 12.70 | 15 Gi |
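The Total row of Table 2-14 can be verified against the per-service figures. A minimal sketch that sums the CPU (min) and POD (min) columns as listed in the table, in row order:

```shell
# Cross-check of the Table 2-14 Total row (CPU min and POD min columns).
# Values are the per-service figures from the table, in row order
# (Helm Test through alternate-route).
CPU_MIN="1 6 5 5 5 5 5 2 2 2 2 8 2"
PODS_MIN="1 2 2 2 2 2 2 2 2 1 2 2 2"

cpu=0;  for c in $CPU_MIN;  do cpu=$((cpu + c));   done
pods=0; for p in $PODS_MIN; do pods=$((pods + p)); done

echo "$cpu $pods"   # 50 24, matching the Total row
```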
Note:
- <helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is "ocsepp-release", the cn32f-svc microservice name becomes "ocsepp-release-cn32f-svc".
- The resources of the init-service container and the Common Configuration Client Hook are not counted because these containers terminate after initialization completes.
- Helm Hook Jobs: These are pre and post jobs that are invoked during installation, upgrade, rollback, and uninstallation of the deployment. They are short-lived jobs that terminate after the deployment completes.
- Helm Test Job: This job runs on demand when the helm test command is initiated and stops after completion. It is not part of the active deployment resources and is considered only during helm test procedures.
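The release-name prefixing described in the note above can be illustrated with a small shell sketch; "ocsepp-release" and the three service names are examples only, not required values:

```shell
# Illustration of the <helm-release-name> prefix convention described above.
# "ocsepp-release" is an example release name, not a required value.
HELM_RELEASE_NAME="ocsepp-release"

NAMES=$(for svc in cn32f-svc pn32f-svc config-mgr-svc; do
  echo "${HELM_RELEASE_NAME}-${svc}"
done)

echo "$NAMES"   # ocsepp-release-cn32f-svc, and so on for each service
```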
2.1.4.2 Upgrade
Following is the resource requirement for upgrading Roaming Hub or Hosted SEPP:
Table 2-15 Upgrade
| Service Name | CPU | CPU | Memory (GB) | Memory (GB) | POD | POD | Ephemeral Storage | Ephemeral Storage |
|---|---|---|---|---|---|---|---|---|
| Min/Max | Min | Max | Min | Max | Min | Max | Min(Mi) | Max(Gi) |
| Helm test | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Helm Hook | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| <helm-release-name>-n32-ingress-gateway | 6 | 6 | 5 | 5 | 1 | 2 | 70 | 1 |
| <helm-release-name>-n32-egress-gateway | 5 | 5 | 5 | 5 | 1 | 2 | 70 | 1 |
| <helm-release-name>-plmn-ingress-gateway | 5 | 5 | 5 | 5 | 1 | 2 | 70 | 1 |
| <helm-release-name>-plmn-egress-gateway | 5 | 5 | 5 | 5 | 1 | 2 | 70 | 1 |
| <helm-release-name>-pn32f-svc | 5 | 5 | 8 | 8 | 1 | 2 | 70 | 1 |
| <helm-release-name>-cn32f-svc | 5 | 5 | 8 | 8 | 1 | 3 | 70 | 1 |
| <helm-release-name>-cn32c-svc | 2 | 2 | 2 | 2 | 1 | 1 | 70 | 1 |
| <helm-release-name>-pn32c-svc | 2 | 2 | 2 | 2 | 1 | 1 | 70 | 1 |
| <helm-release-name>-config-mgr-svc | 2 | 2 | 2 | 2 | 1 | 1 | 70 | 1 |
| <helm-release-name>-perf-info | 2 | 2 | 0.2 | 4 | 1 | 1 | 70 | 1 |
| <helm-release-name>-nf-mediation | 8 | 8 | 8 | 8 | 1 | 1 | 70 | 1 |
| Total | 47 | 47 | 50.2 | 54 | 11 | 18 | 770 Mi | 11 Gi |
Note:
<helm-release-name> is the Helm release name. For example, if the Helm release name is "ocsepp-release", the cn32f-svc microservice name becomes "ocsepp-release-cn32f-svc".
2.1.4.3 Common Services Container
Following is the resource requirement for Common Services Container:
Table 2-16 Common Services Container
| Container Name | CPU | Memory (GB) | Kubernetes Init Container |
|---|---|---|---|
| init-service | 1 | 1 | Y |
| common_config_hook | 1 | 1 | N |
Note:
- Init Container service: Ingress and Egress Gateway services use this container to fetch the SEPP private key or certificate and the CA root certificate for TLS during startup.
- Common Configuration Hook: It is used to create the database for common service configuration.
2.1.4.5 Debug Tool Container
The Debug Tool provides third-party troubleshooting tools for debugging runtime issues in a lab environment. If Debug Tool Container injection is enabled during Roaming Hub or Hosted SEPP deployment or upgrade, this container is injected into each Roaming Hub or Hosted SEPP pod (or into selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists. For more information about configuring the Debug Tool, see Oracle Communications Cloud Native Core, Security Edge Protection Proxy Troubleshooting Guide.
Table 2-17 Debug Tool Container
| Service Name | CPU | CPU | Memory (GB) | Memory (GB) | Ephemeral Storage | Ephemeral Storage |
|---|---|---|---|---|---|---|
| Min/Max | Min | Max | Min(Gi) | Max(Gi) | Min(Mi) | Max(Mi) |
| Helm Test | 0 | 0 | 0 | 0 | 512 | 512 |
| <helm-release-name>-n32-ingress-gateway | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-n32-egress-gateway | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-plmn-ingress-gateway | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-plmn-egress-gateway | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-pn32f-svc | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-cn32f-svc | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-cn32c-svc | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-pn32c-svc | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-config-mgr-svc | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-perf-info | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-nf-mediation | 0.5 | 1 | 4 | 4 | 512 | 512 |
| Total | 5.5 | 11 | 44 | 44 | 6144 Mi | 6144 Mi |
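The Total row of Table 2-17 follows directly from 11 service containers at 0.5–1 vCPU and 4 GB each, with 12 rows (including Helm Test) each carrying 512 Mi of ephemeral storage. A minimal arithmetic sketch:

```shell
# Sanity check of the Table 2-17 Total row.
SERVICES=11          # service pods that receive a debug container
EPHEMERAL_ROWS=12    # 11 services + the Helm Test row

cpu_min=$(awk "BEGIN { print $SERVICES * 0.5 }")
cpu_max=$((SERVICES * 1))
mem=$((SERVICES * 4))                  # GB; min and max are equal
ephemeral=$((EPHEMERAL_ROWS * 512))    # Mi

echo "$cpu_min $cpu_max $mem $ephemeral"   # 5.5 11 44 6144
```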
Note:
<helm-release-name> is the Helm release name. For example, if the Helm release name is "ocsepp", the cn32f-svc microservice name becomes "ocsepp-cn32f-svc".
2.1.4.6 SEPP Hooks
Following is the resource requirement for SEPP Hooks.
Table 2-18 SEPP Hooks
| Hook Name | CPU | CPU | Memory (GB) | Memory (GB) |
|---|---|---|---|---|
| Min/Max | Min | Max | Min | Max |
| <helm-release-name>-update-db-pre-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-update-db-post-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-update-db-pre-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-update-db-post-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-update-db-pre-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-update-db-post-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32f-svc-pre-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32f-svc-post-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32f-svc-pre-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32f-svc-post-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32f-svc-pre-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32f-svc-post-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32f-svc-pre-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32f-svc-post-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32f-svc-pre-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32f-svc-post-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32f-svc-pre-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32f-svc-post-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32c-svc-pre-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32c-svc-post-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32c-svc-pre-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32c-svc-post-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32c-svc-pre-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32c-svc-post-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32c-svc-pre-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32c-svc-post-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32c-svc-pre-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32c-svc-post-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32c-svc-pre-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32c-svc-post-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-config-mgr-svc-pre-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-config-mgr-svc-post-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-config-mgr-svc-pre-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-config-mgr-svc-post-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-config-mgr-svc-pre-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-config-mgr-svc-post-rollback | 1 | 1 | 1 | 1 |
Note:
<helm-release-name> is the Helm release name.
2.1.5 Hosted SEPP Resource Requirements
This section lists the resource requirements to install and run Hosted SEPP.
2.1.5.1 Hosted SEPP Services
The following table lists the resource requirements for SEPP services for Hosted SEPP:
Table 2-19 SEPP Services for Hosted SEPP
| Service Name | CPU | CPU | Memory (GB) | Memory (GB) | POD | POD | Ephemeral Storage | Ephemeral Storage |
|---|---|---|---|---|---|---|---|---|
| Min/Max | Min | Max | Min | Max | Min | Max | Min(Mi) | Max(Gi) |
| Helm Test | 1 | 1 | 1 | 1 | 1 | 1 | 70 | 1 |
| <helm-release-name>-n32-ingress-gateway | 6 | 6 | 5 | 5 | 2 | 2 | 1 | 2 |
| <helm-release-name>-n32-egress-gateway | 5 | 5 | 5 | 5 | 2 | 2 | 1 | 1 |
| <helm-release-name>-plmn-ingress-gateway | 5 | 5 | 5 | 5 | 2 | 2 | 1 | 2 |
| <helm-release-name>-plmn-egress-gateway | 5 | 5 | 5 | 5 | 2 | 2 | 2 | 2 |
| <helm-release-name>-pn32f-svc | 5 | 5 | 8 | 8 | 2 | 2 | 2 | 2 |
| <helm-release-name>-cn32f-svc | 5 | 5 | 8 | 8 | 2 | 2 | 1 | 1 |
| <helm-release-name>-cn32c-svc | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 1 |
| <helm-release-name>-pn32c-svc | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 1 |
| <helm-release-name>-config-mgr-svc | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 |
| <helm-release-name>-perf-info | 2 | 2 | 0.2 | 4 | 2 | 2 | 1 | 1 |
| <helm-release-name>-alternate-route | 2 | 2 | 4 | 4 | 2 | 2 | NA | NA |
| Total | 42 | 42 | 47.2 | 51 | 22 | 22 | 12.70 | 15 Gi |
Note:
- <helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is "ocsepp-release", the cn32f-svc microservice name becomes "ocsepp-release-cn32f-svc".
- The resources of the init-service container and the Common Configuration Client Hook are not counted because these containers terminate after initialization completes.
- Helm Hook Jobs: These are pre and post jobs that are invoked during installation, upgrade, rollback, and uninstallation of the deployment. They are short-lived jobs that terminate after the deployment completes.
- Helm Test Job: This job runs on demand when the helm test command is initiated and stops after completion. It is not part of the active deployment resources and is considered only during helm test procedures.
2.1.5.2 Upgrade
Following is the resource requirement for upgrading Hosted SEPP:
Table 2-20 Upgrade
| Service Name | CPU | CPU | Memory (GB) | Memory (GB) | POD | POD | Ephemeral Storage | Ephemeral Storage |
|---|---|---|---|---|---|---|---|---|
| Min/Max | Min | Max | Min | Max | Min | Max | Min(Mi) | Max(Gi) |
| Helm test | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Helm Hook | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| <helm-release-name>-n32-ingress-gateway | 6 | 6 | 5 | 5 | 1 | 2 | 70 | 1 |
| <helm-release-name>-n32-egress-gateway | 5 | 5 | 5 | 5 | 1 | 2 | 70 | 1 |
| <helm-release-name>-plmn-ingress-gateway | 5 | 5 | 5 | 5 | 1 | 2 | 70 | 1 |
| <helm-release-name>-plmn-egress-gateway | 5 | 5 | 5 | 5 | 1 | 2 | 70 | 1 |
| <helm-release-name>-pn32f-svc | 5 | 5 | 8 | 8 | 1 | 2 | 70 | 1 |
| <helm-release-name>-cn32f-svc | 5 | 5 | 8 | 8 | 1 | 3 | 70 | 1 |
| <helm-release-name>-cn32c-svc | 2 | 2 | 2 | 2 | 1 | 1 | 70 | 1 |
| <helm-release-name>-pn32c-svc | 2 | 2 | 2 | 2 | 1 | 1 | 70 | 1 |
| <helm-release-name>-config-mgr-svc | 2 | 2 | 2 | 2 | 1 | 1 | 70 | 1 |
| <helm-release-name>-perf-info | 2 | 2 | 0.2 | 4 | 1 | 1 | 70 | 1 |
| Total | 39 | 39 | 42.2 | 46 | 10 | 17 | 700 Mi | 10 Gi |
Note:
<helm-release-name> is the Helm release name. For example, if the Helm release name is "ocsepp-release", the cn32f-svc microservice name becomes "ocsepp-release-cn32f-svc".
2.1.5.3 Common Services Container
Following is the resource requirement for Common Services Container:
Table 2-21 Common Services Container
| Container Name | CPU | Memory (GB) | Kubernetes Init Container |
|---|---|---|---|
| init-service | 1 | 1 | Y |
| common_config_hook | 1 | 1 | N |
Note:
- Init Container service: Ingress and Egress Gateway services use this container to fetch the SEPP private key or certificate and the CA root certificate for TLS during startup.
- Common Configuration Hook: It is used to create the database for common service configuration.
2.1.5.5 Debug Tool Container
The Debug Tool provides third-party troubleshooting tools for debugging runtime issues in a lab environment. If Debug Tool Container injection is enabled during Roaming Hub or Hosted SEPP deployment or upgrade, this container is injected into each Roaming Hub or Hosted SEPP pod (or into selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists.
For more information about configuring the Debug Tool, see Oracle Communications Cloud Native Core, Security Edge Protection Proxy Troubleshooting Guide.
Table 2-22 Debug Tool Container
| Service Name | CPU | CPU | Memory (GB) | Memory (GB) | Ephemeral Storage | Ephemeral Storage |
|---|---|---|---|---|---|---|
| Min/Max | Min | Max | Min(Gi) | Max(Gi) | Min(Mi) | Max(Mi) |
| Helm Test | 0 | 0 | 0 | 0 | 512 | 512 |
| <helm-release-name>-n32-ingress-gateway | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-n32-egress-gateway | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-plmn-ingress-gateway | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-plmn-egress-gateway | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-pn32f-svc | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-cn32f-svc | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-cn32c-svc | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-pn32c-svc | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-config-mgr-svc | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-perf-info | 0.5 | 1 | 4 | 4 | 512 | 512 |
| <helm-release-name>-nf-mediation | 0.5 | 1 | 4 | 4 | 512 | 512 |
| Total | 5 | 10 | 40 | 40 | 6144 Mi | 6144 Mi |
Note:
<helm-release-name> is the Helm release name. For example, if the Helm release name is "ocsepp", the cn32f-svc microservice name becomes "ocsepp-cn32f-svc".
2.1.5.6 SEPP Hooks
Following is the resource requirement for SEPP Hooks.
Table 2-23 SEPP Hooks
| Hook Name | CPU | CPU | Memory (GB) | Memory (GB) |
|---|---|---|---|---|
| Min/Max | Min | Max | Min | Max |
| <helm-release-name>-update-db-pre-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-update-db-post-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-update-db-pre-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-update-db-post-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-update-db-pre-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-update-db-post-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32f-svc-pre-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32f-svc-post-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32f-svc-pre-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32f-svc-post-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32f-svc-pre-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32f-svc-post-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32f-svc-pre-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32f-svc-post-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32f-svc-pre-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32f-svc-post-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32f-svc-pre-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32f-svc-post-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32c-svc-pre-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32c-svc-post-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32c-svc-pre-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32c-svc-post-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32c-svc-pre-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-cn32c-svc-post-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32c-svc-pre-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32c-svc-post-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32c-svc-pre-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32c-svc-post-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32c-svc-pre-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-pn32c-svc-post-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-config-mgr-svc-pre-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-config-mgr-svc-post-install | 1 | 1 | 1 | 1 |
| <helm-release-name>-config-mgr-svc-pre-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-config-mgr-svc-post-upgrade | 1 | 1 | 1 | 1 |
| <helm-release-name>-config-mgr-svc-pre-rollback | 1 | 1 | 1 | 1 |
| <helm-release-name>-config-mgr-svc-post-rollback | 1 | 1 | 1 | 1 |
Note:
<helm-release-name> is the Helm release name.
2.2 Installation Sequence
This section describes preinstallation, installation, and postinstallation tasks for SEPP (SEPP, Roaming Hub, or Hosted SEPP).
2.2.1 Preinstallation Tasks
Before installing SEPP, perform the tasks described in this section.
2.2.1.1 Downloading SEPP package
Perform the following procedure to download the Oracle Communications Cloud Native Core, Security Edge Protection Proxy (SEPP) release package from My Oracle Support:
- Log in to My Oracle Support using the appropriate credentials.
- Click the Patches & Updates tab to locate the patch.
- In Patch Search console, click the Product or Family (Advanced) tab.
- Enter Oracle Communications Cloud Native Core - 5G in the Product field and select the product from the Product drop-down list.
- From the Release drop-down list, select "Oracle Communications Cloud Native Core Security Edge Protection Proxy <release_number>", where <release_number> indicates the required release number of Cloud Native Core, Security Edge Protection Proxy.
- Click Search. The Patch Advanced Search Results list appears.
- Select the required patch from the list. The Patch Details window appears.
- Click Download. File Download window appears.
- Click the <p********>_<release_number>_Tekelec.zip file to download the package, where <p********> is the MOS patch number and <release_number> is the release number of SEPP.
2.2.1.2 Pushing the SEPP Images to Customer Docker Registry
The SEPP deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.
The following table lists the docker images for SEPP:
Table 2-24 SEPP Images
| Services | Image | Tag |
|---|---|---|
| <helm-release-name>-alternate_route | alternate_route | 25.2.110 |
| <helm-release-name>-common_config_hook | common_config_hook | 25.2.200 |
| <helm-release-name>-configurationinit | configurationinit | 25.2.200 |
| <helm-release-name>-mediation/ocmed-nfmediation | mediation/ocmed-nfmediation | 25.2.203 |
| <helm-release-name>-nf_test | nf_test | 25.2.205 |
| <helm-release-name>-nrf-client | nrf-client | 25.2.203 |
| <helm-release-name>-occnp/oc-app-info | occnp/oc-app-info | 25.2.208 |
| <helm-release-name>-occnp/oc-config-server | occnp/oc-config-server | 25.2.208 |
| <helm-release-name>-performance | occnp/oc-perf-info | 25.2.208 |
| <helm-release-name>-ocdebugtool/ocdebug-tools | ocdebugtool/ocdebug-tools | 25.2.204 |
| <helm-release-name>-ocegress_gateway | ocegress_gateway | 25.2.110 |
| <helm-release-name>-ocingress_gateway | ocingress_gateway | 25.2.110 |
| <helm-release-name>-ocsepp-cn32c-svc | ocsepp-cn32c-svc | 25.2.200 |
| <helm-release-name>-ocsepp-cn32f-svc | ocsepp-cn32f-svc | 25.2.200 |
| <helm-release-name>-ocsepp-coherence-svc | ocsepp-coherence-svc | 25.2.200 |
| <helm-release-name>-ocsepp-config-mgr-svc | ocsepp-config-mgr-svc | 25.2.200 |
| <helm-release-name>-ocsepp-pn32c-svc | ocsepp-pn32c-svc | 25.2.200 |
| <helm-release-name>-ocsepp-pn32f-svc | ocsepp-pn32f-svc | 25.2.200 |
| <helm-release-name>-ocsepp-pre-install-hook | ocsepp-pre-install-hook | 25.2.200 |
| <helm-release-name>-ocsepp-update-db | ocsepp-update-db | 25.2.200 |
| <helm-release-name>-ocsepp-configurationupdate | configurationupdate | 25.2.200 |
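Because each image in the table above must be loaded, tagged, and pushed individually, the per-image commands in the procedure below are often scripted. The following sketch only prints the commands for review (a dry run); the registry, image path, and the three-image list are placeholders to adapt to your environment and to the full list in Table 2-24:

```shell
# Dry-run sketch: print the load/tag/push commands for each image tar.
# REGISTRY, IMAGE_PATH, and IMAGES are placeholder values -- substitute
# your registry, the directory holding the tar files, and the full image
# list from Table 2-24.
REGISTRY="occne-repo-host:5000"
IMAGE_PATH="/var/occne/images"
IMAGES="alternate_route:25.2.110 common_config_hook:25.2.200 ocsepp-cn32f-svc:25.2.200"

CMDS=$(for img in $IMAGES; do
  name="${img%%:*}"   # image name (before the colon)
  tag="${img##*:}"    # version tag (after the colon)
  echo "podman load --input ${IMAGE_PATH}/${name}-${tag}.tar"
  echo "podman tag ${name}:${tag} ${REGISTRY}/${name}:${tag}"
  echo "podman push ${REGISTRY}/${name}:${tag}"
done)

echo "$CMDS"
```

Reviewing the printed list before executing it (for example, by piping the output to sh) helps catch registry or path mistakes early.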
To push the images to the registry:
- Navigate to the location where you want to install SEPP. Unzip the SEPP release package to retrieve the following CSAR package.
  The SEPP package is as follows: ReleaseName_csar_Releasenumber.zip
  Where:
  - ReleaseName is a name that is used to track this installation instance.
  - Releasenumber is the release number. For example: ocsepp_csar_25.2.200_0_0.zip
- Unzip the SEPP package file to get the SEPP docker image tar files:
  unzip ocsepp_csar_Releasenumber.zip
  For example:
  unzip ocsepp_csar_25.2.200_0_0.zip
- The extracted ocsepp_csar_25.2.200_0_0 directory consists of the following:
  ├── Definitions
  │   ├── ocsepp_cne_compatibility.yaml
  │   └── ocsepp.yaml
  ├── Files
  │   ├── alternate_route-25.2.110.tar
  │   ├── ChangeLog.txt
  │   ├── common_config_hook-25.2.200.tar
  │   ├── configurationinit-25.2.200.tar
  │   ├── Helm
  │   │   ├── ocsepp-25.2.200.tgz
  │   │   ├── ocsepp-network-policy-25.2.200.tgz
  │   │   └── ocsepp-servicemesh-config-25.2.200.tgz
  │   ├── Licenses
  │   ├── mediation-ocmed-nfmediation-25.2.203.tar
  │   ├── nf_test-25.2.205.tar
  │   ├── nrf-client-25.2.203.tar
  │   ├── occnp-oc-app-info-25.2.208.tar
  │   ├── occnp-oc-config-server-25.2.208.tar
  │   ├── occnp-oc-perf-info-25.2.208.tar
  │   ├── ocdebugtool-ocdebug-tools-25.2.204.tar
  │   ├── ocegress_gateway-25.2.110.tar
  │   ├── ocingress_gateway-25.2.110.tar
  │   ├── ocsepp-cn32c-svc-25.2.200.tar
  │   ├── ocsepp-cn32f-svc-25.2.200.tar
  │   ├── ocsepp-coherence-svc-25.2.200.tar
  │   ├── ocsepp-config-mgr-svc-25.2.200.tar
  │   ├── ocsepp-pn32c-svc-25.2.200.tar
  │   ├── ocsepp-pn32f-svc-25.2.200.tar
  │   ├── ocsepp-pre-install-hook-25.2.200.tar
  │   ├── ocsepp-update-db-25.2.200.tar
  │   ├── Oracle.cert
  │   └── Tests
  ├── ocsepp.mf
  ├── Scripts
  │   ├── ocsepp_alertrules_promha_25.2.200.yaml
  │   ├── ocsepp_configuration_openapi_25.2.200.json
  │   ├── ocsepp_custom_values_25.2.200.yaml
  │   ├── ocsepp_custom_values_roaming_hub_25.2.200.yaml
  │   ├── ocsepp_dashboard_25.2.200.json
  │   ├── ocsepp_dashboard_promha_25.2.200.json
  │   ├── ocsepp_network_policies_custom_values_25.2.200.yaml
  │   ├── ocsepp-servicemesh-config-25.2.200.tgz
  │   ├── ocsepp_alertrules_25.2.200.yaml
  │   ├── ocsepp_mib_25.2.200.mib
  │   ├── ocsepp_mib_tc_25.2.200.mib
  │   ├── toplevel.mib
  │   ├── ocsepp_oci_alertrules_25.2.200.zip
  │   ├── ocsepp_oci_dashboard_25.2.200.json
  │   ├── ocsepp_rollback_schema_25.2.200.sql
  │   ├── ocsepp_dbtier_25.2.200_custom_values_25.2.200.yaml
  │   ├── ocsepp_single_service_account_config_25.2.200.yaml
  │   ├── ocsepp_alertrules_rh_hosted_25.2.200.yaml
  │   └── ocsepp_alertrules_promha_rh_hosted_25.2.200.yaml
  └── TOSCA-Metadata
      └── TOSCA.meta
- Open the Files folder and, based on the container engine installed, run one of the following commands to load the SEPP images. Load all the images listed in Table 2-24 SEPP Images:
  podman load --input /IMAGE_PATH/<microservicename-releasenumber>.tar
  docker load --input /IMAGE_PATH/<microservicename-releasenumber>.tar
  Where, IMAGE_PATH is the location where the SEPP docker image tar files are archived. Sample command:
  podman load --input /IMAGE_PATH/ocsepp-pn32c-svc-25.2.200.tar
  Note:
  The docker or podman load command must be run separately for each tar file (docker image).
- Run one of the following commands to verify that the images are loaded:
  docker images | grep ocsepp
  podman images | grep ocsepp
  Note:
  Verify the list of images shown in the output against the list in Table 2-24 SEPP Images. If the lists do not match, reload the image tar file.
- Run one of the following commands to tag the images to the registry:
  docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
  podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
  Sample tag commands:
  podman tag alternate_route:25.2.110 <customer repo>/alternate_route:25.2.110
  podman tag common_config_hook:25.2.200 <customer repo>/common_config_hook:25.2.200
  podman tag configurationinit:25.2.200 <customer repo>/configurationinit:25.2.200
  podman tag configurationupdate:25.2.200 <customer repo>/configurationupdate:25.2.200
  podman tag mediation/ocmed-nfmediation:25.2.203 <customer repo>/mediation/ocmed-nfmediation:25.2.203
  podman tag nf_test:25.2.205 <customer repo>/nf_test:25.2.205
  podman tag nrf-client:25.2.203 <customer repo>/nrf-client:25.2.203
  podman tag occnp/oc-app-info:25.2.208 <customer repo>/occnp/oc-app-info:25.2.208
  podman tag occnp/oc-config-server:25.2.208 <customer repo>/occnp/oc-config-server:25.2.208
  podman tag occnp/oc-perf-info:25.2.208 <customer repo>/occnp/oc-perf-info:25.2.208
  podman tag ocdebugtool/ocdebug-tools:25.2.204 <customer repo>/ocdebugtool/ocdebug-tools:25.2.204
  podman tag ocegress_gateway:25.2.110 <customer repo>/ocegress_gateway:25.2.110
  podman tag ocingress_gateway:25.2.110 <customer repo>/ocingress_gateway:25.2.110
  podman tag ocsepp-cn32c-svc:25.2.200 <customer repo>/ocsepp-cn32c-svc:25.2.200
  podman tag ocsepp-cn32f-svc:25.2.200 <customer repo>/ocsepp-cn32f-svc:25.2.200
  podman tag ocsepp-coherence-svc:25.2.200 <customer repo>/ocsepp-coherence-svc:25.2.200
  podman tag ocsepp-config-mgr-svc:25.2.200 <customer repo>/ocsepp-config-mgr-svc:25.2.200
  podman tag ocsepp-pn32c-svc:25.2.200 <customer repo>/ocsepp-pn32c-svc:25.2.200
  podman tag ocsepp-pn32f-svc:25.2.200 <customer repo>/ocsepp-pn32f-svc:25.2.200
  podman tag ocsepp-pre-install-hook:25.2.200 <customer repo>/ocsepp-pre-install-hook:25.2.200
  podman tag ocsepp-update-db:25.2.200 <customer repo>/ocsepp-update-db:25.2.200
- Run one of the following commands to push the images to the registry:
docker push <docker-repo>/<image-name>:<image-tag>
podman push <docker-repo>/<image-name>:<image-tag>
podman push occne-repo-host:5000/alternate_route:25.2.110
podman push occne-repo-host:5000/common_config_hook:25.2.200
podman push occne-repo-host:5000/configurationinit:25.2.200
podman push occne-repo-host:5000/configurationupdate:25.2.200
podman push occne-repo-host:5000/mediation/ocmed-nfmediation:25.2.203
podman push occne-repo-host:5000/nf_test:25.2.205
podman push occne-repo-host:5000/nrf-client:25.2.203
podman push occne-repo-host:5000/occnp/oc-app-info:25.2.208
podman push occne-repo-host:5000/occnp/oc-config-server:25.2.208
podman push occne-repo-host:5000/occnp/oc-perf-info:25.2.208
podman push occne-repo-host:5000/ocdebugtool/ocdebug-tools:25.2.204
podman push occne-repo-host:5000/ocegress_gateway:25.2.110
podman push occne-repo-host:5000/ocingress_gateway:25.2.110
podman push occne-repo-host:5000/ocsepp-cn32c-svc:25.2.200
podman push occne-repo-host:5000/ocsepp-cn32f-svc:25.2.200
podman push occne-repo-host:5000/ocsepp-coherence-svc:25.2.200
podman push occne-repo-host:5000/ocsepp-config-mgr-svc:25.2.200
podman push occne-repo-host:5000/ocsepp-pn32c-svc:25.2.200
podman push occne-repo-host:5000/ocsepp-pn32f-svc:25.2.200
podman push occne-repo-host:5000/ocsepp-pre-install-hook:25.2.200
podman push occne-repo-host:5000/ocsepp-update-db:25.2.200
Note:
It is recommended to configure the Docker certificate before running the push command so that the customer registry can be accessed through HTTPS; otherwise, the docker push command may fail.
2.2.1.3 Pushing the Roaming Hub and Hosted SEPP Images to Customer Docker Registry
The Roaming Hub and Hosted SEPP deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.
The following table lists the docker images for Roaming Hub and Hosted SEPP:
Table 2-25 Roaming Hub and Hosted SEPP
| Services | Image | Tag |
|---|---|---|
| <helm-release-name>-alternate_route | alternate_route | 25.2.110 |
| <helm-release-name>-common_config_hook | common_config_hook | 25.2.200 |
| <helm-release-name>-configurationinit | configurationinit | 25.2.200 |
| <helm-release-name>-mediation/ocmed-nfmediation | mediation/ocmed-nfmediation | 25.2.203 |
| <helm-release-name>-nf_test | nf_test | 25.2.205 |
| <helm-release-name>-performance | occnp/oc-perf-info | 25.2.200 |
| <helm-release-name>-ocdebugtool/ocdebug-tools | ocdebugtool/ocdebug-tools | 25.2.204 |
| <helm-release-name>-ocegress_gateway | ocegress_gateway | 25.2.110 |
| <helm-release-name>-ocingress_gateway | ocingress_gateway | 25.2.110 |
| <helm-release-name>-ocsepp-cn32c-svc | ocsepp-cn32c-svc | 25.2.200 |
| <helm-release-name>-ocsepp-cn32f-svc | ocsepp-cn32f-svc | 25.2.200 |
| <helm-release-name>-ocsepp-coherence-svc | ocsepp-coherence-svc | 25.2.200 |
| <helm-release-name>-ocsepp-config-mgr-svc | ocsepp-config-mgr-svc | 25.2.200 |
| <helm-release-name>-ocsepp-pn32c-svc | ocsepp-pn32c-svc | 25.2.200 |
| <helm-release-name>-ocsepp-pn32f-svc | ocsepp-pn32f-svc | 25.2.200 |
| <helm-release-name>-ocsepp-pre-install-hook | ocsepp-pre-install-hook | 25.2.200 |
| <helm-release-name>-ocsepp-update-db | ocsepp-update-db | 25.2.200 |
| <helm-release-name>-ocsepp-configurationupdate | configurationupdate | 25.2.200 |
Note:
<helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is "ocsepp", the cn32f-svc microservice name becomes "ocsepp-cn32f-svc".
To push the images to the registry:
- Navigate to the location where you want to install SEPP. Unzip the SEPP release package (<p********>_<release_number>_Tekelec.zip) to retrieve the following CSAR package.
  The SEPP package is as follows:
  ReleaseName_csar_Releasenumber.zip
  Where:
  - ReleaseName is a name that is used to track this installation instance.
  - Releasenumber is the release number. For example: ocsepp_csar_25_2_200_0_0.zip
- Unzip the SEPP package file to get the SEPP docker image tar files:
  unzip ReleaseName_csar_Releasenumber.zip
  For example:
  unzip ocsepp_csar_25_2_200_0_0.zip
- The extracted ocsepp_csar_25_2_200_0_0.zip package consists of the following:
├── Definitions
│   ├── ocsepp_cne_compatibility.yaml
│   └── ocsepp.yaml
├── Files
│   ├── alternate_route-25.2.110.tar
│   ├── ChangeLog.txt
│   ├── common_config_hook-25.2.200.tar
│   ├── configurationinit-25.2.200.tar
│   ├── Helm
│   │   ├── ocsepp-25.2.200.tgz
│   │   ├── ocsepp-network-policy-25.2.200.tgz
│   │   └── ocsepp-servicemesh-config-25.2.200.tgz
│   ├── Licenses
│   ├── mediation-ocmed-nfmediation-25.2.203.tar
│   ├── nf_test-25.2.205.tar
│   ├── nrf-client-25.2.203.tar
│   ├── occnp-oc-app-info-25.2.208.tar
│   ├── occnp-oc-config-server-25.2.208.tar
│   ├── occnp-oc-perf-info-25.2.200.tar
│   ├── ocdebugtool-ocdebug-tools-25.2.204.tar
│   ├── ocegress_gateway-25.2.110.tar
│   ├── ocingress_gateway-25.2.110.tar
│   ├── ocsepp-cn32c-svc-25.2.200.tar
│   ├── ocsepp-cn32f-svc-25.2.200.tar
│   ├── ocsepp-coherence-svc-25.2.200.tar
│   ├── ocsepp-config-mgr-svc-25.2.200.tar
│   ├── ocsepp-pn32c-svc-25.2.200.tar
│   ├── ocsepp-pn32f-svc-25.2.200.tar
│   ├── ocsepp-pre-install-hook-25.2.200.tar
│   ├── ocsepp-update-db-25.2.200.tar
│   ├── Oracle.cert
│   └── Tests
├── ocsepp.mf
├── Scripts
│   ├── ocsepp_alertrules_promha_25.2.200.yaml
│   ├── ocsepp_configuration_openapi_25.2.200.json
│   ├── ocsepp_custom_values_25.2.200.yaml
│   ├── ocsepp_custom_values_roaming_hub_25.2.200.yaml
│   ├── ocsepp_dashboard_25.2.200.json
│   ├── ocsepp_dashboard_promha_25.2.200.json
│   ├── ocsepp_network_policies_custom_values_25.2.200.yaml
│   ├── ocsepp-servicemesh-config-25.2.200.tgz
│   ├── ocsepp_alertrules_25.2.200.yaml
│   ├── ocsepp_mib_25.2.200.mib
│   ├── ocsepp_mib_tc_25.2.200.mib
│   ├── toplevel.mib
│   ├── ocsepp_oci_alertrules_25.2.200.zip
│   ├── ocsepp_oci_dashboard_25.2.200.json
│   ├── ocsepp_rollback_schema_25.2.200.sql
│   ├── ocsepp_dbtier_25.2.200_custom_values_25.2.200.yaml
│   ├── ocsepp_single_service_account_config_25.2.200.yaml
│   ├── ocsepp_alertrules_rh_hosted_25.2.200.yaml
│   └── ocsepp_alertrules_promha_rh_hosted_25.2.200.yaml
└── TOSCA-Metadata
    └── TOSCA.meta
- Open the Files folder and, based on the container engine installed, run one of the following commands to load the SEPP images. Load all the images listed in the Roaming Hub and Hosted SEPP table (Table 2-25):
  podman load --input /IMAGE_PATH/<microservicename-releasenumber>.tar
  docker load --input /IMAGE_PATH/<microservicename-releasenumber>.tar
  Where IMAGE_PATH is the location where the SEPP docker image tar file is archived.
  Sample command:
  podman load --input /IMAGE_PATH/ocsepp-pn32c-svc-25.2.200.tar
  Note:
  The docker or podman load command must be run separately for each tar file (docker image).
- Run one of the following commands to verify that the images are loaded:
  docker images | grep ocsepp
  podman images | grep ocsepp
  Note:
  Compare the list of images in the output with the images listed in the Roaming Hub and Hosted SEPP table. If the lists do not match, reload the image tar files.
- Run one of the following commands to tag the images to the registry:
  docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
  podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
  Sample tag commands:
  podman tag alternate_route:25.2.110 <customer repo>/alternate_route:25.2.110
  podman tag common_config_hook:25.2.200 <customer repo>/common_config_hook:25.2.200
  podman tag configurationinit:25.2.200 <customer repo>/configurationinit:25.2.200
  podman tag configurationupdate:25.2.200 <customer repo>/configurationupdate:25.2.200
  podman tag mediation/ocmed-nfmediation:25.2.203 <customer repo>/mediation/ocmed-nfmediation:25.2.203
  podman tag nf_test:25.2.205 <customer repo>/nf_test:25.2.205
  podman tag nrf-client:25.2.203 <customer repo>/nrf-client:25.2.203
  podman tag occnp/oc-app-info:25.2.208 <customer repo>/occnp/oc-app-info:25.2.208
  podman tag occnp/oc-config-server:25.2.208 <customer repo>/occnp/oc-config-server:25.2.208
  podman tag occnp/oc-perf-info:25.2.200 <customer repo>/occnp/oc-perf-info:25.2.200
  podman tag ocdebugtool/ocdebug-tools:25.2.204 <customer repo>/ocdebugtool/ocdebug-tools:25.2.204
  podman tag ocegress_gateway:25.2.110 <customer repo>/ocegress_gateway:25.2.110
  podman tag ocingress_gateway:25.2.110 <customer repo>/ocingress_gateway:25.2.110
  podman tag ocsepp-cn32c-svc:25.2.200 <customer repo>/ocsepp-cn32c-svc:25.2.200
  podman tag ocsepp-cn32f-svc:25.2.200 <customer repo>/ocsepp-cn32f-svc:25.2.200
  podman tag ocsepp-coherence-svc:25.2.200 <customer repo>/ocsepp-coherence-svc:25.2.200
  podman tag ocsepp-config-mgr-svc:25.2.200 <customer repo>/ocsepp-config-mgr-svc:25.2.200
  podman tag ocsepp-pn32c-svc:25.2.200 <customer repo>/ocsepp-pn32c-svc:25.2.200
  podman tag ocsepp-pn32f-svc:25.2.200 <customer repo>/ocsepp-pn32f-svc:25.2.200
  podman tag ocsepp-pre-install-hook:25.2.200 <customer repo>/ocsepp-pre-install-hook:25.2.200
  podman tag ocsepp-update-db:25.2.200 <customer repo>/ocsepp-update-db:25.2.200
- Run one of the following commands to push the images to the registry:
  docker push <docker-repo>/<image-name>:<image-tag>
  podman push <docker-repo>/<image-name>:<image-tag>
  Sample push commands:
podman push occne-repo-host:5000/alternate_route:25.2.110
podman push occne-repo-host:5000/common_config_hook:25.2.200
podman push occne-repo-host:5000/configurationinit:25.2.200
podman push occne-repo-host:5000/configurationupdate:25.2.200
podman push occne-repo-host:5000/mediation/ocmed-nfmediation:25.2.203
podman push occne-repo-host:5000/nf_test:25.2.205
podman push occne-repo-host:5000/nrf-client:25.2.203
podman push occne-repo-host:5000/occnp/oc-app-info:25.2.208
podman push occne-repo-host:5000/occnp/oc-config-server:25.2.208
podman push occne-repo-host:5000/occnp/oc-perf-info:25.2.200
podman push occne-repo-host:5000/ocdebugtool/ocdebug-tools:25.2.204
podman push occne-repo-host:5000/ocegress_gateway:25.2.110
podman push occne-repo-host:5000/ocingress_gateway:25.2.110
podman push occne-repo-host:5000/ocsepp-cn32c-svc:25.2.200
podman push occne-repo-host:5000/ocsepp-cn32f-svc:25.2.200
podman push occne-repo-host:5000/ocsepp-coherence-svc:25.2.200
podman push occne-repo-host:5000/ocsepp-config-mgr-svc:25.2.200
podman push occne-repo-host:5000/ocsepp-pn32c-svc:25.2.200
podman push occne-repo-host:5000/ocsepp-pn32f-svc:25.2.200
podman push occne-repo-host:5000/ocsepp-pre-install-hook:25.2.200
podman push occne-repo-host:5000/ocsepp-update-db:25.2.200
Note:
It is recommended to configure the Docker certificate before running the push command so that the customer registry can be accessed through HTTPS; otherwise, the docker push command may fail.
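Typing the tag and push commands for all the images by hand is error-prone; the commands can be generated from a single list instead. The sketch below is a minimal illustration: the registry name and the shortened IMAGES list are examples, so extend IMAGES with the full set of name:tag pairs from the Roaming Hub and Hosted SEPP table before use.

```shell
#!/bin/sh
# Sketch: generate the per-image tag and push commands from one list.
# REPO and IMAGES are examples; replace them with your registry and the
# full image list (name:tag pairs) from Table 2-25.
REPO="occne-repo-host:5000"
IMAGES="alternate_route:25.2.110 common_config_hook:25.2.200 ocsepp-cn32f-svc:25.2.200"

mirror_cmds() {
  for img in $IMAGES; do
    echo "podman tag $img $REPO/$img"
    echo "podman push $REPO/$img"
  done
}

mirror_cmds    # review the printed commands, then run them with: mirror_cmds | sh
```

Printing the commands first, rather than executing them directly, lets the output be reviewed against the image table before anything is pushed.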
2.2.1.4 Pushing the SEPP Images to OCI Docker Registry
SEPP deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.
The following table lists the docker images for SEPP:
Table 2-26 SEPP Images
| Services | Image | Tag |
|---|---|---|
| <helm-release-name>-alternate_route | alternate_route | 25.2.110 |
| <helm-release-name>-common_config_hook | common_config_hook | 25.2.200 |
| <helm-release-name>-configurationinit | configurationinit | 25.2.200 |
| <helm-release-name>-mediation/ocmed-nfmediation | mediation/ocmed-nfmediation | 25.2.203 |
| <helm-release-name>-nf_test | nf_test | 25.2.205 |
| <helm-release-name>-nrf-client | nrf-client | 25.2.203 |
| <helm-release-name>-occnp/oc-app-info | occnp/oc-app-info | 25.2.208 |
| <helm-release-name>-occnp/oc-config-server | occnp/oc-config-server | 25.2.208 |
| <helm-release-name>-performance | occnp/oc-perf-info | 25.2.208 |
| <helm-release-name>-ocdebugtool/ocdebug-tools | ocdebugtool/ocdebug-tools | 25.2.204 |
| <helm-release-name>-ocegress_gateway | ocegress_gateway | 25.2.110 |
| <helm-release-name>-ocingress_gateway | ocingress_gateway | 25.2.110 |
| <helm-release-name>-ocsepp-cn32c-svc | ocsepp-cn32c-svc | 25.2.200 |
| <helm-release-name>-ocsepp-cn32f-svc | ocsepp-cn32f-svc | 25.2.200 |
| <helm-release-name>-ocsepp-coherence-svc | ocsepp-coherence-svc | 25.2.200 |
| <helm-release-name>-ocsepp-config-mgr-svc | ocsepp-config-mgr-svc | 25.2.200 |
| <helm-release-name>-ocsepp-pn32c-svc | ocsepp-pn32c-svc | 25.2.200 |
| <helm-release-name>-ocsepp-pn32f-svc | ocsepp-pn32f-svc | 25.2.200 |
| <helm-release-name>-ocsepp-pre-install-hook | ocsepp-pre-install-hook | 25.2.200 |
| <helm-release-name>-ocsepp-update-db | ocsepp-update-db | 25.2.200 |
| <helm-release-name>-ocsepp-configurationupdate | configurationupdate | 25.2.200 |
To push the images to the registry:
- Navigate to the location where you want to install SEPP. Unzip
the SEPP release package
(<p********>_<release_number>_Tekelec.zip) to retrieve the
following CSAR package.
The SEPP package is as follows:
ReleaseName_csar_Releasenumber.zip
Where:
- ReleaseName is a name that is used to track this installation instance.
- Releasenumber is the release number. For example: ocsepp_csar_25_2_200_0_0.zip
- Unzip the SEPP package file to get the SEPP docker image tar files:
  unzip ocsepp_csar_Releasenumber.zip
  For example:
  unzip ocsepp_csar_25_2_200_0_0.zip
- The extracted ocsepp_csar_25_2_200_0_0.zip package consists of the following:
├── Definitions
│   ├── ocsepp_cne_compatibility.yaml
│   └── ocsepp.yaml
├── Files
│   ├── alternate_route-25.2.110.tar
│   ├── ChangeLog.txt
│   ├── common_config_hook-25.2.200.tar
│   ├── configurationinit-25.2.200.tar
│   ├── Helm
│   │   ├── ocsepp-25.2.200.tgz
│   │   ├── ocsepp-network-policy-25.2.200.tgz
│   │   └── ocsepp-servicemesh-config-25.2.200.tgz
│   ├── Licenses
│   ├── mediation-ocmed-nfmediation-25.2.203.tar
│   ├── nf_test-25.2.205.tar
│   ├── nrf-client-25.2.203.tar
│   ├── occnp-oc-app-info-25.2.208.tar
│   ├── occnp-oc-config-server-25.2.208.tar
│   ├── occnp-oc-perf-info-25.2.208.tar
│   ├── ocdebugtool-ocdebug-tools-25.2.204.tar
│   ├── ocegress_gateway-25.2.110.tar
│   ├── ocingress_gateway-25.2.110.tar
│   ├── ocsepp-cn32c-svc-25.2.200.tar
│   ├── ocsepp-cn32f-svc-25.2.200.tar
│   ├── ocsepp-coherence-svc-25.2.200.tar
│   ├── ocsepp-config-mgr-svc-25.2.200.tar
│   ├── ocsepp-pn32c-svc-25.2.200.tar
│   ├── ocsepp-pn32f-svc-25.2.200.tar
│   ├── ocsepp-pre-install-hook-25.2.200.tar
│   ├── ocsepp-update-db-25.2.200.tar
│   ├── Oracle.cert
│   └── Tests
├── ocsepp.mf
├── Scripts
│   ├── ocsepp_alertrules_promha_25.2.200.yaml
│   ├── ocsepp_configuration_openapi_25.2.200.json
│   ├── ocsepp_custom_values_25.2.200.yaml
│   ├── ocsepp_custom_values_roaming_hub_25.2.200.yaml
│   ├── ocsepp_dashboard_25.2.200.json
│   ├── ocsepp_dashboard_promha_25.2.200.json
│   ├── ocsepp_network_policies_custom_values_25.2.200.yaml
│   ├── ocsepp-servicemesh-config-25.2.200.tgz
│   ├── ocsepp_alertrules_25.2.200.yaml
│   ├── ocsepp_mib_25.2.200.mib
│   ├── ocsepp_mib_tc_25.2.200.mib
│   ├── toplevel.mib
│   ├── ocsepp_oci_alertrules_25.2.200.zip
│   ├── ocsepp_oci_dashboard_25.2.200.json
│   ├── ocsepp_rollback_schema_25.2.200.sql
│   ├── ocsepp_dbtier_25.2.200_custom_values_25.2.200.yaml
│   ├── ocsepp_single_service_account_config_25.2.200.yaml
│   ├── ocsepp_alertrules_rh_hosted_25.2.200.yaml
│   └── ocsepp_alertrules_promha_rh_hosted_25.2.200.yaml
└── TOSCA-Metadata
    └── TOSCA.meta
- Open the Files folder and, based on the container engine installed, run one of the following commands to load the SEPP images. Load all the images listed in the SEPP Images table (Table 2-26):
  podman load --input /IMAGE_PATH/<microservicename-releasenumber>.tar
  docker load --input /IMAGE_PATH/<microservicename-releasenumber>.tar
  Where IMAGE_PATH is the location where the SEPP docker image tar file is archived.
  Sample command:
  podman load --input /IMAGE_PATH/ocsepp-pn32c-svc-25.2.200.tar
  Note:
  The docker or podman load command must be run separately for each tar file (docker image).
- Run one of the following commands to verify that the images are loaded:
  docker images | grep ocsepp
  podman images | grep ocsepp
  Note:
  Compare the list of images in the output with the images listed in the SEPP Images table. If the lists do not match, reload the image tar files.
- Run the following commands to log in to the OCI Docker registry:
podman login -u <REGISTRY_USERNAME> -p <REGISTRY_PASSWORD> <REGISTRY_NAME>
docker login -u <REGISTRY_USERNAME> -p <REGISTRY_PASSWORD> <REGISTRY_NAME>
where,
- REGISTRY_NAME is <Region_Key>.ocir.io
- REGISTRY_USERNAME is <Object Storage Namespace>/<identity_domain>/email_id
- REGISTRY_PASSWORD is the Auth token generated by the user.
- <Object Storage Namespace> is configured in the OCI Console. To access it, navigate to OCI Console > Governance & Administration > Account Management > Tenancy Details > Object Storage Namespace.
- <Identity Domain> is the domain where the user is currently present.
- In OCI, each region is associated with a key. For details about the <Region_Key>, refer to Regions and Availability Domains.
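The login values above are composed from several OCI settings; the sketch below shows how they fit together. All values (region key, object storage namespace, identity domain, and email) are hypothetical examples, not values from this guide:

```shell
#!/bin/sh
# Sketch: assemble the OCI registry login parameters described above.
# Every value below is a hypothetical example; substitute your own
# region key, object storage namespace, identity domain, and email.
REGION_KEY="iad"
OS_NAMESPACE="mytenancynamespace"
IDENTITY_DOMAIN="Default"
EMAIL="user@example.com"

REGISTRY_NAME="${REGION_KEY}.ocir.io"
REGISTRY_USERNAME="${OS_NAMESPACE}/${IDENTITY_DOMAIN}/${EMAIL}"

echo "$REGISTRY_NAME"
echo "$REGISTRY_USERNAME"
# Then log in with the Auth token as the password:
#   podman login -u "$REGISTRY_USERNAME" -p "<auth token>" "$REGISTRY_NAME"
```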
- Run one of the following commands to tag the images to the registry:
  docker tag <image-name>:<image-tag> <REGISTRY_NAME>/<Object Storage Namespace>/<image-name>:<image-tag>
  podman tag <image-name>:<image-tag> <REGISTRY_NAME>/<Object Storage Namespace>/<image-name>:<image-tag>
  Sample tag commands:
  podman tag alternate_route:25.2.110 <customer repo>/alternate_route:25.2.110
  podman tag common_config_hook:25.2.200 <customer repo>/common_config_hook:25.2.200
  podman tag configurationinit:25.2.200 <customer repo>/configurationinit:25.2.200
  podman tag configurationupdate:25.2.200 <customer repo>/configurationupdate:25.2.200
  podman tag mediation/ocmed-nfmediation:25.2.203 <customer repo>/mediation/ocmed-nfmediation:25.2.203
  podman tag nf_test:25.2.205 <customer repo>/nf_test:25.2.205
  podman tag nrf-client:25.2.203 <customer repo>/nrf-client:25.2.203
  podman tag occnp/oc-app-info:25.2.208 <customer repo>/occnp/oc-app-info:25.2.208
  podman tag occnp/oc-config-server:25.2.208 <customer repo>/occnp/oc-config-server:25.2.208
  podman tag occnp/oc-perf-info:25.2.208 <customer repo>/occnp/oc-perf-info:25.2.208
  podman tag ocdebugtool/ocdebug-tools:25.2.204 <customer repo>/ocdebugtool/ocdebug-tools:25.2.204
  podman tag ocegress_gateway:25.2.110 <customer repo>/ocegress_gateway:25.2.110
  podman tag ocingress_gateway:25.2.110 <customer repo>/ocingress_gateway:25.2.110
  podman tag ocsepp-cn32c-svc:25.2.200 <customer repo>/ocsepp-cn32c-svc:25.2.200
  podman tag ocsepp-cn32f-svc:25.2.200 <customer repo>/ocsepp-cn32f-svc:25.2.200
  podman tag ocsepp-coherence-svc:25.2.200 <customer repo>/ocsepp-coherence-svc:25.2.200
  podman tag ocsepp-config-mgr-svc:25.2.200 <customer repo>/ocsepp-config-mgr-svc:25.2.200
  podman tag ocsepp-pn32c-svc:25.2.200 <customer repo>/ocsepp-pn32c-svc:25.2.200
  podman tag ocsepp-pn32f-svc:25.2.200 <customer repo>/ocsepp-pn32f-svc:25.2.200
  podman tag ocsepp-pre-install-hook:25.2.200 <customer repo>/ocsepp-pre-install-hook:25.2.200
  podman tag ocsepp-update-db:25.2.200 <customer repo>/ocsepp-update-db:25.2.200
- Run one of the following commands to push the images to the registry:
  docker push <REGISTRY_NAME>/<Object Storage Namespace>/<image-name>:<image-tag>
  podman push <REGISTRY_NAME>/<Object Storage Namespace>/<image-name>:<image-tag>
  Sample push commands:
  podman push occne-repo-host:5000/alternate_route:25.2.110
  podman push occne-repo-host:5000/common_config_hook:25.2.200
  podman push occne-repo-host:5000/configurationinit:25.2.200
  podman push occne-repo-host:5000/configurationupdate:25.2.200
  podman push occne-repo-host:5000/mediation/ocmed-nfmediation:25.2.203
  podman push occne-repo-host:5000/nf_test:25.2.205
  podman push occne-repo-host:5000/nrf-client:25.2.203
  podman push occne-repo-host:5000/occnp/oc-app-info:25.2.208
  podman push occne-repo-host:5000/occnp/oc-config-server:25.2.208
  podman push occne-repo-host:5000/occnp/oc-perf-info:25.2.208
  podman push occne-repo-host:5000/ocdebugtool/ocdebug-tools:25.2.204
  podman push occne-repo-host:5000/ocegress_gateway:25.2.110
  podman push occne-repo-host:5000/ocingress_gateway:25.2.110
  podman push occne-repo-host:5000/ocsepp-cn32c-svc:25.2.200
  podman push occne-repo-host:5000/ocsepp-cn32f-svc:25.2.200
  podman push occne-repo-host:5000/ocsepp-coherence-svc:25.2.200
  podman push occne-repo-host:5000/ocsepp-config-mgr-svc:25.2.200
  podman push occne-repo-host:5000/ocsepp-pn32c-svc:25.2.200
  podman push occne-repo-host:5000/ocsepp-pn32f-svc:25.2.200
  podman push occne-repo-host:5000/ocsepp-pre-install-hook:25.2.200
  podman push occne-repo-host:5000/ocsepp-update-db:25.2.200
- All the image repositories must be public. Perform the following steps to make all image repositories public:
- Log in to the OCI Console. Navigate to OCI Console > Developer Services > Containers & Artifacts > Container Registry.
- Select the root Compartment.
- In the Repositories and Images search option, the images are listed. Select each image and click Change to Public. This step must be performed for all the images sequentially.
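Flipping many repositories to public through the Console can be tedious. The OCI CLI's `oci artifacts container repository update` command with `--is-public` appears to support the same change, but this is an assumption; verify the exact command and flags against the OCI CLI reference before use. The sketch only prints the candidate commands for review, and the repository OCIDs are placeholders:

```shell
#!/bin/sh
# Sketch (assumption): script the "Change to Public" step with the OCI CLI.
# The `oci artifacts container repository update --is-public` invocation is an
# assumption to verify against the OCI CLI reference; OCIDs below are placeholders.
REPO_OCIDS="ocid1.containerrepo.oc1..aaaa ocid1.containerrepo.oc1..bbbb"

public_cmds() {
  for id in $REPO_OCIDS; do
    echo "oci artifacts container repository update --repository-id $id --is-public true"
  done
}

public_cmds    # review the printed commands before piping them to sh
```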
2.2.1.5 Verifying and Creating Namespace
Note:
This is a mandatory procedure; run it before proceeding further with the installation. The namespace created or verified in this procedure is an input for the next procedures.
- Run the following command to verify if the required namespace already exists in the system:
  kubectl get namespaces
  If the namespace exists in the output of the above command, continue with the Creating Service Account, Role and RoleBinding section.
- If the required namespace is not available, create the namespace using the following command:
  kubectl create namespace <required namespace>
  Example:
  kubectl create namespace seppsvc
  Sample output:
  namespace/seppsvc created
- Update the namespace in the ocsepp custom-values.yaml file with the namespace created in the previous step. For example, if the namespace seppsvc was created with kubectl create namespace seppsvc, update the parameters:
  global:
    nameSpace: seppsvc  # Namespace where the secret is deployed
Naming Convention for Namespace
The namespace should:
- start and end with an alphanumeric character.
- contain 63 characters or less.
- contain only alphanumeric characters or '-'.
Note:
It is recommended to avoid using the prefix kube- when creating
a namespace. The prefix is reserved for Kubernetes system namespaces.
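The naming rules above can be checked before running kubectl create. The sketch below enforces the lowercase RFC 1123 label convention that Kubernetes applies to namespace names, plus the kube- prefix restriction noted above:

```shell
#!/bin/sh
# Sketch: validate a proposed namespace name against the rules above:
# 63 characters or less, lowercase alphanumerics or '-', an alphanumeric
# character at both ends, and not the reserved kube- prefix.
valid_namespace() {
  case "$1" in kube-*) return 1 ;; esac
  [ "${#1}" -le 63 ] || return 1
  printf '%s\n' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'
}

valid_namespace seppsvc && echo "seppsvc: valid"
```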
2.2.1.6 Creating Service Account, Role, and RoleBinding
This section outlines the process for creating one or more service accounts. Users can choose to create a single service account manually or automatically with a custom name, or they can opt to have multiple microservice-specific accounts created automatically.
2.2.1.6.1 Manually Creating Service Account, Role, and Rolebinding
This section explains how to create and use a single service account, role, and rolebinding that can be shared by all the microservices of SEPP. This is an optional procedure and is only needed if the user wants all the microservices of SEPP to use a single service account, role, and rolebinding.
- Create an OCSEPP resource file:
  vi <ocsepp-resource-file>
  Example:
  vi ocsepp_single_service_account_config_<version>.yaml
- Update the ocsepp_single_service_account_config_<version>.yaml file with the correct namespace by replacing seppsvc with the user-defined SEPP namespace.
  Note:
  The user has the option to update the names of the service account, role, and rolebinding.
  Note:
  If SEPP is deployed with ASM, the user must add the following annotation in the single service account:
  certificate.aspenmesh.io/customFields: '{ "SAN": { "DNS": [<SEPP inter PLMN FQDN>, <SEPP intra PLMN FQDN>], "URI": [<SEPP inter PLMN FQDN>, <SEPP intra PLMN FQDN>] } }'
  Example:
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    certificate.aspenmesh.io/customFields: '{ "SAN": { "DNS": ["sepp1.inter.oracle.com", "sepp1.intra.oracle.com"], "URI": ["sepp1.inter.oracle.com", "sepp1.intra.oracle.com"] } }'
  name: sepp-sa
  namespace: seppsvc
  The resource file is as follows:
#sa-role-rolebinding.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations: {}
  labels: {}
  name: sepp-sa
  namespace: seppsvc
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations: {}
  labels: {}
  name: sepp-role
  namespace: seppsvc
rules:
- apiGroups:
  - ""
  resources:
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  - persistentvolumeclaims
  - pods/exec
  - serviceaccounts
  verbs:
  - get
  - watch
  - list
  - update
  - delete
  - deletecollection
  - create
  - patch
# for cnDBtier
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  - replicasets
  verbs:
  - get
  - watch
  - list
  - update
  - delete
  - create
  - patch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - watch
  - list
  - update
# for job deletion
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - get
  - delete
- apiGroups:
  - ""
  resources:
  - events
  - pods/log
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations: {}
  labels: {}
  name: sepp-rolebinding
  namespace: seppsvc
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: sepp-role
subjects:
- kind: ServiceAccount
  name: sepp-sa
  namespace: seppsvc
- Run the following command to create the service account, role, and rolebinding:
  $ kubectl -n <SEPP namespace> create -f ocsepp_single_service_account_config_<version>.yaml
  For example:
  $ kubectl -n seppsvc create -f ocsepp_single_service_account_config_<version>.yaml
- Update the serviceAccountName parameter in the ocsepp_custom_values_<version>.yaml file.
2.2.1.6.2 Automatically Creating Service Account, Role, and Rolebinding
This section describes how to automatically create service account, role, and role binding by enabling the Helm parameters.
- Global Flag (global.autoCreateResources.enabled): Controls the overall automation of resource creation. This flag is disabled by default and must be set to true to enable automation.
- Resource-specific Flag (global.autoCreateResources.serviceAccount.create): Controls ServiceAccount creation at the resource level. This flag is enabled by default; ensure that it is set to true alongside the global flag to enable ServiceAccount automation. This flag is conditional on the global flag and takes effect only if the global flag is set to true. If the flag is disabled, you must create the service accounts manually. The role and rolebinding resources are created along with the service account as part of this automation.
Note:
You must also perform the following steps during the upgrade procedure.
- Provide the serviceAccountName in the custom values yaml file to create the service account.
- Enable the autoCreateResources.enabled parameter in the global section of the ocsepp_custom_values_<version>.yaml file. Example:
  autoCreateResources:
    enabled: true
- Enable the serviceAccount.create parameter in the global section of the ocsepp_custom_values_<version>.yaml file. Example:
  serviceAccount:
    create: true
- Perform the Helm installation. The service account is created with the name provided in the custom values yaml file.
The serviceAccountName field in the custom values yaml file is used for service account automation, to minimize the changes required in the custom values yaml file.
serviceAccountName: &serviceAccountNameRef ""
Example:
serviceAccountName: &serviceAccountNameRef "singleserviceaccount"
autoCreateResources:
enabled: true
serviceAccount:
create: true # This internal flag controls whether ServiceAccounts should be created automatically.
The role and role binding resources are created with the name that is provided as part of the automation. The service account automation is supported only for Helm.
Table 2-27 Service Account Creation using Different Combinations
| Parameter | serviceAccountName | Result |
|---|---|---|
| | Provided | Service account is created or updated with the provided serviceAccountName. |
| | Not provided | Service account is created or updated with .Release.name. |
| | Provided | Service account is not created. |
| | Not provided | The deployment fails. The serviceAccountName is mandatory. |
| | Provided | The service account is not created. It must be created manually. |
| | Not Provided | Service account is created or updated with .Release.name. |
Note:
Upgrade Process:
If a customer is upgrading from an older component version that uses manually created
service accounts (SAs) to a version that supports automatic SA creation (where SAs
are generated automatically for each component), and they wish to transition to
these automated service accounts, they must specify new SA names in the
ocsepp_custom_values_<version>.yaml file. This will trigger
the automatic creation of the new service accounts, roles, and role bindings via
Helm charts. The customer will need to manually remove the old service accounts.
Note:
Error when singleServiceAccountName contains uppercase
characters:
failed to create resource: ServiceAccount "singleService" is invalid:
metadata.name: Invalid value: "singleService": a lowercase RFC 1123 subdomain
must consist of lower case alphanumeric characters, '-' or '.', and must start
and end with an alphanumeric character (e.g. 'example.com', regex used for
validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
2.2.1.7 Configuring Database, Creating Users, and Granting Permissions
This section explains how database administrators can create users and database in a single and multisite deployment.
SEPP supports single database (provisional Database) and single type of user.
Note:
- Before running the procedure for georedundant sites, ensure that the cnDBTier for georedundant sites is up and replication channels are enabled.
- While performing a fresh installation, if SEPP is already deployed, purge the deployment and remove the database and users that were used for the previous deployment. For uninstallation procedure, see Uninstalling SEPP.
- To install cnDBTier, refer to Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
cnDBTier Parameter Values
For Single Cluster, Single Instance (Single SEPP Instances on Dedicated cnDBTier
Cluster) deployment model, the cnDBTier resources can be taken from
ocsepp_dbtier_<cnDBTier_version>_custom_values_<SEPP_version>.yaml
file.
Example:
ocsepp_dbtier_25.2.200_custom_values_25.2.200.yaml
For Single Cluster, Multiple Instance (multiple SEPP instances on shared
cnDBTier cluster), the cnDBTier parameters from cnDBTier Parameter Values Table
should be updated in
ocsepp_dbtier_<cnDBTier_version>_custom_values_<SEPP_version>.yaml
file.
Note:
Verify the value of the following parameters before deploying SEPP in 1+1 site GR with single cluster, multiple instance.
Table 2-28 cnDBTier Parameter Values
| Parameter | Default Values (in CV file) | New Values (to be Updated in CV file) |
|---|---|---|
| MaxNoOfTables | 1024 | 3000 |
| MaxNoOfAttributes | 5000 | 24000 |
| MaxNoOfOrderedIndexes | 1024 | 3700 |
SEPP Users
SEPP supports a single type of user:
This user has a complete set of permissions. This user can perform create, alter, or drop operations on tables to perform install, upgrade, rollback, or delete operations.
SEPP Database
SEPP Database contains configuration information. The same configuration must be done on each site by the operator. In case of multisite georedundant setups, each site must have a unique SEPP Database, which is replicated to other sites. SEPP sites can access only the information in their unique provisional database.
For example:
- For Site 1: seppdb_site_1
- For Site 2: seppdb_site_2
- For Site 3: seppdb_site_3
2.2.1.7.1 Single Site
This section explains how a database administrator can create the database and users, and grant permissions to the users, for a single SEPP site.
Follow the steps below to manually create the SEPP database and the MySQL user required for the deployment:
- Log in to the machine that has permission to access the SQL nodes of NDB cluster.
- Run the following command to log in to one of the ndbappmysqld node pods of the primary NDB cluster and connect to the SQL nodes:
  kubectl exec -it ndbappmysqld-0 -n <cndb-namespace> -- bash
  where cndb-namespace is the namespace in which cnDBTier is installed.
permission to create users with conditions as mentioned (in the next step):
Example:
mysql -h127.0.0.1 -uroot -p<rootPassword>
Note:
This command varies between systems in the MySQL binary path, the root user, and the root password. After running this command, enter the password specific to the user mentioned in the command.
- Check whether the OCSEPP database user already exists. If the user does not exist, create an OCSEPP database user by running the following queries:
- Run the following command to list the users:
  $ SELECT User FROM mysql.user;
- If the SEPP user does not exist, run the following command to create the new user:
  $ CREATE USER IF NOT EXISTS '<OCSEPP User Name>'@'%' IDENTIFIED BY '<OCSEPP User Password>';
  Example:
  $ CREATE USER IF NOT EXISTS 'seppusr'@'%' IDENTIFIED BY 'sepppasswd';
  Note:
  You must create the user on all the SQL nodes for all georedundant sites.
- Check if the OCSEPP database already exists. If it does not exist, run the following commands to create an OCSEPP database and grant permissions to the OCSEPP user created in the previous step:
  Note:
  Naming Convention for SEPP Database
  As the SEPP instances cannot share the same database, the user must provide a unique name to the SEPP database in the cnDBTier. The recommended format for the SEPP database and SEPP-backup database name is as follows:
  <database-name>_<site-name>_<NF_INSTANCE_ID>, where "-" in NF_INSTANCE_ID is replaced by "_".
  Example: seppdb_site1_9faf1bbc_6e4a_4454_a507_aef01a101a06
The name of the database must:
- start and end with an alphanumeric character
- contain a maximum of 63 characters
- contain only alphanumeric characters or '_'
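The recommended name format can be derived mechanically from the site name and NF instance ID. A minimal sketch, using the example values from this section:

```shell
#!/bin/sh
# Sketch: build the recommended SEPP database name
# <database-name>_<site-name>_<NF_INSTANCE_ID>, replacing '-' with '_'
# in the NF instance ID. The arguments used below are the example
# values from this section.
sepp_db_name() {
  # $1 = base database name, $2 = site name, $3 = NF_INSTANCE_ID (UUID)
  echo "${1}_${2}_$(printf '%s' "$3" | tr '-' '_')"
}

sepp_db_name seppdb site1 9faf1bbc-6e4a-4454-a507-aef01a101a06
# prints: seppdb_site1_9faf1bbc_6e4a_4454_a507_aef01a101a06
```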
- Run the following command to check if the database exists:
  $ show databases;
- If the database does not exist, run the following command to create it:
  $ CREATE DATABASE IF NOT EXISTS <OCSEPP Database>;
  Example:
  $ CREATE DATABASE IF NOT EXISTS seppdb;
- If the backup database does not exist, run the following command to create it:
  $ CREATE DATABASE IF NOT EXISTS <OCSEPP Backup Database>;
  Example:
  CREATE DATABASE IF NOT EXISTS seppbackupdb;
Note:
Ensure that you use the same database names while creating the databases that you have used in the global parameters of the ocsepp_custom_values_<version>.yaml file. The following is an example of the database names configured in the ocsepp_custom_values_<version>.yaml file:
global:seppDbName: "seppdb"
global:leaderPodDbName: "seppdb"
global:networkDbName: "seppdb"
global:nrfClientDbName: "seppdb"
Backup Database:
global:seppBackupDbName: "seppbackupdb"
Example:$ GRANT SELECT,INSERT,CREATE,ALTER,DROP,LOCK TABLES,CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON <OCSEPP Database>.* TO '<OCSEPP User Name>'@'%';GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON seppdb.* TO 'seppusr'@'%'; - Run the following command to grant permission to user for SEPP backup
db:
Example:$ GRANT SELECT,INSERT,CREATE,ALTER,DROP,LOCK TABLES,CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON <OCSEPP backup Database>.* TO '<OCSEPP User Name>'@'%';GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON seppbackupdb.* TO 'seppusr'@'%'; - Run the following command to grant permission for MySQL db:
Example:$ GRANT SELECT,INSERT,CREATE,ALTER,DROP,LOCK TABLES,CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON mysql.* TO '<OCSEPP User Name>'@'%';GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON mysql.* TO 'seppusr'@'%';
- Run the following command to grant the NDB_STORED_USER privilege:
GRANT NDB_STORED_USER ON *.* TO '<OCSEPP User Name>'@'%' WITH GRANT OPTION;
Example:
GRANT NDB_STORED_USER ON *.* TO 'seppusr'@'%' WITH GRANT OPTION;
- Run the following command to flush the privileges:
flush privileges;
- Run the following command to verify that the SEPP database grants were created correctly:
show grants for '<sepp user>'@'%';
Example:
show grants for 'seppusr'@'%';
Sample output:
| Grants for seppusr@% |
| GRANT USAGE ON *.* TO `seppusr`@`%` |
| GRANT NDB_STORED_USER ON *.* TO `seppusr`@`%` WITH GRANT OPTION |
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON `mysql`.* TO `seppusr`@`%` |
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON `seppbackupdb`.* TO `seppusr`@`%` |
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON `seppdb`.* TO `seppusr`@`%` |
- Exit from the MySQL prompt and SQL nodes.
2.2.1.7.2 Multisite
This section explains how a database administrator can create the database and users, and grant permissions to the users, for a multisite deployment.
- Log in to the machine that has permission to access the SQL nodes of NDB cluster.
- Run the following command to log in to one of the ndbappmysqld node pods of the primary NDB cluster and connect to the SQL nodes:
kubectl exec -it ndbappmysqld-0 -n <cndb-namespace> -- bash
where, cndb-namespace is the namespace in which CNDB is installed.
- Log in to the MySQL prompt as root, or as a user that has permission to create users as described in the next step:
Example:
mysql -h127.0.0.1 -uroot -p<rootPassword>
Note:
This command may vary between systems depending on the path of the MySQL binary and the root user and password. After running this command, enter the password specific to the user mentioned in the command.
- Check whether the SEPP database user already exists. If the user does not exist, create an SEPP database user by running the following queries:
- Run the following command to list the users:
$ SELECT User FROM mysql.user;
- If the SEPP user does not exist, run the following command to create the new user:
$ CREATE USER IF NOT EXISTS '<OCSEPP User Name>'@'%' IDENTIFIED BY '<OCSEPP User Password>';
Example: $ CREATE USER IF NOT EXISTS 'seppusr'@'%' IDENTIFIED BY 'sepppasswd';
Note:
You must create the user on all the SQL nodes for all georedundant sites.
- Check whether the SEPP database already exists. If it does not exist, run the following commands to create an SEPP database and grant permissions to the SEPP user created in the previous step:
Note:
Naming Convention for SEPP Database
As the SEPP instances cannot share the same database, the user must provide a unique name to the SEPP database in the cnDBTier. The recommended format for the SEPP database and SEPP backup database names is as follows:
<database-name>_<site-name>_<NF_INSTANCE_ID>, where every "-" in NF_INSTANCE_ID is replaced by "_".
Example: seppdb_site1_9faf1bbc_6e4a_4454_a507_aef01a101a06
The name of the database must:
- start and end with an alphanumeric character
- contain a maximum of 63 characters
- contain only alphanumeric characters or '_'
Note:
Create a database for each site. For Site 2 or Site 3, ensure that the database name is different from the previous site names.
- Run the following command to check whether the database exists:
$ show databases;
- If the database does not exist, run the following command to create it:
$ CREATE DATABASE IF NOT EXISTS <OCSEPP Database>;
Example: $ CREATE DATABASE IF NOT EXISTS seppdb;
- If the backup database does not exist, run the following command to create it:
$ CREATE DATABASE IF NOT EXISTS <SEPP Backup Database>;
Example: CREATE DATABASE IF NOT EXISTS seppbackupdb;
Note:
Ensure that you use the same database names while creating the databases that you have used in the global parameters of the ocsepp_custom_values_<version>.yaml file. The following is an example of the database names configured in the ocsepp_custom_values_<version>.yaml file:
global:seppDbName: "seppdb"
global:leaderPodDbName: "seppdb"
global:networkDbName: "seppdb"
global:nrfClientDbName: "seppdb"
Backup Database
global:seppBackupDbName: "seppbackupdb"
- Run the following command to grant permissions to the user on the SEPP database:
$ GRANT SELECT,INSERT,CREATE,ALTER,DROP,LOCK TABLES,CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON <OCSEPP Database>.* TO '<OCSEPP User Name>'@'%';
Example:
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON seppdb.* TO 'seppusr'@'%';
- Run the following command to grant permissions to the user on the backup database:
$ GRANT SELECT,INSERT,CREATE,ALTER,DROP,LOCK TABLES,CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON <OCSEPP backup Database>.* TO '<OCSEPP User Name>'@'%';
Example:
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON seppbackupdb.* TO 'seppusr'@'%';
- Run the following command to grant permissions on the MySQL database:
$ GRANT SELECT,INSERT,CREATE,ALTER,DROP,LOCK TABLES,CREATE TEMPORARY TABLES,DELETE,UPDATE,REFERENCES,EXECUTE ON mysql.* TO '<OCSEPP User Name>'@'%';
Example:
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON mysql.* TO 'seppusr'@'%';
- Run the following command to grant the NDB_STORED_USER privilege:
GRANT NDB_STORED_USER ON *.* TO '<OCSEPP User Name>'@'%' WITH GRANT OPTION;
Example:
GRANT NDB_STORED_USER ON *.* TO 'seppusr'@'%' WITH GRANT OPTION;
- Run the following command to flush the privileges:
flush privileges;
- Run the following command to verify that the SEPP database grants were created correctly:
show grants for '<sepp user>'@'%';
Example:
show grants for 'seppusr'@'%';
Sample output:
| Grants for seppusr@% |
| GRANT USAGE ON *.* TO `seppusr`@`%` |
| GRANT NDB_STORED_USER ON *.* TO `seppusr`@`%` WITH GRANT OPTION |
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON `mysql`.* TO `seppusr`@`%` |
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON `seppbackupdb`.* TO `seppusr`@`%` |
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON `seppdb`.* TO `seppusr`@`%` |
- Exit from the MySQL prompt and SQL nodes.
2.2.1.8 Configuring Kubernetes Secrets for Accessing Database
This section explains how to configure Kubernetes secrets for accessing SEPP database.
- Run the following command to create a Kubernetes secret for the SEPP users:
kubectl create secret generic <OCSEPP User secret name> --from-literal=mysql-username=<OCSEPP MySQL Database User Name> --from-literal=mysql-password=<OCSEPP MySQL User Password> -n <Namespace>
Where,
- <OCSEPP User secret name> is the secret name of the user.
- <OCSEPP MySQL Database User Name> is the username of the OCSEPP MySQL user.
- <OCSEPP MySQL User Password> is the password of the OCSEPP MySQL user.
- <Namespace> is the namespace of the SEPP deployment.
Note:
Note down the command used during the creation of the Kubernetes secret. This command is used for updating the secret in the future.
Example:
$ kubectl create secret generic ocsepp-mysql-cred --from-literal=mysql-username=seppusr --from-literal=mysql-password=sepppasswd -n seppsvc
- Run the following command to verify the created secret:
$ kubectl describe secret <OCSEPP User secret name> -n <Namespace>
Where,
- <OCSEPP User secret name> is the secret name of the user.
- <Namespace> is the namespace of the SEPP deployment.
Example:
$ kubectl describe secret ocsepp-mysql-cred -n seppsvc
Sample output:
Name: ocsepp-mysql-cred
Namespace: seppsvc
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
mysql-password: 10 bytes
mysql-username: 7 bytes
Note:
If the secret name is anything other than ocsepp-mysql-cred, update the following parameter values in the ocsepp_custom_values_<version>.yaml file before deploying:
global:dbCredSecretName: &dbCredSecretNameRef 'ocsepp-mysql-cred'
global:privilegedDbCredSecretName: &privDbCredSecretNameRef 'ocsepp-mysql-cred'
- To update the Kubernetes secret, append "--dry-run=client -o yaml" to the command used in step 1 and pipe the output to "kubectl replace -f - -n <Namespace>". The updated command is as follows:
$ kubectl create secret generic <OCSEPP User secret name> --from-literal=mysql-username=<OCSEPP MySQL Database User Name> --from-literal=mysql-password=<OCSEPP MySQL User Password> --dry-run=client -o yaml -n <Namespace> | kubectl replace -f - -n <Namespace>
Where,
- <OCSEPP User secret name> is the secret name of the user.
- <OCSEPP MySQL Database User Name> is the username of the OCSEPP MySQL user.
- <OCSEPP MySQL User Password> is the password of the OCSEPP MySQL user.
- <Namespace> is the namespace of the SEPP deployment.
- Run the updated command. The following message is displayed:
secret/<OCSEPP User secret name> replaced
Where, <OCSEPP User secret name> is the updated secret name of the application user.
Example:
secret/ocsepp-mysql-cred replaced
2.2.1.9 Configuring Kubernetes Secret for Enabling HTTPS
This section explains the steps to configure HTTPS at Ingress and Egress Gateways.
2.2.1.9.1 Configuring Secrets at N32 Egress and Ingress Gateway
This section explains the steps to configure secrets for enabling HTTP over TLS in N32 Ingress and Egress Gateways. This procedure must be performed before deploying SEPP.
Note:
The passwords for TrustStore and KeyStore are stored in respective password files.
- ECDSA private key and CA signed certificate of SEPP (if initialAlgorithm is ES256)
- or, RSA private key and CA signed certificate of SEPP (if initialAlgorithm is RS256)
- TrustStore password file
- KeyStore password file
- CA certificate
Note:
- The creation process for private keys, certificates, and passwords is at the discretion of the user or operator.
- If the certificates are not available, create them by following the instructions in the 'Creating Private Keys and Certificates for Gateways' section.
- Managing secrets through OCCM
- Managing secrets manually
Managing Secrets Through OCCM
To create the secrets using OCCM, see "Managing Certificates" in Oracle Communications Cloud Native Core, Certificate Management User Guide.
- To patch the secret created by OCCM with the KeyStore password file, run:
TLS_CRT=$(base64 < "key.txt" | tr -d '\n')
kubectl patch secret server-primary-ocsepp-secret-occm -n seppsvc -p "{\"data\":{\"key.txt\":\"${TLS_CRT}\"}}"
Where,
key.txt is the password file that contains the KeyStore password.
server-primary-ocsepp-secret-occm is the secret created by OCCM.
- To patch the secret created by OCCM with the TrustStore password file, run:
TLS_CRT=$(base64 < "trust.txt" | tr -d '\n')
kubectl patch secret server-primary-ocsepp-secret-occm -n seppsvc -p "{\"data\":{\"trust.txt\":\"${TLS_CRT}\"}}"
Where,
trust.txt is the password file that contains the TrustStore password.
server-primary-ocsepp-secret-occm is the secret created by OCCM.
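The patch payload used by these commands can be sketched offline without a cluster. The following shows how the base64 value is produced from a password file and embedded in the JSON patch body; the file content here is an illustrative placeholder, not a real password:

```shell
# Create a throwaway password file (placeholder content only).
printf 'example-keystore-password' > key.txt

# Encode the file and strip newlines, as the patch commands above do.
TLS_CRT=$(base64 < key.txt | tr -d '\n')

# The encoded value decodes back to the original file content.
printf '%s' "${TLS_CRT}" | base64 -d

# Build the JSON patch body passed to "kubectl patch secret ... -p".
PATCH="{\"data\":{\"key.txt\":\"${TLS_CRT}\"}}"
echo "${PATCH}"
```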
Note:
To monitor the lifecycle management of the certificates through OCCM, do not patch the Kubernetes secrets manually to update the TLS certificates or keys. It must be done through the OCCM GUI.
Managing Secrets Manually
- Run the following command to create the secret:
$ kubectl create secret generic <ocsepp-n32-secret> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<trust.txt> --from-file=<key.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace of SEPP deployment>
Where,
<ocsepp-n32-secret> is the secret name for n32-egress-gateway and n32-ingress-gateway.
<ssl_ecdsa_private_key.pem> is the ECDSA private key.
<rsa_private_key_pkcs1.pem> is the RSA private key.
<trust.txt> is the SSL TrustStore file.
<key.txt> is the SSL KeyStore file.
<caroot.cer> is the CA root certificate authority.
<ssl_rsa_certificate.crt> is the SSL RSA certificate.
<ssl_ecdsa_certificate.crt> is the SSL ECDSA certificate.
<Namespace> is the namespace of the SEPP deployment.
Note:
- Note down the command used during the creation of the Kubernetes secret; this command will be used for updates in the future.
- It is recommended to use the same secret name as mentioned in the example. If you change <ocsepp-n32-secret>, update the k8SecretName parameter under the n32-ingress-gateway and n32-egress-gateway sections in the ocsepp_custom_values_<version>.yaml file. For more information, see the n32-ingress-gateway and n32-egress-gateway sections.
Example:
kubectl create secret generic ocsepp-n32-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=trust.txt --from-file=key.txt --from-file=caroot.cer --from-file=rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --from-file=ocsepp.cer -n seppsvc
- Run the following command to verify the secret:
$ kubectl describe secret <ocsepp-n32-secret> -n <Namespace>
Where, <ocsepp-n32-secret> is the secret name for n32-egress-gateway and n32-ingress-gateway, and <Namespace> is the namespace of the SEPP deployment.
Example:
$ kubectl describe secret ocsepp-n32-secret -n seppsvc
Note:
If the certificates are not available, create them by following the instructions in the 'Creating Private Keys and Certificates for Gateways' section.
Hosted SEPP Mode
This section explains the steps to configure secrets for enabling HTTP over TLS in N32 Ingress and Egress Gateways. This procedure must be performed before deploying SEPP.
Note:
The passwords for TrustStore and KeyStore are stored in respective password files.
- ECDSA private key and CA signed certificate of SEPP (if initialAlgorithm is ES256)
- or, RSA private key and CA signed certificate of SEPP (if initialAlgorithm is RS256)
- TrustStore password file
- KeyStore password file
- CA certificate
Note:
- The creation process for private keys, certificates, and passwords is at the discretion of the user or operator.
- If the certificates are not available, create them by following the instructions in the 'Creating Private Keys and Certificates for Gateways' section.
- Managing secrets through OCCM
- Managing secrets manually
Managing Secrets Through OCCM
To create the secrets using OCCM, see "Managing Certificates" in Oracle Communications Cloud Native Core, Certificate Management User Guide.
- To patch the secret created by OCCM with the KeyStore password file, run:
TLS_CRT=$(base64 < "key.txt" | tr -d '\n')
kubectl patch secret server-primary-ocsepp-secret-occm -n seppsvc -p "{\"data\":{\"key.txt\":\"${TLS_CRT}\"}}"
Where,
key.txt is the password file that contains the KeyStore password.
server-primary-ocsepp-secret-occm is the secret created by OCCM.
- To patch the secret created by OCCM with the TrustStore password file, run:
TLS_CRT=$(base64 < "trust.txt" | tr -d '\n')
kubectl patch secret server-primary-ocsepp-secret-occm -n seppsvc -p "{\"data\":{\"trust.txt\":\"${TLS_CRT}\"}}"
Where,
trust.txt is the password file that contains the TrustStore password.
server-primary-ocsepp-secret-occm is the secret created by OCCM.
Note:
To monitor the lifecycle management of the certificates through OCCM, do not patch the Kubernetes secrets manually to update the TLS certificates or keys. It must be done through the OCCM GUI.
Managing Secrets Manually
- Run the following command to create the secret:
kubectl create secret generic <ocsepp-n32-secret> --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=trust.txt --from-file=key.txt --from-file=caroot.cer --from-file=rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --from-file=ocsepp.cer --from-file=internal_fqdn.mcc111.mnc123.3gppnetwork_ssl_ecdsa_private_key.pem --from-file=internal_fqdn.mcc111.mnc123.3gppnetwork_rsa_private_key_pkcs1.pem --from-file=internal_fqdn.mcc111.mnc123.3gppnetwork_trust.txt --from-file=internal_fqdn.mcc111.mnc123.3gppnetwork_key.txt --from-file=bundle1.cer --from-file=internal_fqdn.mcc111.mnc123.3gppnetwork_rsa_certificate.crt --from-file=internal_fqdn.mcc111.mnc123.3gppnetwork_ssl_ecdsa_certificate.crt --from-file=internal_fqdn.mcc111.mnc123.3gppnetwork_ocsepp.cer --from-file=external_fqdn1.mcc123.mnc90.cer -n <namespace>
Where,
<ocsepp-n32-secret> is the secret name for n32-egress-gateway and n32-ingress-gateway.
'ocsepp-n32-igw-secret' is the secret name for n32-ingress-gateway.
'ocsepp-n32-egw-secret' is the secret name for n32-egress-gateway.
<hosted_fqdn_ssl_ecdsa_private_key.pem> is the ECDSA private key for the hosted partner.
<hosted_fqdn_rsa_private_key_pkcs1.pem> is the RSA private key for the hosted partner.
<hosted_fqdn_trust.txt> is the SSL TrustStore file for the hosted partner.
<hosted_fqdn_key.txt> is the SSL KeyStore file for the hosted partner.
<hosted_fqdn_caroot.cer> is the CA root certificate authority for the hosted partner.
<hosted_fqdn_ssl_rsa_certificate.crt> is the SSL RSA certificate for the hosted partner.
<hosted_fqdn_ssl_ecdsa_certificate.crt> is the SSL ECDSA certificate for the hosted partner.
<external_fqdn_ssl_ecdsa_private_key.pem> is the ECDSA private key exposed towards the roaming partner.
<external_fqdn_rsa_private_key_pkcs1.pem> is the RSA private key exposed towards the roaming partner.
<external_fqdn_trust.txt> is the SSL TrustStore file exposed towards the roaming partner.
<external_fqdn_key.txt> is the SSL KeyStore file exposed towards the roaming partner.
<external_fqdn_caroot.cer> is the CA root certificate authority exposed towards the roaming partner.
<external_fqdn_ssl_rsa_certificate.crt> is the SSL RSA certificate exposed towards the roaming partner.
<external_fqdn_ssl_ecdsa_certificate.crt> is the SSL ECDSA certificate exposed towards the roaming partner.
<Namespace> is the namespace of the SEPP deployment.
The rest are the default certificates that are created for the SEPP.
Note:
- Note down the command used during the creation of the Kubernetes secret; this command will be used for updates in the future.
- It is recommended to use the same secret names as mentioned in the example. If you change <ocsepp-n32-igw-secret> and <ocsepp-n32-egw-secret>, update the k8SecretName parameter under the n32-ingress-gateway and n32-egress-gateway sections in the ocsepp_custom_values_<version>.yaml file. For more information, see the n32-ingress-gateway and n32-egress-gateway sections.
- Hosted partners with multiple FQDNs can have different sets of client certificates and CA bundles. In Hosted SEPP mode, the user must add the client certificates of all hosted partners in the <ocsepp-n32-igw-secret> and <ocsepp-n32-egw-secret>.
- The SAN values present in the certificate should match the FQDNs configured in the multi-FQDN configuration on the N32 Ingress Gateway and N32 Egress Gateway.
Example:
kubectl create secret generic ocsepp-n32-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=trust.txt --from-file=key.txt --from-file=caroot.cer --from-file=rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --from-file=ocsepp.cer -n seppsvc
- Run the following command to verify the secret:
$ kubectl describe secret <ocsepp-n32-secret> -n <Namespace>
Where, <ocsepp-n32-secret> is the secret name for n32-egress-gateway and n32-ingress-gateway, and <Namespace> is the namespace of the SEPP deployment.
Example:
$ kubectl describe secret ocsepp-n32-secret -n seppsvc
Note:
If the certificates are not available, create them by following the instructions in the 'Creating Private Keys and Certificates for Gateways' section.
2.2.1.9.2 Configuring Secrets at PLMN Egress and Ingress Gateway
This section explains the steps to configure secrets for enabling HTTPS/HTTP over TLS in Public Land Mobile Network (PLMN) Ingress and Egress Gateways. This procedure must be performed before deploying SEPP.
Note:
The passwords for TrustStore and KeyStore are stored in respective password files.
- ECDSA private key and CA signed certificate of SEPP, if initialAlgorithm is ES256
- RSA private key and CA signed certificate of SEPP, if initialAlgorithm is RS256
- TrustStore password file
- KeyStore password file
- Certificate chain for trust store
- Signed server certificate or signed client certificate
Note:
The creation process for private keys, certificates, and passwords is at the discretion of the user or operator.
- Managing secrets through OCCM
- Managing secrets manually
Managing Secrets Through OCCM
To create the secrets using OCCM, see "Managing Certificates" in Oracle Communications Cloud Native Core, Certificate Management User Guide.
- To patch the secret created by OCCM with the KeyStore password file, run:
TLS_CRT=$(base64 < "key.txt" | tr -d '\n')
kubectl patch secret server-primary-ocsepp-secret-occm -n seppsvc -p "{\"data\":{\"key.txt\":\"${TLS_CRT}\"}}"
Where,
key.txt is the password file that contains the KeyStore password.
server-primary-ocsepp-secret-occm is the secret created by OCCM.
- To patch the secret created by OCCM with the TrustStore password file, run:
TLS_CRT=$(base64 < "trust.txt" | tr -d '\n')
kubectl patch secret server-primary-ocsepp-secret-occm -n seppsvc -p "{\"data\":{\"trust.txt\":\"${TLS_CRT}\"}}"
Where,
trust.txt is the password file that contains the TrustStore password.
server-primary-ocsepp-secret-occm is the secret created by OCCM.
Note:
To monitor the lifecycle management of the certificates through OCCM, do not patch the Kubernetes secrets manually to update the TLS certificates or keys. It must be done through the OCCM GUI.
Managing Secrets Manually
- Run the following command to create the secret:
$ kubectl create secret generic <ocsepp-plmn-secret> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<trust.txt> --from-file=<key.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace of SEPP deployment>
Where,
<ocsepp-plmn-secret> is the secret name for plmn-egress-gateway and plmn-ingress-gateway.
<ssl_ecdsa_private_key.pem> is the ECDSA private key.
<rsa_private_key_pkcs1.pem> is the RSA private key.
<trust.txt> is the SSL TrustStore file.
<key.txt> is the SSL KeyStore file.
<caroot.cer> is the CA root certificate authority.
<ssl_rsa_certificate.crt> is the SSL RSA certificate.
<ssl_ecdsa_certificate.crt> is the SSL ECDSA certificate.
<Namespace> is the namespace of the SEPP deployment.
Note:
- Note down the command used during the creation of the Kubernetes secret; this command will be used for updates in the future.
- It is recommended to use the same secret name as mentioned in the example. If you change <ocsepp-plmn-secret>, update the k8SecretName parameter under the plmn-ingress-gateway and plmn-egress-gateway sections in the ocsepp-custom-values-<version>.yaml file. For more information, see the plmn-ingress-gateway and plmn-egress-gateway sections.
- For multiple CA root partners, the SEPP CA certificate should contain the CA information in a particular format. The CAs of the roaming partners should be concatenated in a single file, separated by eight hyphens, as given below:
CA1 content
--------
CA2 content
--------
CA3 content
Example:
kubectl create secret generic ocsepp-plmn-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=trust.txt --from-file=key.txt --from-file=caroot.cer --from-file=rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --from-file=ocsepp.cer -n seppsvc
- Run the following command to verify the secret:
$ kubectl describe secret <ocsepp-plmn-secret> -n <Namespace>
Where,
<ocsepp-plmn-secret> is the secret name for plmn-egress-gateway and plmn-ingress-gateway.
<Namespace> is the namespace of the SEPP deployment.
Example:
$ kubectl describe secret ocsepp-plmn-secret -n seppsvc
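The eight-hyphen concatenation described in the note above can be produced with standard shell tools. The following is a sketch in which the file names and contents are illustrative placeholders for the roaming partners' CA certificates:

```shell
# Create placeholder CA files (a real deployment uses the partners' CAs).
printf 'CA1 content\n' > ca1.cer
printf 'CA2 content\n' > ca2.cer
printf 'CA3 content\n' > ca3.cer

# Concatenate the CAs into a single file, separated by eight hyphens.
{
  cat ca1.cer
  echo '--------'
  cat ca2.cer
  echo '--------'
  cat ca3.cer
} > caroot.cer

cat caroot.cer
```

The resulting caroot.cer is the single file referenced in the secret-creation command.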
Hosted SEPP Mode
This section explains the steps to configure secrets for enabling HTTPS/HTTP over TLS in Public Land Mobile Network (PLMN) Ingress and Egress Gateways. This procedure must be performed before deploying SEPP.
Note:
The passwords for TrustStore and KeyStore are stored in respective password files.
- ECDSA private key and CA signed certificate of SEPP, if initialAlgorithm is ES256
- RSA private key and CA signed certificate of SEPP, if initialAlgorithm is RS256
- TrustStore password file
- KeyStore password file
- Certificate chain for trust store
- Signed server certificate or signed client certificate
Note:
The creation process for private keys, certificates, and passwords is at the discretion of the user or operator.
- Managing secrets through OCCM
- Managing secrets manually
Managing Secrets Through OCCM
To create the secrets using OCCM, see "Managing Certificates" in Oracle Communications Cloud Native Core, Certificate Management User Guide.
- To patch the secret created by OCCM with the KeyStore password file, run:
TLS_CRT=$(base64 < "key.txt" | tr -d '\n')
kubectl patch secret server-primary-ocsepp-secret-occm -n seppsvc -p "{\"data\":{\"key.txt\":\"${TLS_CRT}\"}}"
Where,
key.txt is the password file that contains the KeyStore password.
server-primary-ocsepp-secret-occm is the secret created by OCCM.
- To patch the secret created by OCCM with the TrustStore password file, run:
TLS_CRT=$(base64 < "trust.txt" | tr -d '\n')
kubectl patch secret server-primary-ocsepp-secret-occm -n seppsvc -p "{\"data\":{\"trust.txt\":\"${TLS_CRT}\"}}"
Where,
trust.txt is the password file that contains the TrustStore password.
server-primary-ocsepp-secret-occm is the secret created by OCCM.
Note:
To monitor the lifecycle management of the certificates through OCCM, do not patch the Kubernetes secrets manually to update the TLS certificates or keys. It must be done through the OCCM GUI.
Managing Secrets Manually
- Run the following command to create the secret:
$ kubectl create secret generic <ocsepp-plmn-secret> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<trust.txt> --from-file=<key.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace of SEPP deployment>
Where,
<ocsepp-plmn-secret> is the secret name for plmn-egress-gateway and plmn-ingress-gateway.
'ocsepp-plmn-igw-secret' is the secret name for plmn-ingress-gateway.
'ocsepp-plmn-egw-secret' is the secret name for plmn-egress-gateway.
<ssl_ecdsa_private_key.pem> is the ECDSA private key.
<rsa_private_key_pkcs1.pem> is the RSA private key.
<trust.txt> is the SSL TrustStore file.
<key.txt> is the SSL KeyStore file.
<caroot.cer> is the CA root certificate authority.
<ssl_rsa_certificate.crt> is the SSL RSA certificate.
<ssl_ecdsa_certificate.crt> is the SSL ECDSA certificate.
<Namespace> is the namespace of the SEPP deployment.
Note:
- Note down the command used during the creation of the Kubernetes secret; this command will be used for updates in the future.
- It is recommended to use the same secret names as mentioned in the example. If you change <ocsepp-plmn-secret>, update the k8SecretName parameter under the plmn-ingress-gateway and plmn-egress-gateway sections in the ocsepp-custom-values-<version>.yaml file. For more information, see the plmn-ingress-gateway and plmn-egress-gateway sections.
- For multiple CA root partners, the SEPP CA certificate should contain the CA information in a particular format. The CAs of the roaming partners should be concatenated in a single file, separated by eight hyphens, as given below:
CA1 content
--------
CA2 content
--------
CA3 content
Example for PLMN Egress Gateway:
kubectl create secret generic ocsepp-plmn-egw-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=trust.txt --from-file=key.txt --from-file=caroot.cer --from-file=rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --from-file=ocsepp.cer -n seppsvc
Example for PLMN Ingress Gateway:
kubectl create secret generic ocsepp-plmn-igw-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=trust.txt --from-file=key.txt --from-file=caroot.cer --from-file=rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --from-file=ocsepp.cer -n seppsvc
- Run the following command to verify the secret:
$ kubectl describe secret <ocsepp-plmn-secret> -n <Namespace>
Where,
<ocsepp-plmn-secret> is the secret name for plmn-egress-gateway and plmn-ingress-gateway.
<Namespace> is the namespace of the SEPP deployment.
Example:
$ kubectl describe secret ocsepp-plmn-secret -n seppsvc
2.2.1.10 SEPP Compatibility with Kubernetes, CNE, and Kyverno Policies
- If Istio or Aspen Service Mesh (ASM) is installed on CNE, run the following command to patch the "disallow-capabilities" cluster policy of CNE and exclude the NF namespace before the NF deployment:
kubectl patch clusterpolicy disallow-capabilities --type "json" -p '[{"op":"add","path":"/spec/rules/0/exclude/any/0/resources/namespaces/-","value":"<namespace of NF>"}]'
where, <namespace of NF> is the SEPP namespace used for deployment.
Example:
kubectl patch clusterpolicy disallow-capabilities --type "json" -p '[{"op":"add","path":"/spec/rules/0/exclude/any/0/resources/namespaces/-","value":"seppsvc"}]'
- Run the following command to verify that the cluster policies are updated with the SEPP namespace in the exclude list:
kubectl get clusterpolicies disallow-capabilities -oyaml
Example:
spec:
  background: true
  failurePolicy: Ignore
  rules:
  - exclude:
      any:
      - resources:
          kinds:
          - Pod
          - DaemonSet
          namespaces:
          - kube-system
          - occne-infra
          - rook-ceph
          - seppsvc
2.2.1.11 Configuring SEPP to Support Aspen Service Mesh
Note:
From Release 25.2.200 onwards, SEPP ASM is only supported for incoming traffic on the config manager and PLMN Ingress Gateway, and for outgoing traffic from the PLMN Egress Gateway.
SEPP leverages Aspen Service Mesh (ASM) for all external TLS communication. The service mesh integration provides inter-NF communication and allows the API gateway to work with the service mesh. The service mesh integration supports the services by deploying a special sidecar proxy in supported pods to intercept all network communication between microservices.
Supported ASM versions: 1.21.6 and 1.14.6
For ASM installation and configuration, see the official Aspen Service Mesh website for details.
Aspen Service Mesh (ASM) configurations are categorized as follows:
- Control Plane: It involves adding labels or annotations to inject sidecar. The control plane configurations are part of the NF Helm chart.
- Data Plane: It helps in traffic management, such as handling NF call flows by adding Service Entries (SE), Destination Rules (DR), Envoy Filters (EF), and other resource changes such as apiVersion changes between different versions. This configuration is done manually by considering each NF requirement and the ASM deployment. This configuration can be done using the ocsepp_servicemesh_config_custom_values_<version>.yaml file.
Data Plane Configuration
Data Plane configuration consists of following Custom Resource Definitions (CRDs):
- Service Entry (SE)
- Destination Rule (DR)
- Envoy Filter (EF)
- Peer Authentication (PA)
- Virtual Service (VS)
Note:
Use the ocsepp_servicemesh_config_custom_values_<version>.yaml file and Helm charts to add or remove the CRDs that you may require due to ASM upgrades, to configure features across different releases.
The data plane configuration is applicable in the following scenarios:
- Service Entry: Enables adding additional entries into Sidecar's internal service registry, so that auto-discovered services in the mesh can access or route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints).
- Destination Rule: Defines policies that apply to traffic intended for service after routing has occurred. These rules specify configuration for load balancing, connection pool size from the sidecar, and outlier detection settings to detect and evict unhealthy hosts from the load balancing pool.
- Envoy Filters: Sidecars rewrite headers with their own default values, so the headers from back-end services are lost. Envoy Filters are required to pass the headers from back-end services through unchanged, for example, the server header.
- Peer Authentication: Used for service-to-service authentication to verify the client making the connection. This template can be used to change the default mTLS mode on the deployment. It allows values such as STRICT, PERMISSIVE, and DISABLE.
- Virtual Service: Defines a set of traffic routing rules to apply when a host is addressed. Each routing rule defines matching criteria for the traffic of a specific protocol. If the traffic is matched, then it is sent to a named destination service (or subset or version of it) defined in the registry.
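For reference, these templates render into standard Istio resources. The following is a generic Istio VirtualService sketch, using illustrative names rather than the Oracle template values, showing how a routing rule with retries disabled looks:

```yaml
# Generic Istio example (illustrative names, not the Oracle template):
# a VirtualService that routes all "/" traffic to a service and disables retries.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: no-istio-retries-example
  namespace: seppsvc          # illustrative namespace
spec:
  exportTo:
  - "."
  hosts:
  - example-svc               # illustrative service host
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        host: example-svc
        port:
          number: 8080
    retries:
      attempts: 0             # disable Istio retries for this route
```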
Service Mesh Configuration File
The following fields are supported in each CRD:
- Service Entry
- hosts
- exportTo
- location
- addresses
- ports.name
- ports.number
- ports.protocol
- Destination Rule
- host
- mode
- name
- exportTo
- Envoy Filters
- labelselector
- applyTo
- filtername
- operation
- typeconfig
- configkey
- configvalue
- Peer Authentication
- name
- labelselector
- tlsmode
- Virtual Service
- name
- prefix
- host
- destinationhost
- port
- exportTo
- attempts
For more information about the CRDs and the parameters, see Aspen Service Mesh.
A sample ocsepp_servicemesh_config_custom_values_<version>.yaml is available in the Custom_Templates file. For downloading the file, see Customizing SEPP.
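As a hedged illustration of how the supported fields fit together, the following sketch shows a destination rule and a peer authentication entry using the field names listed above. The top-level section names are assumptions; refer to the actual ocsepp_servicemesh_config_custom_values_<version>.yaml from the Custom_Templates file for the authoritative layout.

```yaml
# Sketch only: top-level section names are assumptions;
# field names follow the supported-field lists above.
destinationRules:
  - name: ocsepp-db-service-dr
    host: "mysql-connectivity-service.seppsvc.svc.cluster.local"
    mode: DISABLE                  # TLS mode toward the destination
    exportTo: |-
      [ "." ]                      # visible only within this namespace
peerAuthentication:
  - name: ocsepp-peerauthentication
    labelselector: "app.kubernetes.io/part-of: ocsepp"
    tlsmode: PERMISSIVE            # STRICT, PERMISSIVE, or DISABLE
```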
2.2.1.11.1 Predeployment Configurations
This section explains the predeployment configuration procedure to install SEPP with Service Mesh support.
Note:
From Release 25.2.200 onwards, SEPP ASM is only supported with DB services deployed in either non-ASM or one side ASM modes.
Prerequisites
- Run the following command to verify the certificateCustomFields parameter value in the ASM deployed namespace. This parameter must be set to true.
Example:
kubectl describe cm istio-sidecar-injector -n <namespace in which ASM is deployed> | grep "certificateCustomFields"
Output:
"certificateCustomFields": true,
- If this parameter is set to false, update the ASM charts to set it to true and perform an upgrade:
./manifests/charts/istio-control/istio-discovery/values.yaml
certificateCustomFields: true
- Run the following command to verify that istio-base and istiod are installed in the cluster.
Example:
helm ls -n istio-system
NAME        NAMESPACE     REVISION  UPDATED                                  STATUS    CHART              APP VERSION
istio-base  istio-system  1         2024-12-20 10:31:27.240210738 +0000 UTC  deployed  base-1.14.6-am1    1.14.6-am1
istiod      istio-system  1         2024-12-20 10:32:45.905279498 +0000 UTC  deployed  istiod-1.14.6-am1  1.14.6-am1
Predeployment Configuration
Follow the predeployment configuration procedure as mentioned below:
- Creating SEPP namespace
  - Run the following command to verify if the required namespace already exists in the system:
    kubectl get namespaces
  - In the output of the above command, check if the required namespace is available. If not available, create the namespace using the following command:
    kubectl create namespace <namespace>
    Where, <namespace> is the SEPP namespace.
    Example:
    kubectl create namespace seppsvc
SEPP Specific Changes
In the ocsepp_custom_values_<version>.yaml file, make the following changes:
- Modify the serviceMeshCheck flag from false to true in all the sections.
- In the PLMN Ingress Gateway section, do the following:
  - Change initssl to false.
  - Change enableIncomingHttp to true.
  - Change enableIncomingHttps to false.
- In the PLMN Egress Gateway section, do the following:
  - Change initssl to false.
  - Change enableOutgoingHttps to false.
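As a hedged sketch, assuming typical gateway section names in ocsepp_custom_values_<version>.yaml (the exact key paths depend on the chart layout and are assumptions here), the changes above would look similar to:

```yaml
# Sketch only: section names and key paths are assumptions; match them to your chart.
global:
  serviceMeshCheck: true           # changed from false; repeat wherever the flag appears
plmn-ingress-gateway:
  initssl: false                   # TLS is handled by the Istio sidecar instead
  enableIncomingHttp: true
  enableIncomingHttps: false
plmn-egress-gateway:
  initssl: false
  enableOutgoingHttps: false       # outgoing TLS is handled by the mesh
```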
2.2.1.11.2 Installation of ASM Configuration Charts
In the
ocsepp_servicemesh_config_custom_values_<version>.yaml
file, do the following changes:
- Create a destination rule to establish a connection between OCSEPP and cnDBTier. A sample template is given below:
Note:
- If the DB does not have Istio sidecar injection, then create the destination rule.
- For the one side ASM feature, a destination rule has to be created for the alternate route service.
- If the user is running ATS test cases, then a destination rule has to be created for the pn32f service; otherwise, skip this step.
Destination Rules (for DB):
- host: "<db-service-fqdn>.<db-namespace>.svc.<domain>"
  mode: DISABLE
  name: ocsepp-db-service-dr
  exportTo: |-
    [ "." ]
  namespace: seppsvc
Destination Rules (for Alternate Route service):
- host: "<ocsepp-release-name>-alternate-route.<namespace>.svc.<domain>"
  mode: DISABLE
  name: ocsepp-ar-svc-dr
  exportTo: |-
    [ "." ]
Destination Rules (for pn32f, this is ATS specific):
- host: "<ocsepp-release-name>-pn32f-svc.<namespace>.svc.<domain>"
  mode: DISABLE
  name: ocsepp-pn32f-svc-dr
  exportTo: |-
    [ "." ]
where,
- host is the complete hostname of the DB service, pn32f service, or alternate route service. For example, mysql-connectivity-service.seppsvc.occne-24-2-cluster-user1.
- <ocsepp-release-name> is the Helm release name used while deploying SEPP.
- namespace is the namespace in which SEPP will be deployed.
The rendered DestinationRule resource is as follows:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ocsepp-db-service-dr
  namespace: <ocsepp-namespace>
spec:
  exportTo:
  - "."
  host: "<db-service-fqdn>.<db-namespace>.svc.<domain>" # Example: mysql-connectivity-service.seppsvc.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
- Modify the service entry in pod networking so that the pods can access the Kubernetes API server. Update the following parameter values:
- hosts
- addresses
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: kube-api-server
  namespace: <ocsepp-namespace>
spec:
  hosts:
  - kubernetes.default.svc.<domain> # domain can be extracted using: kubectl -n kube-system get configmap kubeadm-config -o yaml | grep -i dnsDomain
  exportTo:
  - "."
  addresses:
  - <20.96.0.1> # cluster IP of the Kubernetes API server, can be extracted using: kubectl get svc -n default
  location: MESH_INTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: NONE
- PeerAuthentication is created for the namespace with the default mTLS mode as PERMISSIVE. To change it, modify the following parameter to STRICT or DISABLE:
tlsmode: STRICT
A sample template is as follows:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: ocsepp-peerauthentication
spec:
  selector:
    matchLabels:
      app.kubernetes.io/part-of: ocsepp
  mtls:
    mode: PERMISSIVE
Note:
- After the successful deployment, you can change the PeerAuthentication mtls mode to STRICT from PERMISSIVE and perform Helm upgrade.
- Ensure that spec.selector.matchLabels retains the app.kubernetes.io/part-of: ocsepp value. Do not change it.
- Optional: Uncomment the following ServiceEntry section if deploying OCSEPP in ASM mode (only for ATS):
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: stub-serviceentry
  namespace: seppsvc
spec:
  exportTo:
  - '*'
  hosts:
  - '*.svc.cluster.local'
  - '*.3gppnetwork.org'
  location: MESH_INTERNAL
  ports:
  - name: http2
    number: 8080
    protocol: HTTP2
  resolution: NONE
- The Envoy filter is deployed with the following default configuration:
envoyFilters_v_19x_111x:
  - name: serverheaderfilter
    configpatch:
    - applyTo: NETWORK_FILTER
      filtername: envoy.filters.network.http_connection_manager
      operation: MERGE
      typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
      configkey: server_header_transformation
      configvalue: PASS_THROUGH
- Run the following command to verify that all CRDs are installed:
kubectl get <CRD-Name> -n <Namespace>
Where,
- CRD-Name is the resource name.
- Namespace is the namespace in which SEPP is deployed.
Example:
kubectl get se,dr,peerauthentication,envoyfilter,vs -n seppsvc
Note:
Any modification to the existing CRDs or adding CRDs can be done by updating the ocsepp_servicemesh_config_custom_values_<version>.yaml file and running a Helm upgrade.
- Run the following command to install ASM specific resources in your namespace:
helm install -f ocsepp_servicemesh_config_custom_values_<version>.yaml <release-name> ocsepp-servicemesh-config-<version>.tgz --namespace <sepp namespace>
Example:
helm install -f ocsepp_servicemesh_config_custom_values_25.2.200.yaml ocsepp-servicemesh ocsepp-servicemesh-config-25.2.200.tgz --namespace seppsvc
2.2.1.11.3 Deploying SEPP with ASM
- Create a namespace label for auto sidecar injection to automatically add the sidecars in all the pods spawned in the SEPP namespace:
kubectl label ns <ocsepp-namespace> istio-injection=enabled
Example:
kubectl label ns seppsvc istio-injection=enabled
- Run the following command to verify that the label is applied on the namespace:
kubectl describe ns seppsvc
Output:
Name:        seppsvc
Labels:      istio-injection=enabled
             kubernetes.io/metadata.name=seppsvc
Annotations: <none>
Status:      Active
- Update ocsepp_custom_values_<version>.yaml with the following:
  - Update the following sidecar resource configuration in the allResources section of customExtension in the global section:
    - sidecar.istio.io/proxyCPULimit: "2"
    - sidecar.istio.io/proxyMemoryLimit: 1Gi
    - sidecar.istio.io/proxyCPU: 200m
    - sidecar.istio.io/proxyMemory: 128Mi
customExtension:
  allResources:
    labels: {}
    annotations:
      sidecar.istio.io/proxyCPULimit: "2"
      sidecar.istio.io/proxyMemoryLimit: 1Gi
      sidecar.istio.io/proxyCPU: 200m
      sidecar.istio.io/proxyMemory: 128Mi
  lbServices:
    labels: {}
    annotations: {}
  lbDeployments:
    labels: {}
    annotations: {}
  nonlbServices:
    labels: {}
    annotations: {}
  nonlbDeployments:
    labels: {}
    annotations: {}
  - To scrape metrics from SEPP pods, add the oracle.com/cnc: "true" annotation under the lbDeployments and nonlbDeployments sections of customExtension in the global section:
Note:
This step is required only if OSO is deployed.
customExtension:
  allResources:
    labels: {}
    annotations: {}
  lbServices:
    labels: {}
    annotations: {}
  lbDeployments:
    labels: {}
    annotations:
      oracle.com/cnc: "true"
  nonlbServices:
    labels: {}
    annotations: {}
  nonlbDeployments:
    labels: {}
    annotations:
      oracle.com/cnc: "true"
- To deploy and use SEPP with ASM, ensure that the ocsepp_custom_values_<version>.yaml file is used while performing helm install or upgrade. The file must have all the necessary changes mentioned in the Predeployment Configurations section for deploying SEPP with ASM.
2.2.1.11.4 Postdeployment Configuration
This section explains the postdeployment configurations.
Note:
The following steps are not required if SEPP is deployed only in ATS mode.
- In the ocsepp_servicemesh_config_custom_values_<version>.yaml file, uncomment the following sections and add values to disable Istio retries on 503 response code for each microservice as needed:
#Uncomment the below virtualservices to disable istio retries when 503 is received. User can add for services as per below templates.
#NOTE: Replace <ocsepp-release-name> with OCSEPP release name
- name: no-istio-retries-for-plmn-ingress-gateway
  prefix: "/"
  host: <ocsepp-release-name>-plmn-ingress-gateway
  destinationhost: <ocsepp-release-name>-plmn-ingress-gateway
  port: 80
  attempts: "0"
  exportTo: |-
    [ "." ]
- name: no-istio-retries-for-plmn-egress-gateway
  prefix: "/"
  host: <ocsepp-release-name>-plmn-egress-gateway
  destinationhost: <ocsepp-release-name>-plmn-egress-gateway
  port: 8080
  attempts: "0"
  exportTo: |-
    [ "." ]
- name: no-istio-retries-for-config-mgr-svc
  host: <ocsepp-release-name>-config-mgr-svc
  destinationhost: <ocsepp-release-name>-config-mgr-svc
  attempts: "0"
  exportTo: |-
    [ "." ]
- The above changes must be made on C-SEPP as well as on P-SEPP. Ensure that correct values are populated on both sides.
- Run the following command to apply these changes:
helm upgrade -f ocsepp_servicemesh_config_custom_values_<version>.yaml <release-name> ocsepp-servicemesh-config-<version>.tgz --namespace <ns>
Example:
helm upgrade -f ocsepp_servicemesh_config_custom_values_25.2.200.yaml ocsepp-servicemesh ocsepp-servicemesh-config-25.2.200.tgz --namespace seppsvc
Enable Inter-NF Communication
For every new NF participating in call flows when SEPP is a client, a DestinationRule and a ServiceEntry must be created in the SEPP namespace to enable communication. The following is the inter-NF communication with SEPP:
- SEPP to NRF communication (for registration and heartbeat): Create CRDs as mentioned in the above step.
OSO Deployment
- If OSO is deployed with service mesh, add this annotation in the OSO ocoso_csar_vzw_<release-number>_prom_custom_values.yaml file to exclude outbound ports of all the SEPP services.
Example:
traffic.sidecar.istio.io/excludeOutboundPorts: 9090, 9093, 9094, 8085, 9091, 8091, 9000, 8081
Note:
This is applicable only when OSO is deployed with an Istio sidecar.
2.2.1.11.5 Deleting Service Mesh
This section describes the steps to disable or delete the service mesh.
To disable the service mesh, run the following command:
kubectl label --overwrite namespace seppsvc istio-injection=disabled
To verify if the service mesh is disabled, run the following command:
kubectl get se,dr,peerauthentication,envoyfilter,vs -n seppsvc
To delete the service mesh configuration, run the following command:
helm delete <helm-release-name> -n <namespace>
Where,
- <helm-release-name> is the release name used by the Helm command. This release name must be the same as the release name used for the service mesh configuration.
- <namespace> is the deployment namespace used by the Helm command.
Example:
helm delete ocsepp-servicemesh -n seppsvc
Note:
The changes due to disabling the service mesh are reflected only after SEPP is redeployed.
2.2.1.12 Configuring Network Policies
Kubernetes network policies allow you to define ingress or egress rules based on Kubernetes resources such as Pod, Namespace, IP, and Port. These rules are selected based on Kubernetes labels in the application. These network policies enforce access restrictions for all the applicable data flows except communication from Kubernetes node to pod for invoking container probe.
Note:
Configuring network policies is an optional step. Based on the security requirements, network policies may or may not be configured.
For more information on network policies, see https://kubernetes.io/docs/concepts/services-networking/network-policies/.
Note:
- If the traffic is blocked or unblocked between the pods even after applying network policies, check if any existing policy is impacting the same pod or set of pods that might alter the overall cumulative behavior.
- If the default ports of services such as Prometheus, Database, or Jaeger are changed, or if the Ingress or Egress Gateway names are overridden, update them in the corresponding network policies.
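For orientation, the following is a minimal, generic Kubernetes NetworkPolicy of the kind shipped in the custom values file; the label and namespace values here are illustrative, not the SEPP defaults:

```yaml
# Generic Kubernetes NetworkPolicy sketch; label and namespace values are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-prometheus-example
  namespace: seppsvc
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/part-of: ocsepp           # target SEPP pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring  # assumed Prometheus namespace
    ports:
    - protocol: TCP
      port: 9090                                   # assumed metrics port
```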
Configuring Network Policies
The following are the various operations that can be performed for network policies:
2.2.1.12.1 Installing Network Policies
Prerequisite
Network Policies are implemented by using the network plug-in. To use network policies, you must use a networking solution that supports Network Policy.
Note:
For a fresh installation, it is recommended to install Network Policies before installing SEPP. However, if SEPP is already installed, you can still install the Network Policies.
To install network policy:
- Open the
ocsepp_network_policies_custom_values_<version>.yamlfile provided in the release package zip file. For downloading the file, see Downloading SEPP Package, Pushing the SEPP Images to Customer Docker Registry, and Pushing the Roaming Hub or Hosted SEPP Images to Customer Docker Registry. - The file is provided with the default network policies. If required,
update the
ocsepp_network_policies_custom_values_<version>.yamlfile. For more information on the parameters, see the Configuration Parameters for network policy parameter table.Note:
- To run ATS, uncomment the following policies
from
ocsepp_network_policies_custom_values_<version>.yamlfile:- allow-ingress-traffic-to-notification
- allow-egress-ats
- allow-ingress-ats
- To connect with CNC Console, update the below parameter
in the
allow-ingress-from-consolepolicy in theocsepp_network_policies_custom_values_<version>.yamlfile:kubernetes.io/metadata.name: <namespace in which CNCC is deployed>
- To copy messages from plmn-ingress-gateway and
n32-ingress-gateway to kafka broker in Data Director, update the
below parameter in the
allow-egress-to-data-director-from-igwpolicy in theocsepp_network_policies_custom_values_<version>.yamlfile:kubernetes.io/metadata.name: <namespace in which kafka broker is present>
- In
allow-ingress-prometheusandallow-egress-to-prometheuspolicies,kubernetes.io/metadata.nameparameter must contain the value for the namespace where Prometheus is deployed, andapp.kubernetes.io/nameparameter value should match the label from Prometheus pod.
- The following network policies require modification for ASM deployment. The required modifications are mentioned in the comments in the ocsepp_network_policies_custom_values_<version>.yaml file. Update the policies as per the comments.
  - allow-ingress-sbi-n32-igw
  - allow-ingress-sbi-plmn-igw
- Run the following command to install the network policies:
helm install <helm-release-name> <network-policy>/ -n <namespace> -f <custom-value-file>
For example:
helm install ocsepp-network-policy ocsepp-network-policy-25.2.200/ -n seppsvc -f ocsepp_network_policies_custom_values_25.2.200.yaml
Where,
- helm-release-name: ocsepp-network-policy Helm release name.
- custom-value-file: ocsepp-network-policy custom values file.
- namespace: SEPP namespace.
- network-policy: location where the network-policy package is stored.
Note:
- Connections that were created before installing network policy and still persist are not impacted by the new network policy. Only the new connections would be impacted.
- If you are using ATS suite along with network policies, it is required to install the SEPP and ATS in the same namespace.
2.2.1.12.2 Upgrading Network Policies
To add, delete, or update network policy:
- Modify the
ocsepp_network_policies_custom_values_<version>.yamlfile to update, add, and delete the network policy. - Run the following command to upgrade the network policies:
helm upgrade <helm-release-name> <network-policy>/ -n <namespace> -f <custom-value-file>
For Example:
helm upgrade ocsepp-network-policy
ocsepp-network-policy-<version>/ -n seppsvc -f
ocsepp_network_policies_custom_values_<version>.yaml
Where,
- helm-release-name: ocsepp-network-policy Helm release name.
- custom-value-file: ocsepp-network-policy custom values file.
- namespace: SEPP namespace.
- network-policy: location where the network-policy package is stored.
2.2.1.12.3 Verifying Network Policies
Run the following command to verify that the network policies have been applied successfully:
kubectl get networkpolicies -n <namespace>
For example:
kubectl get networkpolicies -n seppsvc
Where,
- namespace: SEPP namespace.
NAME POD-SELECTOR AGE
allow-egress-database app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway) 2m35s
allow-egress-dns app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway) 2m35s
allow-egress-jaeger app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway) 2m35s
allow-egress-k8-api app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway) 2m35s
allow-egress-to-prometheus app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway) 7s
allow-egress-to-sepp-pods app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway) 2m35s
allow-ingress-from-console app.kubernetes.io/name=config-mgr-svc 2m35s
allow-ingress-from-sepp-pods app.kubernetes.io/part-of=ocsepp 2m35s
allow-ingress-prometheus app.kubernetes.io/part-of=ocsepp 2m35s
allow-ingress-sbi-n32-igw app.kubernetes.io/name=n32-ingress-gateway 2m35s
allow-ingress-sbi-plmn-igw app.kubernetes.io/name=plmn-ingress-gateway 2m35s
deny-egress-all-except-egw app.kubernetes.io/name notin (n32-egress-gateway,plmn-egress-gateway) 2m35s
deny-ingress-all app.kubernetes.io/part-of=ocsepp 2m35s
2.2.1.12.4 Uninstalling Network Policies
- Run the following command to uninstall the network policies:
helm uninstall <helm-release-name> -n <namespace>
For Example:
helm uninstall ocsepp-network-policy -n seppsvc
Note:
While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.
2.2.1.12.5 Configuration Parameters for Network Policies
This section includes information about the supported Kubernetes resource and configuration parameters for configuring Network Policies.
Table 2-29 Supported Kubernetes Resource for Configuring Network Policies
| Parameter | Description | Details |
|---|---|---|
| apiVersion | This is a mandatory parameter. Specifies the Kubernetes API version for access control. Note: This is the supported API version for network policy. This is a read-only parameter. | DataType: String. Default Value: networking.k8s.io/v1 |
| kind | This is a mandatory parameter. Represents the kind of REST resource this object represents. Note: This is a read-only parameter. | DataType: String. Default Value: NetworkPolicy |
Table 2-30 Configuration Parameters for Network Policies
| Parameter | Description | Details |
|---|---|---|
| metadata.name | This is a mandatory parameter. Specifies a unique name for the network policy. | DataType: String. Default Value: {{ .metadata.name }} |
| spec.{} | This is a mandatory parameter. Consists of all the information needed to define a particular network policy in the given namespace. Note: SEPP supports the spec parameters defined in the Kubernetes Resource Category. | DataType: Object. Default Value: NA |
For more information about this functionality, see "Network Policies" in the Cloud Native Core, Security Edge Protection Proxy User Guide.
2.2.1.13 Configuring Traffic Segregation
This section provides information on how to configure Traffic Segregation in SEPP. For a description of the "Traffic Segregation" feature, see the "Traffic Segregation" section in the "SEPP Supported Features" chapter of Oracle Communications Cloud Native Core, Security Edge Protection Proxy User Guide.
Various networks can be created at the time of CNE cluster installation. The following can be customized at the time of cluster installation using the cnlb.ini file provided as part of the CNE installation:
- Number of network pools
- Number of Egress IPs
- Number of Service IPs/Ingress IPs
- External IPs/subnet
For more information, see Oracle Communications Cloud Native Core, Cloud Native Environment User Guide.
To use one or multiple interfaces, you must configure annotations in the
deployment.customExtension.annotations parameter of the
ocsepp_custom_values_<version>.yaml file.
Configuration at Ingress Gateway
Use the following annotation to configure network segregation at
ingress-side in ocsepp_custom_values_<version>.yaml:
Annotation for a single interface
k8s.v1.cni.cncf.io/networks: default/<network interface>@<network interface>
oracle.com.cnc/cnlb: '[{"backendPortName": "<igw port name>", "cnlbIp": "<external IP>","cnlbPort":"<port number>"}]'
Here,
- k8s.v1.cni.cncf.io/networks: Contains all the network attachment information the pod uses for network segregation.
- oracle.com.cnc/cnlb: Defines the service IP and port configurations that the deployment uses for ingress load balancing.
Where,
- cnlbIp is the front-end IP used by the application.
- cnlbPort is the front-end port used in conjunction with the CNLB IP for load balancing.
- backendPortName is the backend port name of the container that needs load balancing, retrievable from the deployment or pod spec of the application.
Sample annotation for a single interface:
k8s.v1.cni.cncf.io/networks: default/nf-sig1-int8@nf-sig1-int8
oracle.com.cnc/cnlb: '[{"backendPortName": "igw-port", "cnlbIp": "10.123.155.16","cnlbPort":"80"}]'
Sample annotation with multiple backend ports:
k8s.v1.cni.cncf.io/networks: default/nf-oam-int5@nf-oam-int5
oracle.com.cnc/cnlb: '[{"backendPortName": "query", "cnlbIp": "10.75.180.128","cnlbPort": "80"}, {"backendPortName": "admin", "cnlbIp": "10.75.180.128", "cnlbPort":"16687"}]'
In the above example, each item in the list refers to a different backend port name with the same CNLB IP, but the front-end ports are distinct.
ports:
- containerPort: 16686
  name: query
  protocol: TCP
- containerPort: 16687
  name: admin
  protocol: TCP
Configuration at Egress Gateway
Use the following annotation to configure network segregation at
egress-side in ocsepp_custom_values_<version>.yaml:
Annotation for a single interface:
k8s.v1.cni.cncf.io/networks: default/nf-sig-egr1@nf-sig-egr1
Annotation for multiple interfaces:
k8s.v1.cni.cncf.io/networks: default/nf-oam-egr1@nf-oam-egr1,default/nf-sig-egr1@nf-sig-egr1
Note:
- The network attachments are deployed as a part of cluster installation only.
- The network attachment name must be unique for all the pods.
For information about the above mentioned annotations, see "Configuring Cloud Native Load Balancer (CNLB)" in Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
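Putting the pieces together, the following is a hedged sketch of how these annotations could sit under the deployment.customExtension.annotations parameter in ocsepp_custom_values_<version>.yaml; the exact nesting under each gateway section is chart-dependent and assumed here:

```yaml
# Sketch only: the nesting under each gateway section is an assumption.
plmn-ingress-gateway:
  deployment:
    customExtension:
      annotations:
        k8s.v1.cni.cncf.io/networks: default/nf-sig1-int8@nf-sig1-int8
        oracle.com.cnc/cnlb: '[{"backendPortName": "igw-port", "cnlbIp": "10.123.155.16","cnlbPort":"80"}]'
n32-egress-gateway:
  deployment:
    customExtension:
      annotations:
        k8s.v1.cni.cncf.io/networks: default/nf-sig-egr1@nf-sig-egr1
```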
2.2.2 Installation Tasks
This section provides installation procedures to install Security Edge Protection Proxy (SEPP) using Command Line Interface (CLI).
Before installing SEPP, you must complete Prerequisites and Preinstallation Tasks for both the deployment methods.
2.2.2.1 Installing SEPP Package
To install the SEPP package:
- Navigate to the Helm directory which is a part of Files directory
of unzipped csar package. Run the following
command:
cd Files/Helm - Run the following command to verify the SEPP Helm charts in the Helm
directory:
ls
The output must be:
ocsepp-25.2.200.tgz
ocsepp-network-policy-25.2.200.tgz
ocsepp-servicemesh-config-25.2.200.tgz
- Customize the ocsepp_custom_values_25.2.200.yaml file with the required deployment parameters. See the Customizing SEPP section to customize the file. For more information about predeployment parameter configurations, see the Predeployment Configuration tasks.
Note:
Customize the ocsepp_custom_values_25.2.200.yaml file for SEPP deployment. See the Customizing SEPP section for the details of the parameters. Some of the mandatory parameters are:
- dockerRegistry. For example, dockerRegistry: occne-repo-host:5000
- namespace
- mysql.primary.host
- SEPP inter and intra FQDN (nfFqdnRef, viaHeaderSeppViaInterFqdn, viaHeaderSeppViaIntraFqdn, intraPlmnFqdn, sanValues)
Note:
- In case of multisite georedundant setups, configure nfInstanceId uniquely for each SEPP site.
- Ensure that the nfInstanceId configuration in the global section is the same as that in the appProfile section of the NRF client.
- Run the following command to install SEPP:
helm install <helm-release> ocsepp-25.2.200.tgz --namespace <k8s namespace> -f <path to ocsepp_customized_values.yaml>
Example:
helm install ocsepp-release ocsepp-25.2.200.tgz --namespace seppsvc -f ocsepp_custom_values_25.2.200.yaml
Note:
- Ensure the following:
  - <helm-release> must not exceed 20 characters.
  - namespace name is the deployment namespace used by the helm command.
  - custom_values.yaml file name is the name of the custom values yaml file (including location).
- Timeout duration: Timeout duration is an optional parameter that can be used in the helm install command (for example, --timeout 10m). If it is not specified, the default value is 5m (5 minutes) in Helm 3. It sets the time to wait for any individual Kubernetes operation (such as Jobs for hooks). If the helm install command fails at any point to create a Kubernetes object, it internally calls a purge to delete the release after the timeout value (default: 300s). The timeout value is not for the overall installation but for the automatic purge on installation failure.
- In a georedundant deployment, if you want to add or remove a site, refer to Adding a Site to an Existing SEPP Georedundant Site and Removing a Site from an Existing Georedundant Deployment.
Caution:
Do not exit from the helm install command manually. After running the helm install command, it takes some time to install all the services. In the meantime, you must not press "Ctrl+C" to come out of the helm install command. Doing so leads to anomalous behavior.
2.2.2.2 Installing Roaming Hub and Hosted SEPP
This section describes how to install Roaming Hub and Hosted SEPP in the Cloud Native Environment.
Note:
This is applicable only for Roaming Hub and Hosted SEPP installation.- Navigate to the Helm directory which is a part of Files directory
of unzipped csar package. Run the following
command:
cd Files/Helm - Run the following command to verify the SEPP Helm charts in the Helm
directory:
ls
The output must be:
ocsepp-25.2.200.tgz
ocsepp-network-policy-25.2.200.tgz
ocsepp-servicemesh-config-25.2.200.tgz
- Customize the ocsepp_custom_values_25.2.200.yaml file with the required deployment parameters. See the Customizing SEPP section to customize the file. For more information about predeployment parameter configurations, see the Predeployment Configuration tasks. For more details about enabling and configuring Hosted SEPP mode, refer to the "Hosted SEPP" section of Oracle Communications Cloud Native Core, Security Edge Protection Proxy User Guide.
- Run the following command to install SEPP:
helm install <helm-release> ocsepp-25.2.200.tgz --namespace <k8s namespace> -f <path to ocsepp_custom_values_roaming_hub_25.2.200.yaml>
Example:
helm install ocsepp-release ocsepp-25.2.200.tgz --namespace seppsvc -f ocsepp_custom_values_roaming_hub_25.2.200.yaml
Note:
- Ensure the following:
  - <helm-release> must not exceed 20 characters.
  - namespace name is the deployment namespace used by the helm command.
  - custom_values.yaml file name is the name of the custom values yaml file (including location).
- Timeout duration: Timeout duration is an optional parameter that can be used in the helm install command (for example, --timeout 10m). If it is not specified, the default value is 5m (5 minutes) in Helm 3. It sets the time to wait for any individual Kubernetes operation (such as Jobs for hooks). If the helm install command fails at any point to create a Kubernetes object, it internally calls a purge to delete the release after the timeout value (default: 300s). The timeout value is not for the overall installation but for the automatic purge on installation failure.
- In a georedundant deployment, if you want to add or remove a site, refer to Adding a Site to an Existing SEPP Georedundant Site and Removing a Site from an Existing Georedundant Deployment.
Caution:
Do not exit from the helm install command manually. After running the helm install command, it takes some time to install all the services. In the meantime, you must not press "Ctrl+C" to come out of the helm install command. Doing so leads to anomalous behavior.
2.2.3 Postinstallation Tasks
This section explains the postinstallation tasks for SEPP.
2.2.3.1 Verifying Installation
To verify the installation:
- To verify the deployment status, open a new terminal and run the following command:
  `$ watch kubectl get pods -n <SEPP namespace>`
  The pod status is updated at regular intervals.
- Run the following command to verify the installation status:
  `helm status <helm-release> -n <SEPP namespace>`
  Example:
  `helm status ocsepp-release -n seppsvc`
  Where,
  - `<helm-release>` is the Helm release name of SEPP.
  - `<SEPP namespace>` is the namespace of the SEPP deployment.
  If the deployment is successful, the status is displayed as `deployed`.
  Sample output:
  NAME: ocsepp-release
  LAST DEPLOYED: Sat Jan 11 20:08:03 2025
  NAMESPACE: seppsvc
  STATUS: deployed
  REVISION: 1
- Run the following command to check the status of the services:
  `kubectl -n <SEPP namespace> get services`
  Example:
  `kubectl -n seppsvc get services`
- Run the following command to check the status of the pods:
  `$ kubectl get pods -n <SEPP namespace>`
  The value in the `STATUS` column of all the pods must be `Running`.
  The value in the `READY` column of all the pods must be n/n, where n is the number of containers in the pod.
  Example:
  `$ kubectl get pods -n seppsvc`
  NAME                                                           READY   STATUS    RESTARTS   AGE
  ocsepp-release-appinfo-55b8d4f687-wqtgj                        1/1     Running   0          141m
  ocsepp-release-cn32c-svc-64cd9c555c-ftd8z                      1/1     Running   0          113m
  ocsepp-release-cn32f-svc-dd886fbcc-xr2z8                       1/1     Running   0          4m4s
  ocsepp-release-config-mgr-svc-6c8ddf4c4f-lb4zj                 1/1     Running   0          141m
  ocsepp-release-n32-egress-gateway-5b575bbf5f-z5bbx             2/2     Running   0          131m
  ocsepp-release-n32-ingress-gateway-76874c967b-btp46            2/2     Running   0          131m
  ocsepp-release-ocpm-config-65978858dc-t4t5k                    1/1     Running   0          141m
  ocsepp-release-performance-67d76d9d58-llwmt                    1/1     Running   0          141m
  ocsepp-release-plmn-egress-gateway-6dc4759cc7-wn6r8            2/2     Running   0          31m
  ocsepp-release-plmn-ingress-gateway-56c9b45658-hfcxx           2/2     Running   0          131m
  ocsepp-release-pn32c-svc-57774fdc4-2qpvx                       1/1     Running   0          141m
  ocsepp-release-pn32f-svc-586cd87c7b-pxk6m                      1/1     Running   0          3m47s
  ocsepp-release-sepp-nrf-client-nfdiscovery-65747884cd-qblqn    1/1     Running   0          141m
  ocsepp-release-sepp-nrf-client-nfmanagement-5dd6ff98d6-cr7s7   1/1     Running   0          141m
  ocsepp-release-nf-mediation-74bd4dc799-d9ks2                   1/1     Running   0          141m
  ocsepp-release-coherence-svc-54f7987c4b-wv4h7                  1/1     Running   0          141m
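The READY and STATUS checks above can be scripted; the sketch below parses `kubectl get pods` output and flags pods that are not fully ready (the `check_pods` helper and the sample listing are illustrative, not part of the SEPP package):

```shell
#!/bin/sh
# Read `kubectl get pods` output on stdin and report pods whose STATUS is not
# Running or whose READY column is not n/n; exits non-zero if any pod fails.
check_pods() {
  awk 'NR > 1 {
    split($2, ready, "/")
    if ($3 != "Running" || ready[1] != ready[2]) {
      print "NOT READY: " $1 " (" $2 " " $3 ")"
      bad = 1
    }
  }
  END { exit bad ? 1 : 0 }'
}

# Against a live cluster:  kubectl get pods -n seppsvc | check_pods
# Here, a sample listing with one pod still starting:
check_pods <<'EOF' || echo "some pods are not ready yet"
NAME                                      READY   STATUS    RESTARTS   AGE
ocsepp-release-appinfo-55b8d4f687-wqtgj   1/1     Running   0          141m
ocsepp-release-n32-ingress-gateway-btp46  1/2     Running   0          1m
EOF
```

The non-zero exit status makes the same check usable from automation, for example as a post-install gate.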
Note:
- Take a backup of the following files that are required during fault recovery:
  - Updated `ocsepp_custom_values_<version>.yaml` file
  - Updated Helm charts
  - Secrets, certificates, and keys that are used during installation
- If the installation is not successful or you do not see the status as Running for all the pods, perform the troubleshooting steps provided in Oracle Communications Cloud Native Core, Security Edge Protection Proxy Troubleshooting Guide.
2.2.3.2 Performing Helm Test
This section describes how to perform a sanity check for the SEPP installation through Helm test. The pods to be checked are based on the namespace and label selector configured for the Helm test configurations.
Helm Test is a feature that validates the successful installation of SEPP and determines whether the NF is ready to take traffic.
This test also checks that all the PVCs are in the Bound state under the release namespace and the configured label selector.
Note:
Helm Test can be performed only on Helm3.
Perform the following Helm test procedure:
- Configure the Helm test configurations under the global parameters section of the `ocsepp_custom_values_<version>.yaml` file as follows:

  #helm test configuration
  test:
    imageRepository: occne-repo-host:5000
    nfName: ocsepp
    image:
      name: nf_test
      tag: 25.2.200
      pullPolicy: Always
    config:
      logLevel: INFO
      # Configure timeout in SECONDS.
      # Estimated total time required for SEPP deployment and helm test command completion
      timeout: 240
    resources:
      requests:
        cpu: 1
        memory: 1Gi
        #ephemeral-storage: 70Mi
      limits:
        cpu: 1
        memory: 1Gi
        #ephemeral-storage: 1Gi
    complianceEnable: true
    k8resources:
      - horizontalpodautoscalers/v1
      - deployments/v1
      - configmaps/v1
      - prometheusrules/v1
      - serviceaccounts/v1
      - poddisruptionbudgets/v1
      - roles/v1
      - services/v1
      - rolebindings/v1

- Run the following Helm test command:
  `helm test <helm-release> -n <namespace>`
  Where,
  - `<helm-release>` is the release name.
  - `<namespace>` is the deployment namespace where SEPP is installed.
  Example:
  `helm test ocsepp-release -n seppsvc`
  Sample output:
  [admusr@cnejac0101-bastion-2 ocsepp-22.4.0-0]$ helm test ocsepp-release -n seppsvc
  NAME: ocsepp-release
  LAST DEPLOYED: Fri Aug 19 04:56:36 2022
  NAMESPACE: seppsvc
  STATUS: deployed
  REVISION: 1
  TEST SUITE: ocsepp-release
  Last Started: Fri Aug 19 05:02:03 2022
  Last Completed: Fri Aug 19 05:02:26 2022
  Phase: Succeeded
If the Helm test fails, see Oracle Communications Cloud Native Core, Security Edge Protection Proxy Troubleshooting Guide.
2.2.3.3 Taking the Backup
Take a backup of the following files, which are required during fault recovery:
- Current custom-values.yaml file from which you are upgrading
- Updated `ocsepp_custom_values_<version>.yaml` file
- Updated Helm charts
- Secrets, certificates, and keys that are used during installation
- Updated `ocsepp_servicemesh_config_custom_values_<version>.yaml` file
2.2.3.4 Alert Configuration
This section describes the measurement-based alert rules configuration for SEPP. The Alert Manager uses the Prometheus measurement values reported by the microservices in the conditions defined under the alert rules to trigger alerts.
Note:
The alert file is packaged with the SEPP custom templates. Perform the following steps before configuring the alert file:
- Download the SEPP CSAR package from MOS. For more information, refer to the Downloading SEPP section.
- Unzip the SEPP CSAR package file to get the `ocsepp_alertrules_promha_<version>.yaml` and `ocsepp_alertrules_<version>.yaml` files.
- By default, kubernetes_namespace or namespace is configured as the Kubernetes namespace in which SEPP is deployed. The default value of the Kubernetes namespace is "sepp-namespace". Update it to the namespace in which SEPP is deployed.
- Set the namespace parameter in the `ocsepp_alertrules_promha_<release version>.yaml` file to the SEPP namespace. That is, set `namespace` as `<SEPP Namespace>`.
  Example: namespace="sepp-namespace", where the namespace name is 'sepp-namespace'.
- Set the kubernetes_namespace parameter in the `ocsepp_alertrules_<release version>.yaml` file to the SEPP namespace. That is, set `kubernetes_namespace` as `<SEPP Namespace>`.
  Example: kubernetes_namespace="sepp-namespace", where the kubernetes_namespace name is 'sepp-namespace'.
- Set the deployment parameter in the `ocsepp_alertrules_promha_<release version>.yaml` and `ocsepp_alertrules_<release version>.yaml` files. That is, set `app_kubernetes_io_part_of` as `"<deployment name>"`.
  Example: app_kubernetes_io_part_of="ocsepp", where the deployment name is 'ocsepp'.
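The namespace substitution above can also be made with `sed` rather than a manual edit; a minimal sketch, using a local stand-in for the packaged `ocsepp_alertrules_<version>.yaml` file (the sample expression and the target namespace `seppsvc` are illustrative):

```shell
#!/bin/sh
# Stand-in alert rules file carrying the default labels described above.
cat > /tmp/sample_alertrules.yaml <<'EOF'
expr: up{kubernetes_namespace="sepp-namespace",app_kubernetes_io_part_of="ocsepp"} == 0
EOF

# Replace the default namespace with the actual SEPP deployment namespace.
sed -i 's/kubernetes_namespace="sepp-namespace"/kubernetes_namespace="seppsvc"/g' \
  /tmp/sample_alertrules.yaml

# Confirm the substitution.
grep -o 'kubernetes_namespace="[^"]*"' /tmp/sample_alertrules.yaml
```

The same pattern applies to the `namespace` and `app_kubernetes_io_part_of` substitutions.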
2.2.3.4.1 Configuring Alerts for CNE 1.8.x and Previous Versions
The following procedure describes how to configure the SEPP alerts for CNE version 1.8.x and previous versions:
- Run the following command to find the config map to configure alerts in the Prometheus server:
  `kubectl get configmap -n <Namespace>`
  where <Namespace> is the Prometheus server namespace used in the helm install command.
- Run the following command to take a backup of the current config map of the Prometheus server:
  `kubectl get configmaps <NAME>-server -o yaml -n <Namespace> > /tmp/tempConfig.yaml`
  where <Namespace> is the Prometheus server namespace used in the helm install command.
  For example, assuming the chart name is "prometheus-alert", "<NAME>-server" becomes "prometheus-alert-server". Run the following command to take the backup:
  `kubectl get configmaps prometheus-alert-server -o yaml -n prometheus-alert2 > /tmp/tempConfig.yaml`
the tempConfig.yaml
file:
cat /tmp/t_mapConfig.yaml | grep alertssepp - Run the following command to delete the alertssepp entry from
the
t_mapConfig.yamlfile, if the alertssepp is present:sed -i '/etc\/config\/alertssepp/d' /tmp/t_mapConfig.yaml - Run the following command to add the alertssepp entry in the
t_mapConfig.yaml file, if the alertssepp is not
present:
sed -i '/rule_files:/a\ \- /etc/config/alertssepp' /tmp/t_mapConfig.yaml - Run the following command to reload the config map with the
modifed
file:
kubectl replace configmap <Name> -f /tmp/t_mapConfig.yaml - Run the following command to add seppAlertRules.yaml file into
prometheus config map under filename of SEPP alert file
:
kubectl patch configmap <Name> -n <Namespace> --type merge --patch "$(cat <PATH>/seppAlertRules.yaml)" - Restart prometheus-server pod.
- Verify the alerts in prometheus GUI.
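Because the add and delete steps above only edit a plain YAML file, the `sed` expressions can be rehearsed on a scratch copy before the live config map is replaced; a minimal sketch (the scratch file name and sample content are illustrative):

```shell
#!/bin/sh
# Scratch copy standing in for the Prometheus server config map backup.
cat > /tmp/tempConfig_rehearsal.yaml <<'EOF'
rule_files:
  - /etc/config/alerting_rules.yml
EOF

# Add the alertssepp entry after rule_files: (it is not present yet).
sed -i '/rule_files:/a\ \- /etc/config/alertssepp' /tmp/tempConfig_rehearsal.yaml
grep alertssepp /tmp/tempConfig_rehearsal.yaml

# The same entry can be deleted again with the removal expression.
sed -i '/etc\/config\/alertssepp/d' /tmp/tempConfig_rehearsal.yaml
grep alertssepp /tmp/tempConfig_rehearsal.yaml || echo "alertssepp entry removed"
```

Rehearsing on a scratch file avoids corrupting the real backup at `/tmp/tempConfig.yaml`.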
Note:
Prometheus takes about 20 seconds to apply the updated config map.
2.2.3.4.2 Configuring Alerts for CNE 1.9.x and Higher Versions
The following procedure describes how to configure the SEPP alerts for OCCNE 1.9.x and higher versions:
- Run the following command to apply the Prometheus rules Custom Resource Definition (CRD):
  `kubectl apply -f <file_name> -n <sepp namespace>`
  Where,
  - <file_name> is the SEPP alerts file
  - <sepp namespace> is the SEPP namespace
  Example:
  `$ kubectl apply -f ocsepp_alerting_rules_promha.yaml -n seppsvc`
- Run the following command to check if the SEPP alert file is added to the Prometheus rules:
  `$ kubectl get prometheusrules --namespace <namespace>`
  Example:
  `$ kubectl get prometheusrules --namespace seppsvc`
- Log in to the Prometheus GUI and verify the alerts section.
Note:
The Prometheus server automatically reloads the updated config map after approximately 60 seconds. Refresh the Prometheus GUI to confirm that the SEPP alerts have been reloaded.
2.2.3.4.3 Configuring SEPP Alerts for non-OCCNE Versions
- A new `oso-alr-config` Helm chart is provided as part of the OSO package.
- The `oso-alr-config` Helm chart must be deployed after OSO is installed.
- This separate Helm chart allows the Helm install command to run with or without an input alert file.
- Command and example without an alert file:
  Command:
  `helm install <oso-alr-config-release-name> <ocoso_alr_config_csar_<version>_alert_config_charts.tgz> -f <ocoso_alr_config_csar_<version>_alert_config_custom_values.yaml> -n <oso_namespace>`
  Example:
  `helm install oso-alr-config ocoso_alr_config_csar_<version>_alert_config_charts.tgz -f ocoso_alr_config_csar_<version>_alert_config_custom_values.yaml -n oso`
- Command and example with an alert file:
  Command:
  `helm install <oso-alr-config-release-name> <ocoso_alr_config_csar_<version>_alert_config_charts.tgz> -f <ocoso_alr_config_csar_<version>_alert_config_custom_values.yaml> -f ocsepp_alertrules_<version>.yaml -n <oso_namespace>`
  Example:
  `helm install oso-alr-config ocoso_alr_config_csar_<version>_alert_config_charts.tgz -f ocoso_alr_config_csar_<version>_alert_config_custom_values.yaml -f ocsepp_alertrules_<version>.yaml -n oso`
oso-alr-configHelm chart installation is completed then theoso-alr-configis ready to use. - Run Helm upgrade, if you are enabling this feature after SEPP
deployment. Run the following Helm upgrade command in
oso-alr-configfile to apply SEPP alert file:helm upgrade oso-alr-config ocoso_alr_config_csar_<version>_alert_config_charts.tgz -f ocoso_alr_config_csar_<version>_alert_config_custom_values.yaml -f ocsepp_alertrules_<version>.yaml -n oso - Once the Helm upgrade is completed, you can view the alerts file that is applied to OSO Prometheus ConfigMap. This can be viewed in the Prometheus Graphical User Interface (GUI).
- You can also update the changes in the same alert file and perform a Helm upgrade. The alert file will be updated with the latest changes.
- An empty
ocsepp_empty_alertrules.yamlfile is delivered as part of SEPP software package. You must provide thisocsepp_empty_alertrules.yamlfile during the Helm upgrade. - This
ocsepp_empty_alertrules.yamlfile is used to remove all the alerts using the Helm upgrade command by providingocsepp_empty_alertrules.yamlfile as an input file. This removes the alerts from the OSO Prometheus ConfigMap and Prometheus GUI and keeps the references under rule_files"/etc/config/alertssepp"and the alert rules will be empty"alertssepp: { }". - For example, a sample Helm upgrade command to clean up alert
rules is as
follows:
Command:
Example:$ helm upgrade <oso-alr-config-release-name> <ocoso_alr_config_csar_<version>_alert_config_charts.tgz> -f <ocoso_alr_config_csar_<version>_alert_config_custom_values.yaml> -f ocsepp_empty_alertrules.yaml -n <oso_namespace>$ helm upgrade oso-alr-config ocoso_alr_config_csar_<version>_alert_config_charts.tgz -f ocoso_alr_config_csar_<version>_alert_config_custom_values.yaml -f ocsepp_empty_alertrules.yaml -n oso
apiVersion: v1
data:
  alertssepp: |
    {}

2.2.3.4.4 Configuring Alerts in OCI
The following procedure describes how to configure the SEPP alerts for OCI. OCI supports metric expressions written in MQL (Monitoring Query Language) and therefore requires a new SEPP alert file for configuring alerts in the OCI observability platform.
The following are the steps:
- Run the following command to extract the .zip file:
  `unzip ocsepp_oci_alertrules_<version>.zip`
  The `ocsepp_oci` and `ocsepp_oci_resources` folders are available in the zip file.
  Note:
  The zip file is available in the Scripts folder of the CSAR package.
- Open the `ocsepp_oci` folder; in the `notifications.tf` file, update the parameter `endpoint` with the email ID of the user.
- Open the `ocsepp_oci_resources` folder; in the `notifications.tf` file, update the parameter `endpoint` with the email ID of the user.
- Log in to the OCI Console.
Note:
For more details about logging in to the OCI, refer to Signing In to the OCI Console. - Open the navigation menu and select Developer Services. The Developer Services window appears on the right pane.
- Under the Developer Services, select Resource Manager.
- Under Resource Manager, select Stacks. The Stacks window appears.
- Click Create Stack.
- Select the default My Configuration radio button.
- Under Stack configuration, select the folder radio button and upload the `ocsepp_oci` folder.
- Select the latest Terraform version from the Terraform version drop-down.
- Click Next. The Edit Stack screen appears.
- Enter the required inputs to create the SEPP alerts or alarms and click Save and Run Apply.
- Verify that the alarms are created in the Alarm Definitions screen (OCI Console > Observability & Management > Monitoring > Alarm Definitions).
The required inputs are:
- Alarms Configuration
  - Compartment Name: Choose the name of the compartment from the drop-down list.
  - Metric namespace: The metric namespace that the user provided while deploying the OCI Adaptors.
  - Topic Name: Any user-configurable name. Must contain fewer than 256 characters. Only alphanumeric characters plus hyphens (-) and underscores (_) are allowed.
  - Message Format: Keep it as ONS_OPTIMIZED. (This is pre-populated.)
  - Alarm is_enabled: Keep it as True. (This is pre-populated.)
- Repeat steps 6 to 15 to upload the `ocsepp_oci_resources` folder. Here, the Metric namespace is pre-populated.
For more details, see Oracle Communications Cloud Native Core, OCI Adaptor Deployment Guide.