2 Installing NSSF

This chapter provides information about installing Oracle Communications Cloud Native Core, Network Slice Selection Function (NSSF) in a cloud native environment.

Note:

NSSF supports fresh installation, and it can also be upgraded from 25.1.1xx to 25.1.2xx. For more information on how to upgrade NSSF, see the Upgrading NSSF section.

2.1 Prerequisites

Before installing and configuring NSSF, ensure that the following prerequisites are met.

2.1.1 Software Requirements

This section lists the software that must be installed before installing NSSF.

Note:

Table 2-1 and Table 2-2 in this section offer a comprehensive list of the software necessary for the proper functioning of NSSF during deployment. However, these tables are indicative, and the software used can vary based on the customer's specific requirements and solution.

The Software Requirement column in Table 2-1 and Table 2-2 indicates one of the following:

  • Mandatory: Absolutely essential; the software cannot function without it.
  • Recommended: Suggested for optimal performance or best practices but not strictly necessary.
  • Conditional: Required only under specific conditions or configurations.
  • Optional: Not essential; can be included based on specific use cases or preferences.

Table 2-1 Preinstalled Software Versions

Software NSSF 25.1.2xx NSSF 25.1.1xx NSSF 24.3.x Software Requirement Usage Description
Helm 3.17.1 3.16.2 3.15.2 Mandatory Helm, a package manager, simplifies deploying and managing NFs on Kubernetes with reusable, versioned charts for easy automation and scaling.

Impact:

Preinstallation is required. Without this capability, management of NF versions and configurations becomes time-consuming and error-prone, impacting deployment consistency.

Kubernetes 1.32.0 1.31.1 1.30.0 Mandatory Kubernetes orchestrates scalable, automated NF deployments for high availability and efficient resource utilization.

Impact:

Preinstallation is required. Without orchestration capabilities, deploying and managing network functions (NFs) can become complex, leading to inefficient resource utilization and potential downtime.

Podman 4.9.4 4.9.4 4.6.1 Mandatory Podman is a part of Oracle Linux. It manages and runs containerized NFs without requiring a daemon, offering flexibility and compatibility with Kubernetes.

Impact:

Preinstallation is required. Without efficient container management, the development and deployment of NFs could become cumbersome, impacting agility.

To check the versions of the preinstalled software in the cloud native environment, run the following commands:

kubectl version
helm version 
podman version 

Note:

This guide covers the installation instructions for NSSF when Podman is the container platform and Helm is the package manager. For non-CNE deployments, the operator can use commands based on their deployed container runtime environment. For more information, see Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) Installation, Upgrade, and Fault Recovery Guide.

If you are deploying NSSF in a cloud native environment, the following additional software must be installed before installing NSSF.

Table 2-2 Additional Software Versions

Software 25.1.2xx 25.1.1xx 24.3.x Software Requirement Usage Description
AlertManager 0.28.0 0.27.0 0.27.0 Recommended Alertmanager is a component that works in conjunction with Prometheus to manage and dispatch alerts. It handles the routing and notification of alerts to various receivers.

Impact:

Not implementing alerting mechanisms can lead to delayed responses to critical issues, potentially resulting in service outages or degraded performance.

Calico 3.29.1 3.28.1 3.27.3 Recommended Calico provides networking and security for NFs in Kubernetes with scalable, policy-driven connectivity.

Impact:

CNI is mandatory for the functioning of 5G NFs. Without CNI and a proper plugin, the network could face security vulnerabilities and inadequate traffic management, impacting the reliability of NF communications.

cinder-csi-plugin 1.32.0 1.31.1 1.30.0 Recommended Cinder CSI (Container Storage Interface) plugin is for provisioning and managing block storage in Kubernetes. It is often used in OpenStack environments to provide persistent storage for containerized applications.

Impact:

Cinder CSI Plugin is used in OpenStack vCNE solution. Without this integration, provisioning block storage for NFs could be manual and inefficient, complicating storage management.

containerd 1.7.24 1.7.22 1.7.16 Recommended Containerd manages container lifecycles to run NFs efficiently in Kubernetes.

Impact:

A lack of a reliable container runtime could lead to performance issues and instability in NF operations.

CoreDNS 1.11.3 1.11.1 1.11.1 Recommended CoreDNS is the DNS server in Kubernetes, which provides DNS resolution services within the cluster.

Impact:

DNS is an essential part of deployment. Without proper service discovery, NFs would struggle to communicate with each other, leading to connectivity issues and operational failures.

Fluentd 1.17.1 1.17.1 1.16.2 Recommended Fluentd is an open source data collector that streamlines data collection and consumption, ensuring improved data utilization and comprehension.

Impact:

Not utilizing centralized logging can hinder the ability to track NF activity and troubleshoot issues effectively, complicating maintenance and support.

Grafana 9.5.3 9.5.3 9.5.3 Recommended Grafana is a popular open-source platform for monitoring and observability. It provides a user-friendly interface for creating and viewing dashboards based on various data sources.

Impact:

Without visualization tools, interpreting complex metrics and gaining insights into NF performance would be cumbersome, hindering effective management.

Jaeger 1.65.0 1.60.0 1.60.0 Recommended Jaeger provides distributed tracing for 5G NFs, enabling performance monitoring and troubleshooting across microservices.

Impact:

Not utilizing distributed tracing may hinder the ability to diagnose performance bottlenecks, making it challenging to optimize NF interactions and user experience.

Kyverno 1.13.4 1.12.5 1.12.0 Recommended Kyverno is a Kubernetes policy engine that allows you to manage and enforce policies for resource configurations within a Kubernetes cluster.

Impact:

Without the policy enforcement, there could be misconfigurations, resulting in security risks and instability in NF operations, affecting reliability.

MetalLB 0.14.4 0.14.4 0.14.4 Recommended MetalLB is used as a load balancing solution in CNE, which is mandatory for the solution to work. MetalLB provides load balancing and external IP management for 5G NFs in Kubernetes environments.

Impact:

Without load balancing, traffic distribution among NFs may be inefficient, leading to potential bottlenecks and service degradation.

metrics-server 0.7.2 0.7.2 0.7.1 Recommended Metrics server is used in Kubernetes for collecting resource usage data from pods and nodes.

Impact:

Without resource metrics, auto-scaling and resource optimization would be limited, potentially leading to resource contention or underutilization.

Multus 4.1.3 3.8.0 3.8.0 Recommended Multus enables multiple network interfaces in Kubernetes pods, allowing custom configurations and isolated paths for advanced use cases such as NF deployments, ultimately supporting traffic segregation.

Impact:

Without this capability, connecting NFs to multiple networks could be limited, impacting network performance and isolation.

OpenSearch 2.15.0 2.11.0 2.11.0 Recommended OpenSearch provides scalable search and analytics for 5G NFs, enabling efficient data exploration and visualization.

Impact:

Without a robust analytics solution, there would be difficulties in identifying performance issues and optimizing NF operations, affecting overall service quality.

OpenSearch Dashboard 2.15.0 2.11.0 2.11.0 Recommended OpenSearch dashboard visualizes and analyzes data for 5G NFs, offering interactive insights and custom reporting.

Impact:

Without visualization capabilities, understanding NF performance metrics and trends would be difficult, limiting informed decision making.

Prometheus 3.2.0 2.52.0 2.52.0 Mandatory Prometheus is a popular open source monitoring and alerting toolkit. It collects and stores metrics from various sources and allows for alerting and querying.

Impact:

Not employing this monitoring solution could result in a lack of visibility into NF performance, making it difficult to troubleshoot issues and optimize resource usage.

prometheus-kube-state-metric 2.15.0 2.13.0 2.13.0 Recommended Kube-state-metrics is a service that generates metrics about the state of various resources in a Kubernetes cluster. It's commonly used for monitoring and alerting purposes.

Impact:

Without these metrics, monitoring the health and performance of NFs could be challenging, making it harder to proactively address issues.

prometheus-node-exporter 1.8.2 1.8.2 1.8.2 Recommended Prometheus Node Exporter collects hardware and OS-level metrics from Linux hosts.

Impact:

Without node-level metrics, visibility into infrastructure performance would be limited, complicating the identification of resource bottlenecks.

Prometheus Operator 0.80.1 0.76.0 0.76.0 Recommended The Prometheus Operator is used for managing Prometheus monitoring systems in Kubernetes. Prometheus Operator simplifies the configuration and management of Prometheus instances.

Impact:

Not using this operator could complicate the setup and management of monitoring solutions, increasing the risk of missed performance insights.

rook 1.16.6 1.15.2 1.13.3 Recommended Rook is the Ceph orchestrator for Kubernetes that provides storage solutions. It is used in the Bare Metal (BM) CNE solution.

Impact:

CSI is mandatory for the solution to work. Not utilizing Rook could increase the complexity of deploying and managing Ceph, making it difficult to scale storage solutions in a Kubernetes environment.

snmp-notifier 1.6.1 1.5.0 1.4.0 Recommended snmp-notifier sends SNMP alerts for 5G NFs, providing real-time notifications for network events.

Impact:

Without SNMP notifications, proactive monitoring of NF health and performance could be compromised, delaying response to critical issues.

Velero 1.13.2 1.13.2 1.12.0 Recommended Velero backs up and restores Kubernetes clusters for 5G NFs, ensuring data protection and disaster recovery.

Impact:

Without backup and recovery capabilities, customers would face a risk of data loss and extended downtime, requiring a full cluster reinstall in case of failure or upgrade.

2.1.2 Environment Setup Requirements

This section describes the environment setup requirements for installing NSSF.

2.1.2.1 Client Machine Requirement

This section describes the requirements for the client machine, that is, the machine used by the user to run deployment commands.

The client machine should have:

  • Helm repository configured.
    1. To add a Helm repository, run the following command:
      helm repo add <helm-repo-name> <helm-repo-address>

      Where,

      <helm-repo-name> is the name of the Helm repository.

      <helm-repo-address> is the URL of the Helm repository.

      For example:

      helm repo add ocnssf-helm-repo http://10.75.237.20:8081

    2. To verify that Helm repository has been added successfully, run the following command:
      helm repo list

      The output must show the added Helm repository in the list.

  • network access to the Helm repository and Docker image repository.
  • network access to the Kubernetes cluster.
  • required environment settings to run the kubectl, docker, and podman commands. The environment must have privileges to create a namespace in the Kubernetes cluster.
  • Helm client installed with the push plugin (see the sketch after this list). Configure the environment so that the helm install command deploys the software in the Kubernetes cluster.
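
For reference, the following is a minimal sketch of installing and verifying a Helm push plugin. The ChartMuseum helm-push plugin is used here as an assumption; the plugin that applies depends on the type of Helm repository in use:

helm plugin install https://github.com/chartmuseum/helm-push
helm plugin list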
2.1.2.2 Network Access Requirement

The Kubernetes cluster hosts must have network access to the following repositories:

  • Local Helm repository: It contains the NSSF Helm charts.

    To check if the Kubernetes cluster hosts can access the local Helm repository, run the following command:

    helm repo update
  • Local Docker image repository: It contains the NSSF Docker images.

    To check if the Kubernetes cluster hosts can access the local Docker image repository, pull any image with an image-tag using either of the following commands:

    docker pull <Docker-repo>/<image-name>:<image-tag>
    podman pull <Podman-repo>/<image-name>:<image-tag>
    Where:
    • <Docker-repo> is the IP address or host name of the Docker repository.
    • <Podman-repo> is the IP address or host name of the Podman repository.
    • <image-name> is the Docker image name.
    • <image-tag> is the tag assigned to the Docker image used for the NSSF pod.

    For example:

    docker pull CUSTOMER_REPO/oc-app-info:25.1.201

    podman pull ocnssf-repo-host:5000/ocnssf/oc-app-info:25.1.201

    Note:

    Run the kubectl and helm commands on a system based on the deployment infrastructure. For instance, they can be run on a client machine such as VM, server, local desktop, and so on.
2.1.2.3 Server or Space Requirement

For information about server or space requirements, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

2.1.2.4 CNE Requirement

This section is applicable only if you are installing NSSF on Cloud Native Environment (CNE).

NSSF supports CNE 25.1.2xx, 25.1.1xx, 24.3.x.

To check the CNE version, run the following command:

echo $OCCNE_VERSION

For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

2.1.2.5 cnDBTier Requirement

NSSF supports cnDBTier 25.1.2xx, 25.1.1xx, 24.3.x. cnDBTier must be configured and running before installing NSSF.

To install NSSF with recommended cnDBTier resources, install cnDBTier using the ocnssf_dbtier_25.1.201_custom_values_25.1.201.yaml file provided in the ocnssf-custom-configtemplates-25_1_201_0_0 file. For information about the steps to download ocnssf-custom-configtemplates-25_1_201_0_0 file, see Customizing NSSF.

If you have already installed a version of cnDBTier, run the following command to upgrade your current cnDBTier installation using the ocnssf_dbtier_25.1.201_custom_values_25.1.201.yaml file:

helm upgrade <release-name>  <chart-path> -f <cndb-custom-values.yaml> -n <namespace>

For example:

helm upgrade mysql-cluster occndbtier/ -f ocnssf_dbtier_25.1.201_custom_values_25.1.201.yaml -n nssf-cndb
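
If cnDBTier is not yet installed, a fresh installation follows the same pattern with helm install. The following is a sketch only, assuming the chart is available locally as occndbtier/ and the target namespace already exists; for the complete procedure, see the cnDBTier guide referenced below:

helm install mysql-cluster occndbtier/ -f ocnssf_dbtier_25.1.201_custom_values_25.1.201.yaml -n nssf-cndb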

For more information about cnDBTier installation and upgrade procedure, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

Check These Additional cnDBTier Configurations

Before installing or upgrading your cnDBTier instance, check and perform the following configurations, if not done already:

  • The default value of HeartbeatIntervalDbDb parameter for cnDBTier is 1250. Check the value of HeartbeatIntervalDbDb in the running cnDBTier instance. If the value is not set to 1250, then update the value by following the steps explained in the "Upgrading cnDBTier Clusters" section of the Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
  • Check if the value of the ndb_allow_copying_alter_table parameter is set to 'ON' in the ocnssf_dbtier_25.1.201_custom_values_25.1.201.yaml file. If not, set it to 'ON' before installing NSSF. After the NSSF installation, set the parameter back to its default value, OFF.
  • To ensure Oracle Communications Cloud Native Configuration Console (CNCC) functions properly, update the following cnDBTier parameters in your ocnssf_dbtier_25.1.201_custom_values_25.1.201.yaml file:
    global:
      additionalndbconfigurations:
        ndb:
          MaxNoOfAttributes: 10000
          MaxNoOfOrderedIndexes: 2048
          MaxNoOfTables: 2048
    

    Make sure these values are applied before deploying CNCC to avoid any configuration issues. A combined sketch of these cnDBTier settings is shown after this list.
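
The following is a minimal sketch of how these settings might look together in the cnDBTier custom values file. The CNCC-related parameters and their location are taken from the snippet above; the placement of HeartbeatIntervalDbDb under the ndb block and of ndb_allow_copying_alter_table under a mysqld block are assumptions, so verify the exact parameter paths against the cnDBTier documentation:

    global:
      additionalndbconfigurations:
        ndb:
          HeartbeatIntervalDbDb: 1250          # value expected by NSSF
          MaxNoOfAttributes: 10000             # required for CNCC
          MaxNoOfOrderedIndexes: 2048          # required for CNCC
          MaxNoOfTables: 2048                  # required for CNCC
        mysqld:                                # assumed section for mysqld options
          ndb_allow_copying_alter_table: 'ON'  # set to ON for NSSF installation, revert to OFF afterwards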

2.1.2.6 OSO Requirement

NSSF supports Operations Services Overlay (OSO) 25.1.2xx, 25.1.1xx, 24.3.x for common operation services (Prometheus and components such as alertmanager, pushgateway) on a Kubernetes cluster, which does not have these common services. For more information about OSO installation, see Oracle Communications Cloud Native Core, Operations Services Overlay Installation, Upgrade, and Fault Recovery Guide.

2.1.2.7 CNC Console Requirements

NSSF supports CNC Console 25.1.200 to configure and manage Network Functions. For more information, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.

2.1.2.8 OCCM Requirements
NSSF supports OCCM 25.1.2xx. To support automated certificate lifecycle management, NSSF integrates with Oracle Communications Cloud Native Core, Certificate Management (OCCM) in compliance with 3GPP security recommendations. For more information about OCCM, see the following guides:
  • Oracle Communications Cloud Native Core, Certificate Management Installation, Upgrade, and Fault Recovery Guide
  • Oracle Communications Cloud Native Core, Certificate Management User Guide

2.1.3 Resource Requirement

This section lists the resource requirements to install and run NSSF.

Note:

The performance and capacity of the NSSF system may vary based on the call model, Feature or Interface configuration, and underlying CNE and hardware environment.
2.1.3.1 NSSF Services

The following table lists the resource requirements for NSSF Services:

Table 2-3 NSSF Services

Service Replica Min CPU Max CPU Min Memory Max Memory Min Ephemeral Storage (Mi) Max Ephemeral Storage (Gi)
<helm-release-name>-alternate-route 1 1 2 2Gi 4Gi 80 1
<helm-release-name>-appinfo 1 200m 200m 1Gi 1Gi 80 1
<helm-release-name>-config-server 1 500m 1 1Gi 1Gi 80 1
<helm-release-name>-egress-gateway 2 4 4 4Gi 4Gi 80 1
<helm-release-name>-ingress-gateway 5 6 6 6Gi 6Gi 80 1
<helm-release-name>-ocnssf-nrf-client-nfdiscovery 2 2 2 1Gi 1Gi 80 1
<helm-release-name>-ocnssf-nrf-client-nfmanagement 2 1 1 1Gi 1Gi 80 1
<helm-release-name>-nsauditor 1 500m 2 512Mi 1Gi 80 1
<helm-release-name>-nsavailability 2 4 4 4Gi 4Gi 80 1
<helm-release-name>-nsconfig 1 2 2 2Gi 2Gi 80 1
<helm-release-name>-nsselection 6 6 6 6Gi 6Gi 80 1
<helm-release-name>-nssubscription 1 2 2 1Gi 1Gi 80 1
<helm-release-name>-perf-info 1 2 8 1Gi 1Gi 80 1
  • <helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is "ocnssf", then the nsselection microservice name will be "ocnssf-nsselection".
  • The resources of the init-service container and the Common Configuration Client Hook are not counted because these containers are terminated after initialization completes.
  • Helm Hook Jobs: These are pre and post jobs that are invoked during installation, upgrade, rollback, and uninstallation of the deployment. They are short-lived jobs that terminate after the operation completes.
  • Helm Test Job: This job runs on demand when the helm test command is initiated and stops after completion (see the command sketch after this list). It is not part of the active deployment resources and is considered only during helm test procedures.
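
For reference, the Helm test job described above is triggered on demand with a command of the following form; this is a sketch, and the release name and namespace below are placeholders:

helm test <helm-release-name> -n <namespace>

For example:

helm test ocnssf -n ocnssf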
2.1.3.2 Debug Tool Container

The Debug Tool provides third-party troubleshooting tools for debugging runtime issues in a lab environment. If Debug Tool Container injection is enabled during NSSF deployment or upgrade, this container is injected into each NSSF pod (or into selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists. For more information about configuring the Debug Tool, see Oracle Communications Cloud Native Core, Network Slice Selection Function Troubleshooting Guide.

Table 2-4 Debug Tool Container

Service Name MinCPU MaxCPU MinMemory (GB) MaxMemory (GB)
<helm-release-name>-nsselection 0.5 1 1 2
<helm-release-name>-nsavailability 0.5 1 1 2
<helm-release-name>-nssubscription 0.5 1 1 2
<helm-release-name>-nsauditor 0.5 1 1 2
<helm-release-name>-nsconfiguration 0.5 1 1 2
<helm-release-name>-ocnssf-nrf-client-nfdiscovery 0.5 1 1 2
<helm-release-name>-ocnssf-nrf-client-nfmanagement 0.5 1 1 2
<helm-release-name>-ingressgateway 0.5 1 1 2
<helm-release-name>-egressgateway 0.5 1 1 2
<helm-release-name>-config-server 0.5 1 1 2
<helm-release-name>-alternate-route 0.5 1 1 2
<helm-release-name>-appinfo 0.5 1 1 2
<helm-release-name>-perfinfo 0.5 1 1 2

Note:

<helm-release-name> is the Helm release name. For example, if Helm release name is "ocnssf", then nsselection microservice name will be "ocnssf-nsselection".
2.1.3.3 ASM Sidecar

NSSF leverages the Platform Service Mesh (for example, Aspen Service Mesh) for all internal and external TLS communication. If ASM Sidecar injection is enabled during NSSF deployment or upgrade, this container is injected into each NSSF pod (or into selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists. For more information about installing ASM, see Configuring NSSF to support Aspen Service Mesh.

Table 2-5 ASM Sidecar

Pod Name Pod Count MinCPU MaxCPU MinMemory MaxMemory
<helm-release-name>-alternate-route 1 250m 250m 512Mi 512Mi
<helm-release-name>-appinfo 1 250m 250m 512Mi 512Mi
<helm-release-name>-config-server 1 250m 250m 512Mi 512Mi
<helm-release-name>-egress-gateway 2 500m 500m 1Gi 1Gi
<helm-release-name>-ingress-gateway 5 3 3 512Mi 512Mi
<helm-release-name>-nsauditor 1 250m 250m 512Mi 512Mi
<helm-release-name>-nsavailability 2 500m 500m 1Gi 1Gi
<helm-release-name>-nsconfig 1 250m 250m 512Mi 512Mi
<helm-release-name>-nsselection 6 4 4 2Gi 2Gi
<helm-release-name>-nssubscription 1 250m 250m 512Mi 512Mi
<helm-release-name>-ocnssf-nrf-client-nfdiscovery 2 500m 500m 1Gi 1Gi
<helm-release-name>-ocnssf-nrf-client-nfmanagement 2 500m 500m 1Gi 1Gi
<helm-release-name>-perf-info 1 250m 250m 512Mi 512Mi

Note:

<helm-release-name> is the Helm release name. For example, if Helm release name is "ocnssf", then nsselection microservice name will be "ocnssf-nsselection".
2.1.3.4 Upgrade

The following table lists the resource requirements for upgrading NSSF.

Table 2-6 Upgrade

Service Name MinPod Replicas MaxPod Replicas MinCPU MaxCPU MinMemory (GB) MaxMemory (GB)
<helm-release-name>-nsselection 1 1 2 2 2 2
<helm-release-name>-nsavailability 1 1 4 4 2 2
<helm-release-name>-nssubscription 1 1 2 2 2 2
<helm-release-name>-nsauditor 1 1 6 6 3 3
<helm-release-name>-nsconfiguration 1 1 2 2 2 2
<helm-release-name>-ocnssf-nrf-client-nfdiscovery 1 1 2 2 2 2
<helm-release-name>-ocnssf-nrf-client-nfmanagement 1 1 4 4 2 2
<helm-release-name>-ingressgateway 1 1 6 6 4 4
<helm-release-name>-egressgateway 1 1 6 6 4 4
<helm-release-name>-config-server 1 1 2 2 4 4
<helm-release-name>-alternate-route 1 1 2 2 4 4
<helm-release-name>-appinfo 1 1 1 1 1 1
<helm-release-name>-perfinfo 1 1 1 1 1 1

Note:

<helm-release-name> is the Helm release name. For example, if Helm release name is "ocnssf", then nsselection microservice name will be "ocnssf-nsselection".
2.1.3.5 Common Services Container

The following table lists the resource requirements for the common services containers.

Table 2-7 Common Services Container

Container Name CPU Memory (GB) Kubernetes Init Container
init-service 1 1 Y
update-service 1 1 N
common_config_hook 1 1 N
  • Update Container service: Ingress or Egress Gateway services use this container service to periodically refresh private keys, CA Root Certificate for TLS, and other certificates for NSSF.
  • Init Container service: Ingress or Egress Gateway services use this container to fetch private keys, the CA root certificate for TLS, and other certificates for NSSF during startup.
  • Common Configuration Hook: It is used for creating the database for common service configuration.
2.1.3.6 NSSF Hooks

The following table lists the resource requirements for NSSF hooks.

Table 2-8 NSSF Hooks

Hook Name MinCPU MaxCPU MinMemory (Mi) MaxMemory (Mi)
<helm-release-name>-nsconfig-pre-install 0.25 0.5 256 512
<helm-release-name>-nsconfig-post-install 0.25 0.5 256 512
<helm-release-name>-nsconfig-pre-upgrade 0.25 0.5 256 512
<helm-release-name>-nsconfig-post-upgrade 0.25 0.5 256 512
<helm-release-name>-nsconfig-pre-rollback 0.25 0.5 256 512
<helm-release-name>-nsconfig-post-rollback 0.25 0.5 256 512
<helm-release-name>-nsconfig-pre-delete 0.25 0.5 256 512
<helm-release-name>-nsconfig-post-delete 0.25 0.5 256 512
<helm-release-name>-nsselection-pre-install 0.25 0.5 256 512
<helm-release-name>-nsselection-post-install 0.25 0.5 256 512
<helm-release-name>-nsselection-pre-upgrade 0.25 0.5 256 512
<helm-release-name>-nsselection-post-upgrade 0.25 0.5 256 512
<helm-release-name>-nsselection-pre-rollback 0.25 0.5 256 512
<helm-release-name>-nsselection-post-rollback 0.25 0.5 256 512
<helm-release-name>-nsselection-pre-delete 0.25 0.5 256 512
<helm-release-name>-nsselection-post-delete 0.25 0.5 256 512
<helm-release-name>-nsavailability-pre-install 0.25 0.5 256 512
<helm-release-name>-nsavailability-post-install 0.25 0.5 256 512
<helm-release-name>-nsavailability-pre-upgrade 0.25 0.5 256 512
<helm-release-name>-nsavailability-post-upgrade 0.25 0.5 256 512
<helm-release-name>-nsavailability-pre-rollback 0.25 0.5 256 512
<helm-release-name>-nsavailability-post-rollback 0.25 0.5 256 512
<helm-release-name>-nsavailability-pre-delete 0.25 0.5 256 512
<helm-release-name>-nsavailability-post-delete 0.25 0.5 256 512
<helm-release-name>-nssubscription-pre-install 0.25 0.5 256 512
<helm-release-name>-nssubscription-post-install 0.25 0.5 256 512
<helm-release-name>-nssubscription-pre-upgrade 0.25 0.5 256 512
<helm-release-name>-nssubscription-post-upgrade 0.25 0.5 256 512
<helm-release-name>-nssubscription-pre-rollback 0.25 0.5 256 512
<helm-release-name>-nssubscription-post-rollback 0.25 0.5 256 512
<helm-release-name>-nssubscription-pre-delete 0.25 0.5 256 512
<helm-release-name>-nssubscription-post-delete 0.25 0.5 256 512
<helm-release-name>-nsauditor-pre-install 0.25 0.5 256 512
<helm-release-name>-nsauditor-post-install 0.25 0.5 256 512
<helm-release-name>-nsauditor-pre-upgrade 0.25 0.5 256 512
<helm-release-name>-nsauditor-post-upgrade 0.25 0.5 256 512
<helm-release-name>-nsauditor-pre-rollback 0.25 0.5 256 512
<helm-release-name>-nsauditor-post-rollback 0.25 0.5 256 512
<helm-release-name>-nsauditor-pre-delete 0.25 0.5 256 512
<helm-release-name>-nsauditor-post-delete 0.25 0.5 256 512
<helm-release-name>-alternate-route-post-install 0.25 0.5 256 512
<helm-release-name>-alternate-route-pre-upgrade 0.25 0.5 256 512
<helm-release-name>-alternate-route-post-upgrade 0.25 0.5 256 512
<helm-release-name>-alternate-route-pre-rollback 0.25 0.5 256 512
<helm-release-name>-alternate-route-post-rollback 0.25 0.5 256 512
<helm-release-name>-alternate-route-pre-delete 0.25 0.5 256 512
<helm-release-name>-alternate-route-post-delete 0.25 0.5 256 512

Note:

<helm-release-name> is the Helm release name. For example, if Helm release name is "ocnssf", then nsselection microservice name will be "ocnssf-nsselection"
2.1.3.7 CNC Console Resources

Oracle Communications Cloud Native Configuration Console (CNC Console) is a Graphical User Interface (GUI) for NFs and Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) common services. For information about CNC Console resources required by NSSF, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.

2.1.3.8 cnDBTier Resources

The following table lists the resource requirements for cnDBTier:

cnDBTier Services and PVC

Table 2-9 cnDBTier Services and PVC

cnDBTier Pods Replica MinvCPU MaxvCPU MinMemory MaxMemory PVCStorage StorageCount MinEphemeral Storage MaxEphemeral Storage
SQL (ndbmysqld) 2 8 8 10Gi 10Gi 100Gi 1 90Mi 1Gi
SQL (ndbappmysqld) 4 8 8 10Gi 10Gi 20Gi 1 90Mi 1Gi
MGMT (ndbmgmd) 2 4 4 10Gi 10Gi 15Gi 1 90Mi 1Gi
DB (ndbmtd) 4 10 10 18Gi 18Gi 60Gi 2 90Mi 1Gi
Backup Manager Service (db-backup-manager-svc) 1 0.1 0.1 128Mi 128Mi NA NA 90Mi 1Gi
Replication Service (db-replication-svc) 1 2 2 12Gi 12Gi 60Gi 1 90Mi 1Gi
Monitor Service (db-monitor-svc) 1 1 1 1Gi 1Gi NA NA 90Mi 1Gi

cnDBTier Sidecar

Table 2-10 cnDBTier Sidecar

SQL (ndbmysqld), Kubernetes Resource Type: StatefulSet, Sidecar name: init-sidecar, Replica: 2, Ephemeral Storage: Min 90Mi / Max 1Gi
  istio-proxy: 3 vCPU, 4Gi RAM
  init-sidecar: 0.1 vCPU, 256Mi RAM
  db-infra-monitor-svc: 0.1 vCPU, 256Mi RAM

SQL (ndbappmysqld), Kubernetes Resource Type: StatefulSet, Sidecar name: init-sidecar, Replica: 4, Ephemeral Storage: Min 90Mi / Max 1Gi
  init-sidecar: 0.1 vCPU, 256Mi RAM
  istio-proxy: 3 vCPU, 4Gi RAM

MGMT (ndbmgmd), Kubernetes Resource Type: StatefulSet, Sidecar name: db-infra-monitor-svc, Replica: 2, Ephemeral Storage: NA
  db-infra-monitor-svc: 0.1 vCPU, 256Mi RAM
  istio-proxy: 3 vCPU, 4Gi RAM

DB (ndbmtd), Kubernetes Resource Type: StatefulSet, Sidecar name: db-backup-executor-svc, Replica: 4, Ephemeral Storage: Min 90Mi / Max 1Gi
  db-backup-executor-svc: 1 vCPU, 2Gi RAM
  db-infra-monitor-svc: 100m vCPU, 256Mi RAM
  istio-proxy: 4 vCPU, 2Gi RAM

Backup Manager Service (db-backup-manager-svc), Kubernetes Resource Type: Deployment, Sidecar name: NA, Replica: 1, Ephemeral Storage: NA
  istio-proxy: 2 vCPU, 1Gi RAM

Replication Service (db-replication-svc), Kubernetes Resource Type: Deployment, Sidecar name: istio-proxy, Replica: 1, Ephemeral Storage: Min 90Mi / Max 1Gi
  istio-proxy: 2 vCPU, 1Gi RAM

Monitor Service (db-monitor-svc), Kubernetes Resource Type: Deployment, Sidecar name: istio-proxy, Replica: 1, Ephemeral Storage: NA
  istio-proxy: 2 vCPU, 1Gi RAM
2.1.3.9 OSO Resources

The following table lists the resource requirements for OSO:

Table 2-11 OSO Resources

Microservice Replica Min CPU Max CPU Min Memory (GB) Max Memory (GB)
prom-alertmanager 2 0.5 0.5 1 1
prom-server 1 2 2 4 4
2.1.3.10 OCCM Resources

OCCM manages certificate creation, recreation, renewal, and so on for NSSF. For information about OCCM resources required by NSSF, see Oracle Communications Cloud Native Core, Certificate Management Installation, Upgrade, and Fault Recovery Guide.

2.2 Installation Sequence

This section describes the preinstallation, installation, and postinstallation tasks for NSSF.

2.2.1 Preinstallation Tasks

Before installing NSSF, perform the tasks described in this section.

2.2.1.1 Downloading the NSSF Package

To download the NSSF package from My Oracle Support (MOS), perform the following steps:

  1. Log in to My Oracle Support using the appropriate credentials.
  2. Select Patches & Updates tab.
  3. In Patch Search console, select Product or Family (Advanced) option.
  4. Enter Oracle Communications Cloud Native Core - 5G in Product field and select the product from the Product drop-down list.
  5. From the Release drop-down list, select "Oracle Communications Cloud Native Core Network Slice Selection Function <release_number>".

    Where, <release_number> indicates the required release number of NSSF.

  6. Click Search.

    The Patch Advanced Search Results list appears.

  7. Select the required patch from the list.

    The Patch Details window appears.

  8. Click Download.

    File Download window appears.

  9. Click <p********_<release_number>_Tekelec>.zip to download the release package.

    Where,

    <p********> is the MOS patch number and <release_number> is the release number of NSSF.

2.2.1.2 Pushing the Images to Customer Docker Registry

The NSSF deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.

The following table lists the Docker images of NSSF:

Table 2-12 Images for NSSF

Service Name Image Name Image Tag
<helm-release-name>-nsauditor ocnssf-nsauditor 25.1.201
<helm-release-name>-nssubscription ocnssf-nssubscription 25.1.201
<helm-release-name>-nsselection ocnssf-nsselection 25.1.201
<helm-release-name>-nsconfiguration ocnssf-nsconfig 25.1.201
<helm-release-name>-nsavailability ocnssf-nsavailability 25.1.201
<helm-release-name>-ingressgateway ocingress_gateway 25.1.206
<helm-release-name>-configurationinit configurationinit 25.1.206
<helm-release-name>-configurationupdate configurationupdate 25.1.206
<helm-release-name>-egressgateway ocegress_gateway 25.1.206
<helm-release-name>-common_config_hook common_config_hook 25.1.206
<helm-release-name>-alternate-route alternate-route 25.1.206
<helm-release-name>-nrf-client Nrf-client 25.1.204
<helm-release-name>-appinfo occnp/oc-app-info 25.1.204
<helm-release-name>-perfinfo occnp/oc-perf-info 25.1.204
<helm-release-name>-oc-config-server occnp/oc-config-server 25.1.204
<helm-release-name>-debug-tool ocdebug-tools 25.1.203
<helm-release-name>-helm-test helm-test 25.1.202

To push the images to the registry:

  1. Unzip the release package to the location where you want to install NSSF. The NSSF package is as follows:

    ocnssf_pkg_25_1_201_0_0.tgz

  2. Untar the NSSF package file to get the NSSF image tar file:
    tar -xvzf ocnssf_pkg_25_1_201_0_0.tgz
    The directory consists of the following files (the checksum files can be used to verify the archives, as shown in the sketch after this procedure):
    • ocnssf-25.1.201.0.0.tgz: Helm Charts
    • ocnssf-25.1.201.0.0.tgz.sha256: Checksum for Helm chart tgz file
    • ocnssf-images-25.1.201.0.0.tar: NSSF images file
    • ocnssf-images-25.1.201.0.0.tar.sha256: Checksum for images tar file
    • ocnssf-servicemesh-config-25.1.201.0.0.tgz: Servicemesh configuration chart
    • ocnssf-servicemesh-config-25.1.201.0.0.tgz.sha256: Checksum for servicemesh configuration tgz chart
    • ocnssf_limit_range.yaml: Limit range configuration file used to limit the resource quota
    • ocnssf_resource_quota.yaml: Resource quota configuration file used to define the resource quota
    • ocnssf-custom-configtemplates-25_1_201_0_0.tar: Contains NSSF custom configuration templates.
    • ocnssf-custom-configtemplates-25_1_201_0_0.tar.sha256: Contains checksum for NSSF custom configuration templates.
    • Readme.txt: Readme txt file
  3. Run one of the following commands to load the ocnssf-images-25.1.201.0.0.tar file:
    docker load --input /IMAGE_PATH/ocnssf-images-25.1.201.0.0.tar
    podman load --input /IMAGE_PATH/ocnssf-images-25.1.201.0.0.tar
  4. Run one of the following commands to verify if the images are loaded:
    docker images
    podman images

    Verify the list of images shown in the output against the list of images in Table 2-12. If the lists do not match, reload the image tar file.

  5. Run one of the following commands to tag each imported image to the registry:
    docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
  6. Run one of the following commands to push the image to the registry:
    docker push <docker-repo>/<image-name>:<image-tag> 
    podman push <docker-repo>/<image-name>:<image-tag> 

    Note:

    It is recommended to configure the Docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the docker push command may fail.
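
Optionally, the integrity of the archives extracted in step 2 can be verified against the provided checksum files before loading the images. The following is a minimal sketch, assuming the .sha256 files use the standard <checksum>  <filename> format and the files are in the current directory:

sha256sum -c ocnssf-25.1.201.0.0.tgz.sha256
sha256sum -c ocnssf-images-25.1.201.0.0.tar.sha256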
2.2.1.3 Verifying and Creating Namespace

This section explains how to verify and create a namespace in the system.

Note:

This is a mandatory procedure; run it before proceeding further with the installation. The namespace created or verified in this procedure is an input for the subsequent procedures.
  1. Run the following command to verify if the required namespace already exists in the system:
    kubectl get namespace

    In the output of the above command, if the namespace exists, continue with Creating Service Account, Role, and RoleBinding.

  2. If the required namespace is unavailable, create the namespace using the following command:
    kubectl create namespace <required namespace>

    Where,

    <required namespace> is the namespace to be used for NSSF installation.

    For example, the following command creates the namespace, ocnssf:

    kubectl create namespace ocnssf

    Sample output:

    namespace/ocnssf created

  3. Update the global.nameSpace parameter in the ocnssf_custom_values_25.1.201.yaml file with the namespace created in the previous step:

    Here is a sample configuration snippet from the ocnssf_custom_values_25.1.201.yaml file:

    
    global:
      # NameSpace where secret is deployed
      nameSpace: ocnssf

Naming Convention for Namespace

The namespace should:

  • start and end with an alphanumeric character
  • contain 63 characters or less
  • contain only alphanumeric characters or '-'

Note:

It is recommended to avoid using the prefix kube- when creating a namespace. The prefix is reserved for Kubernetes system namespaces.
2.2.1.4 Creating Service Account, Role, and RoleBinding

This section is optional; it describes how to manually create a service account, role, and rolebinding. It is required only when the customer needs to create these resources manually before installing NSSF.

Note:

The secret(s) should exist in the same namespace where NSSF is deployed. This helps to bind the Kubernetes role with the given service account.

Creating Service Account, Role, and RoleBinding

  1. Run the following command to create a NSSF resource file:
    vi <ocnssf-resource-file>

    Where,

    <ocnssf-resource-file> is the name of the resource file.

    Example:

    vi ocnssf-resource-template.yaml
  2. Update the ocnssf-resource-template.yaml with release specific information:

    Note:

    Update <helm-release> and <namespace> with the NSSF Helm release name and the NSSF namespace, respectively.

    A sample template for the ocnssf-resource-template.yaml file is given below:

    ## Sample template start ##
    # Copyright 2018 (C), Oracle and/or its affiliates. All rights reserved.
    
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: <helm-release>-ocnssf-serviceaccount
      namespace: <namespace>
      labels:
        {{- include "labels.allResources" . }}
      annotations:
        {{- include "annotations.allResources" . }}
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: <helm-release>-ocnssf-role
      namespace: <namespace>
      labels:
        {{- include "labels.allResources" . }}
      annotations:
        {{- include "annotations.allResources" . }}
    rules:
    - apiGroups:
      - "" # "" indicates the core API group
      resources:
      - services
      - configmaps
      - pods
      - secrets
      - endpoints
      - persistentvolumeclaims
      - serviceaccounts
      verbs:
      - get
      - watch
      - list
      - update
    - apiGroups:
      - policy
      resources:
      - poddisruptionbudgets
      verbs:
      - get
      - watch
      - list
      - update
    - apiGroups:
      - apps
      resources:
      - deployments
      - statefulsets
      verbs:
      - get
      - watch
      - list
      - update
    - apiGroups:
      - autoscaling
      resources:
      - horizontalpodautoscalers
      verbs:
      - get
      - watch
      - list
      - update
    - apiGroups:
      - rbac.authorization.k8s.io
      resources:
      - roles
      - rolebindings
      verbs:
      - get
      - watch
      - list
      - update
    - apiGroups:
      - monitoring.coreos.com
      resources:
      - prometheusrules
      verbs:
      - get
      - watch
      - list
      - update
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: <helm-release>-ocnssf-rolebinding
      namespace: <namespace>
      labels:
        {{- include "labels.allResources" . }}
      annotations:
        {{- include "annotations.allResources" . }}
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: <helm-release>-ocnssf-role
    subjects:
    - kind: ServiceAccount
      name:  <helm-release>-ocnssf-serviceaccount
      namespace: <namespace>
    ---
    
    ## Sample template end#
  3. Run the following command to create service account, role, and rolebinding:
    $ kubectl -n <namespace> create -f ocnssf-resource-template.yaml

    Where,

    <namespace> is the namespace where NSSF is deployed.

    Example:

    $ kubectl -n ocnssf create -f ocnssf-resource-template.yaml
  4. Update the serviceAccountName parameter in the ocnssf_custom_values_25.1.201.yaml file with the value of the name field under kind: ServiceAccount. For more information about the serviceAccountName parameter, see the Global Parameters section. To verify that the resources were created, see the sketch below.
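
To confirm that the service account, role, and rolebinding were created, a quick check such as the following can be used; this is a sketch, and the resource names follow the sample template above:

kubectl -n ocnssf get serviceaccount,role,rolebinding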

Note:

PodSecurityPolicy kind is required for Pod Security Policy service account. For more information, see Oracle Communications Cloud Native Core, Network Slice Selection Function Troubleshooting Guide.
2.2.1.5 Configuring Database, Creating Users, and Granting Permissions

This section explains how database administrators can create users and databases in single-site and multisite deployments.

NSSF has five databases (Provisional, State, Release, Leaderpod, and NRF Client Database) and two users (Application and Privileged).

Note:

  • Before running the procedure for georedundant sites, ensure that the cnDBTier for georedundant sites is up and replication channels are enabled.
  • While performing a fresh installation, if NSSF release is already deployed, purge the deployment and remove the database and users that were used for the previous deployment. For uninstallation procedure, see Uninstalling NSSF.

NSSF Database

For the NSSF application, five databases are required:
  1. Provisional Database: Provisional Database contains configuration information. The same configuration must be done on each site by the operator. Both Privileged User and Application User have access to this database. In case of multisite georedundant setups, each site must have a unique Provisional Database. NSSF sites can access only the information in their unique Provisional Database.
    For example:
    • For Site 1: nssfProvSite1DB
    • For Site 2: nssfProvSite2DB
    • For Site 3: nssfProvSite3DB
  2. State Database: This database maintains the running state of NSSF sites and has information of subscriptions, pending notification triggers, and availability data. It is replicated and the same configuration is maintained by all NSSF georedundant sites. Both Privileged User and Application User have access to this database.
  3. Release Database: This database maintains release version state, and it is used during upgrade and rollback scenarios. Only Privileged User has access to this database.
  4. Leaderpod Database: This database stores leader and follower information when PDB is enabled for microservices that require a single pod to be up across all instances. The configuration of this database must be done on each site. In case of georedundant deployments, each site must have a unique Leaderpod database.

    For example:

    • For Site 1: LeaderPod1Db
    • For Site 2: LeaderPod2Db
    • For Site 3: LeaderPod3Db

    Note:

    This database is used only when nrf-client-nfmanagement.enablePDBSupport is set to true in the ocnssf_custom_values_25.1.201.yaml. For more information, see NRF Client.
  5. NRF Client Database: This database is used to store discovery cache tables, and it also supports NRF Client features. Only Privileged User has access to this database and it is used only when the caching feature is enabled. In case of georedundant deployments, each site must have a unique NRF Client database and its configuration must be done on each site.

    For example:

    • For Site 1: nrf_client_db1
    • For Site 2: nrf_client_db2
    • For Site 3: nrf_client_db3

NSSF Users

There are two types of NSSF database users with different set of permissions:

  1. Privileged User: This user has a complete set of permissions. This user can perform create, alter, or drop operations on tables to perform install, upgrade, rollback, or delete operations.
  2. Application User: This user has a limited set of permissions and is used by NSSF application to handle service operations. This user can insert, update, get, or remove the records. This user will not be able to create, alter, or drop the database or tables.

Note:

In the examples given in this document:
  • Application User's username is 'nssfusr' and password is 'nssfpasswd'.
  • Privileged User's username is 'nssfprivilegedusr' and password is 'nssfpasswd'.
2.2.1.5.1 Single Site

This section explains how a database administrator can create the databases and users for a single-site deployment.

  1. Log in to the machine where SSH keys are stored and have permission to access the SQL nodes of NDB cluster.
  2. Connect to the SQL nodes.
  3. Log in to the MySQL prompt using root permission, or log in as a user who has the permission to create users as per conditions explained in the next step. For example: mysql -h 127.0.0.1 -uroot -p

    Note:

    This command varies between systems with respect to the path of the MySQL binary, the root user, and the root password. After running this command, enter the password specific to the user mentioned in the command.
  4. Run the following command to check if both the NSSF users already exist:
    $ SELECT User FROM mysql.user;
    If the users already exist, go to the next step. Otherwise, create the required user or users by following the steps below:
    • Run the following command to create Privileged User:
      $ CREATE USER '<NSSF Privileged Username>'@'%' IDENTIFIED BY '<NSSF Privileged User Password>';

      Where,

      <NSSF Privileged Username> is the username of the Privileged user.

      <NSSF Privileged User Password> is the password of the Privileged user.

      For example:

      $ CREATE USER 'nssfprivilegedusr'@'%' IDENTIFIED BY 'nssfpasswd';

    • Run the following command to create Application User:
      $ CREATE USER '<NSSF Application Username>'@'%' IDENTIFIED BY '<NSSF Application User Password>';

      Where,

      <NSSF Application Username> is the username of the Application user.

      <NSSF Application User Password> is the password of the Application user.

      For example:

      $ CREATE USER 'nssfusr'@'%' IDENTIFIED BY 'nssfpasswd';

    Note:

    You must create both the users on all the SQL nodes for all georedundant sites.
  5. Run the following command to check whether any of the NSSF databases already exist:
    $ SHOW DATABASES;
    1. If any of the previously configured databases are already present, remove them. Otherwise, skip this step.

      Caution:

      In case you have multisite georedundant setup configured, removal of the database from any one of the SQL nodes of any cluster will remove the database from all georedundant sites.

      Run the following command to remove a preconfigured NSSF database:

      $ DROP DATABASE if exists <DB Name>;
       

      Where,

      <DB Name> is the database.

      For example:

      Run the following command if State Database already exists:

      $ DROP DATABASE if exists nssfStateDB;

    2. Run the following command to create a new NSSF database if it does not exist, or after dropping an existing database:
      $ CREATE DATABASE IF NOT EXISTS <DB Name> CHARACTER SET latin1;

      For example, the following commands create all five databases required for NSSF installation:

      $ CREATE DATABASE IF NOT EXISTS nssfStateDB CHARACTER SET latin1;

      $ CREATE DATABASE IF NOT EXISTS nssfProvSite1DB CHARACTER SET latin1;

      $ CREATE DATABASE IF NOT EXISTS ocnssfReleaseDB CHARACTER SET latin1;

      $ CREATE DATABASE IF NOT EXISTS LeaderPodDb CHARACTER SET latin1;

      $ CREATE DATABASE IF NOT EXISTS nrf_client_db CHARACTER SET latin1;

      Note:

      Ensure that you use the same database names while creating the databases that you have configured in the global parameters of the ocnssf_custom_values_25.1.201.yaml file. The following is an example of the five NSSF database names configured in the ocnssf_custom_values_25.1.201.yaml file:
        global.stateDbName: nssfStateDB
        global.provisionDbName: nssfProvSite1DB
        global.releaseDbName: ocnssfReleaseDB
        global.leaderPodDbName: LeaderPodDb
        global.nrfClientDbName: nrf_client_db
      Hence, if you want to create any of these five databases, you must ensure that you create them with the same names as configured in the ocnssf_custom_values_25.1.201.yaml file. In this case, nssfStateDB, nssfProvSite1DB, ocnssfReleaseDB, LeaderPodDb, and nrf_client_db.
  6. Grant permissions to users on the database:

    Note:

    • Run this step on all the SQL nodes for each NSSF standalone site in a multisite georedundant setup.
    • Creation of the database is optional if the grant is scoped to all databases, that is, if the database name is not mentioned in the grant command.
    1. Run the following command to grant NDB_STORED_USER permissions to the Privileged User:
      GRANT NDB_STORED_USER ON *.* TO 'nssfprivilegedusr'@'%';
    2. Run the following commands to grant Privileged User permission on Provisional, State, Release, Leaderpod, and NRF Client databases:
      1. Privileged User on Provisional Database:
        $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.*TO `<NSSF Privileged Username>`@`%`;

        For example:

        $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON nssfProvSite1DB.*TO `nssfprivilegedusr`@`%`;

      2. Privileged User on State Database:
        $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.*TO `<NSSF Privileged Username>`@`%`;

        For example:

        $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON nssfStateDB.*TO `nssfprivilegedusr`@`%`;

      3. Privileged User on Release Database:
        $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.*TO `<NSSF Privileged Username>`@`%`;

        For example:

        $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON ocnssfReleaseDB.*TO `nssfprivilegedusr`@`%`;

      4. Privileged User on NSSF Leaderpod Database:
        $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.*TO `<NSSF Privileged Username>`@`%`;

        For example:

        $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON LeaderPodDb.*TO `nssfprivilegedusr`@`%`;

      5. Privileged User on NRF Client Database and MySQL Database:
        $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON <DB Name>.* TO '<NSSF Privileged Username>'@'%';

        For example: On NRF Client Database

        $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON nrf_client_db.* TO 'nssfprivilegedusr'@'%';

        For example: On MySQL Database

        $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON mysql.* TO 'nssfprivilegedusr'@'%';

    3. Run the following command to grant NDB_STORED_USER permissions to the Application User:
      GRANT NDB_STORED_USER ON *.* TO 'nssfusr'@'%';
    4. Run the following commands to grant Application User permission on Provisional Database and State Database:
      1. Application User on Provisional Database:
        $ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON <DB Name>.* TO '<Application User Name>'@'%'; 

        For example:

        $ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON nssfProvSite1DB.* TO 'nssfusr'@'%';

      2. Application User on State Database:
        $ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON <DB Name>.* TO '<Application User Name>'@'%'; 

        For example:

        $ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON nssfStateDB.* TO 'nssfusr'@'%';

  7. Run the following command to flush privileges:
    FLUSH PRIVILEGES;
  8. Exit from the MySQL prompt and the SQL nodes. An optional check to verify the grants is shown after this procedure.
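
To confirm that the grants are in place before exiting in step 8, a check such as the following can be run. This is a minimal sketch using the example usernames from this document:

$ SHOW GRANTS FOR 'nssfprivilegedusr'@'%';

$ SHOW GRANTS FOR 'nssfusr'@'%';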
2.2.1.5.2 Multisite

This section explains how a database administrator can create the databases and users for a multisite deployment.

Note:

For multisite georedundant setups, change the parameter values of the unique databases (provisionDbName, leaderPodDbName, and nrfClientDbName) in the ocnssf_custom_values_25.1.201.yaml file. For example, change the values as mentioned below for a two-site and a three-site setup, respectively:
For Two-Site:
  • Change the value of global.provisionDbName to nssfProvSite1DB on Site 1 and nssfProvSite2DB on Site 2.
  • Change the value of global.leaderPodDbName to LeaderPod1Db on Site 1 and LeaderPod2Db on Site 2.
  • Change the value of global.nrfClientDbName to nrf_client_db1 on Site 1 and nrf_client_db2 on Site 2.
For Three-Site:
  • Change the value of global.provisionDbName to nssfProvSite1DB, nssfProvSite2DB, and nssfProvSite3DB on Site 1, Site 2, and Site 3, respectively.
  • Change the value of global.leaderPodDbName to LeaderPod1Db, LeaderPod2Db, and LeaderPod3Db on Site 1, Site 2, and Site 3, respectively.
  • Change the value of global.nrfClientDbName to nrf_client_db1, nrf_client_db2, and nrf_client_db3 on Site 1, Site 2, and Site 3, respectively.
  1. Log in to the machine where SSH keys are stored and have permission to access the SQL nodes of NDB cluster.
  2. Connect to the SQL nodes.
  3. Log in to the MySQL prompt using root permission, or log in as a user who has the permission to create users as per conditions explained in the next step. For example: mysql -h 127.0.0.1 -uroot -p

    Note:

    This command varies from system to system with respect to the path of the MySQL binary, the root user, and the root password. After running this command, enter the password specific to the user mentioned in the command.
  4. Run the following command to check if both the NSSF users already exist:
    $ SELECT User FROM mysql.user;
    If the users already exist, go to the next step. Otherwise, create the required new user or users by following the steps below:
    • Run the following command to create a new Privileged User:
      $ CREATE USER '<NSSF Privileged Username>'@'%' IDENTIFIED BY '<NSSF Privileged User Password>';

      For example:

      $ CREATE USER 'nssfprivilegedusr'@'%' IDENTIFIED BY 'nssfpasswd';

    • Run the following command to create a new NSSF Application User:
      $ CREATE USER '<NSSF APPLICATION Username>'@'%' IDENTIFIED BY '<NSSF APPLICATION Password>';

      For example:

      $ CREATE USER 'nssfusr'@'%' IDENTIFIED BY 'nssfpasswd';

    Note:

    You must create both users on all the SQL Nodes for all georedundant sites.
  5. Run the following command to check whether any of the NSSF databases already exist:
    $ SHOW DATABASES;
    1. If any of the previously configured databases are already present, remove them. Otherwise, skip this step.

      Caution:

      In case you have multisite georedundant setup configured, removal of the database from any one of the SQL nodes of any cluster will remove the database from all georedundant sites.

      Run the following command to remove a preconfigured NSSF database:

      $ DROP DATABASE if exists <DB Name>;
       

      For example:

      Run the following command if you find that State Database already exists:

      $ DROP DATABASE if exists nssfStateDB;

    2. Run the following command to create a new database for NSSF if it does not exist, or after dropping a database:
      $ CREATE DATABASE IF NOT EXISTS <DB Name> CHARACTER SET latin1;

      For example, the following commands create all five databases required for NSSF installation:

      $ CREATE DATABASE IF NOT EXISTS nssfStateDB CHARACTER SET latin1;

      $ CREATE DATABASE IF NOT EXISTS nssfProvSite1DB CHARACTER SET latin1;

      $ CREATE DATABASE IF NOT EXISTS ocnssfReleaseDB CHARACTER SET latin1;

      $ CREATE DATABASE IF NOT EXISTS LeaderPod1Db CHARACTER SET latin1;

      $ CREATE DATABASE IF NOT EXISTS nrf_client_db1 CHARACTER SET latin1;

      Note:

      Ensure that you use the same database names while creating the databases that you have configured in the global parameters of the ocnssf_custom_values_25.1.201.yaml file. The following is an example of the five database names configured in the ocnssf_custom_values_25.1.201.yaml file:
        global.stateDbName: nssfStateDB
        global.provisionDbName: nssfProvSite1DB
        global.releaseDbName: ocnssfReleaseDB
        global.leaderPodDbName: LeaderPod1Db
        global.nrfClientDbName: nrf_client_db1
      Hence, if you want to create any of these five databases, you must ensure that you create them with the same names as configured in the ocnssf_custom_values_25.1.201.yaml file. In this case, the names are nssfStateDB, nssfProvSite1DB, ocnssfReleaseDB, LeaderPod1Db, and nrf_client_db1.
  6. Grant permissions to users on the database:

    Note:

    • Run this step on all the SQL nodes for each NSSF standalone site in a multisite georedundant setup.
    • Creating the databases is optional if the grant is scoped to all databases, that is, if no database name is specified in the GRANT command.
    1. Run the following command to grant NDB_STORED_USER permissions to the Privileged User:
      GRANT NDB_STORED_USER ON *.* TO 'nssfprivilegedusr'@'%';
    2. Run the following commands to grant Privileged User permission on Provisional, State, Release, Leaderpod, and NRF Client databases:
      1. Privileged User on Provisional Database:
        $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.* TO `<NSSF Privileged Username>`@`%`;

        Example Site 1:

        $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON nssfProvSite1DB.* TO `nssfprivilegedusr`@`%`;

        Example Site 2:

        $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON nssfProvSite2DB.* TO `nssfprivilegedusr`@`%`;

      2. Privileged User on State Database:
        $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.* TO `<NSSF Privileged Username>`@`%`;

        For example:

        $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON nssfStateDB.* TO `nssfprivilegedusr`@`%`;

      3. Privileged User on NSSF Release Database:
        $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.* TO `<NSSF Privileged Username>`@`%`;

        For example:

        $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON ocnssfReleaseDB.* TO `nssfprivilegedusr`@`%`;

      4. Privileged User on NSSF Leaderpod Database:
        $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.* TO `<NSSF Privileged Username>`@`%`;

        Example Site 1:

        $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON LeaderPod1Db.* TO `nssfprivilegedusr`@`%`;

        Example Site 2:

        $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON LeaderPod2Db.* TO `nssfprivilegedusr`@`%`;

      5. Privileged User on NRF Client Database and MySQL Database:
        $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON <DB Name>.* TO '<NSSF Privileged Username>'@'%';

        Example Site 1: On NRF Client Database

        $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON nrf_client_db1.* TO 'nssfprivilegedusr'@'%';

        Example Site 2: NRF Client Database

        $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON nrf_client_db2.* TO 'nssfprivilegedusr'@'%';

        Example: On MySQL Database
        $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON mysql.* TO 'nssfprivilegedusr'@'%';
    3. Run the following command to grant NDB_STORED_USER permissions to the Application User:
      GRANT NDB_STORED_USER ON *.* TO 'nssfusr'@'%';
    4. Run the following commands to grant Application User permission on Provisional Database and State Database:
      1. NSSF Application User on Provisional Database:
        $ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON <DB Name>.* TO '<NSSF APPLICATION Username>'@'%'; 

        Example: Site 1

        $ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON nssfProvSite1DB.* TO 'nssfusr'@'%';

        Example Site 2:

        $ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON nssfProvSite2DB.* TO 'nssfusr'@'%';

      2. NSSF Application User on State Database:
        $ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON <DB Name>.* TO '<NSSF APPLICATION Username>'@'%'; 

        For example:

        $ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON nssfStateDB.* TO 'nssfusr'@'%';

    5. Run the following command to grant read permission to NSSF Application User for replication_info:
      $ GRANT SELECT ON replication_info.* TO '<NSSF APPLICATION Username>'@'%';

      For example:

      $ GRANT SELECT ON replication_info.* TO 'nssfusr'@'%';

    6. Run the following command to grant read permission to Privileged User for replication_info:
      $ GRANT SELECT ON replication_info.* TO '<NSSF Privileged Username>'@'%';

      For example:

      $ GRANT SELECT ON replication_info.* TO 'nssfprivilegedusr'@'%';

  7. Run the following command to flush privileges:
    FLUSH PRIVILEGES;
  8. Exit from MySQL prompt and SQL nodes.
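  Optionally, before exiting in step 8, you can verify the privileges assigned to each user by running the following statements at the MySQL prompt. This is an illustrative check using the example user names from this procedure:
    SHOW GRANTS FOR 'nssfprivilegedusr'@'%';
    SHOW GRANTS FOR 'nssfusr'@'%';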
2.2.1.6 Configuring Resource Quota and Limit Range

Configuring Resource Quota

Resource quota, defined by a ResourceQuota object, provides constraints to limit combined resource consumption per namespace. You can limit the quantity of objects that can be created in your namespace by type and by the total amount of resources required.

Note:

This is an optional step. You can perform it if you want to limit the resources for a namespace.
To configure the Resource Quota for a namespace, perform the steps given below:
  1. Create an ocnssf_resource_quota.yaml file using the template given below:
    Template:
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: ocnssf-resource-quota
    spec:
      hard:
        requests.cpu: "200"
        requests.memory: 200Gi
        limits.cpu: "200"
        limits.memory: 200Gi
  2. Run the following command to create the resource quota for the given namespace:
    kubectl apply -f <path of ocnssf_resource_quota.yaml file> -n <namespace>

    For example:

    kubectl apply -f ./ocnssf_resource_quota.yaml -n ocnssf

Configuring Limit Range

Limit Range is a policy to limit the resource allocations (limits and requests) that you can specify for each applicable object kind (for example, container) in a namespace.

Note:

If Resource Quota is configured for a namespace, then it is mandatory to configure a Limit Range as well.

To configure the Limit Range for an object in a namespace, perform the steps given below:

  1. Create an ocnssf_limit_range.yaml file using the template given below:
    Template:
    apiVersion: v1
    kind: LimitRange
    metadata:
      name: ocnssf-limit-range
    spec:
      limits:
      - default:
          memory: 512Mi
          cpu: 0.5
        defaultRequest:
          memory: 256Mi
          cpu: 250m
        type: Container
  2. Run the following command to create the limit range for the given namespace:
    kubectl apply -f <path of ocnssf_limit_range.yaml file> -n <namespace>

    For example:

    kubectl apply -f ./ocnssf_limit_range.yaml -n ocnssf
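
Optionally, you can verify that the ResourceQuota and LimitRange objects were created in the namespace. The following is an illustrative check using the object names from the templates above:

kubectl get resourcequota ocnssf-resource-quota -n ocnssf
kubectl get limitrange ocnssf-limit-range -n ocnssf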

2.2.1.7 Configuring Kubernetes Secret for Accessing Database

This section explains how to configure Kubernetes secrets for accessing NSSF database.

2.2.1.7.1 Creating and Updating Secret for Privileged Database User

This section explains how to create and update Kubernetes secret for Privileged User to access the database.

  1. Run the following command to create Kubernetes secret:
    kubectl create secret generic <Privileged User secret name> --from-literal=mysql-username=<Privileged MySQL database username> --from-literal=mysql-password=<Privileged MySQL User database password> -n <Namespace>

    Where,

    <Privileged User secret name> is the secret name of the Privileged User.

    <Privileged MySQL database username> is the username of the Privileged User.

    <Privileged MySQL User database password> is the password of the Privileged User.

    <Namespace> is the namespace of NSSF deployment.

    Note:

    Note down the command used during the creation of the Kubernetes secret. This command is used for updating the secret in the future.

    For example:

    $ kubectl create secret generic privileged-db-creds --from-literal=mysql-username=nssfprivilegedusr --from-literal=mysql-password=nssfpasswd -n ocnssf
  2. Run the following command to verify the secret created:
    $ kubectl describe secret <Privileged User secret name> -n <Namespace>

    Where,

    <Privileged User secret name> is the secret name of the Privileged User.

    <Namespace> is the namespace of NSSF deployment.

    For example:

    $ kubectl describe secret privileged-db-creds -n ocnssf

    Sample output:
    Name:         privileged-db-creds
    Namespace:    ocnssf
    Labels:       <none>
    Annotations:  <none>
    
    Type:  Opaque
    
    Data
    ====
    mysql-password:  10 bytes
    mysql-username:  17 bytes
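
    Optionally, to confirm the stored values, you can decode the secret data. The following is an illustrative check using the example secret name:

    $ kubectl get secret privileged-db-creds -n ocnssf -o jsonpath='{.data.mysql-username}' | base64 -d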
    
  3. Update the command used in step 1 with string "--dry-run -o yaml" and "kubectl replace -f - -n <Namespace of NSSF deployment>". After the update is performed, use the following command:
    $ kubectl create secret generic <Privileged User secret name> --from-literal=mysql-username=<Privileged MySQL database username> --from-literal=mysql-password=<Privileged MySQL User database password> --dry-run -o yaml -n <Namespace> | kubectl replace -f - -n <Namespace>

    Where,

    <Privileged User secret name> is the secret name of the Privileged User.

    <Privileged MySQL database username> is the username of the Privileged User.

    <Privileged MySQL User database password> is the password of the Privileged User.

    <Namespace> is the namespace of NSSF deployment.

  4. Run the updated command. The following message is displayed:
    secret/<Privileged User secret name> replaced

    Where,

    <Privileged User secret name> is the updated secret name of the Privileged User.

2.2.1.7.2 Creating and Updating Secret for Application Database User

This section explains how to create and update Kubernetes secret for application user to access the database.

  1. Run the following command to create Kubernetes secret:
    $ kubectl create secret generic <Application User secret name> --from-literal=mysql-username=<Application MySQL database username> --from-literal=mysql-password=<Application MySQL User database password> -n <Namespace>

    Where,

    <Application User secret name> is the secret name of the Application User.

    <Application MySQL database username> is the username of the Application User.

    <Application MySQL User database password> is the password of the Application User.

    <Namespace> is the namespace of NSSF deployment.

    Note:

    Note down the command used during the creation of the Kubernetes secret. This command will be used for updating the secret in the future.

    For example:

    $ kubectl create secret generic ocnssf-db-creds --from-literal=mysql-username=nssfusr --from-literal=mysql-password=nssfpasswd -n ocnssf

  2. Run the following command to verify the secret created:
    $ kubectl describe secret <Application User secret name> -n <Namespace>

    Where,

    <Application User secret name> is the secret name of the Application User.

    <Namespace> is the namespace of NSSF deployment.

    For example:

    $ kubectl describe secret ocnssf-db-creds -n ocnssf

    Sample output:
    Name:         ocnssf-db-creds
    Namespace:    ocnssf
    Labels:       <none>
    Annotations:  <none>
    
    Type:  Opaque
    
    Data
    ====
    mysql-password:  10 bytes
    mysql-username:  7 bytes
    
  3. Update the command used in step 1 with string "--dry-run -o yaml" and "kubectl replace -f - -n <Namespace of NSSF deployment>". After update, the command is as follows:
    $ kubectl create secret generic <Application User secret name> --from-literal=mysql-username=<Application MySQL database username> --from-literal=mysql-password=<Application MySQL User database password> --dry-run -o yaml -n <Namespace> | kubectl replace -f - -n <Namespace>

    Where,

    <Application User secret name> is the secret name of the Application User.

    <Application MySQL database username> is the username of the Application User.

    <Application MySQL User database password> is the password of the Application User.

    <Namespace> is the namespace of NSSF deployment.

  4. Run the updated command. The following message is displayed:
    secret/<Application User secret name> replaced

    Where,

    <Application User secret name> is the updated secret name of the Application User.

2.2.1.8 Configuring Secrets for Enabling HTTPS

This section explains the steps to configure HTTPS at Ingress and Egress Gateways.

2.2.1.8.1 Managing HTTPS at Ingress Gateway

This section explains the steps to create and update the Kubernetes secret and enable HTTPS at Ingress Gateway.

Creating and Updating Secrets at Ingress Gateway

Note:

  • The passwords for TrustStore and KeyStore are stored in respective password files.
  • The process to create private keys, certificates, and passwords is at the discretion of the user or operator.
  • To create Kubernetes secret for HTTPS, the following files are required:
    • ECDSA private key and CA signed certificate of NSSF, if initialAlgorithm is ES256
    • RSA private key and CA signed certificate of NSSF, if initialAlgorithm is RS256
    • TrustStore password file
    • KeyStore password file
    • CA Root File
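
The process of generating these files is outside the scope of NSSF. The following is an illustrative sketch of how an operator might generate the RSA private key and the password files with OpenSSL; the file names match the examples below, and depending on your OpenSSL version, an additional conversion step may be needed to produce the key in PKCS#1 format:

  openssl genrsa -out rsa_private_key_pkcs1.pem 2048
  echo "<keystore-password>" > ssl_keystore.txt
  echo "<truststore-password>" > ssl_truststore.txt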

You can manage Kubernetes secrets for enabling HTTPS in NSSF using one of the following methods:

Managing Secrets Through OCCM at Ingress Gateway

To create secrets using Oracle Communications Certificate Manager (OCCM), see the "Managing Certificates" section in the Oracle Communications Cloud Native Core, Certificate Management User Guide.

After creating secrets using OCCM, you must patch them to add the KeyStore password, TrustStore password, and optionally, the ManagementCA file.

To manage the secrets using OCCM, follow the steps given below:

  1. Patch Secrets with the KeyStore Password File:
    1. Run the following command to patch the secret with a KeyStore password file:
      TLS_CRT=$(base64 < "key.txt" | tr -d '\n')
      kubectl patch secret <created_secret_name> -n <NF_name_space> -p "{\"data\":{\"key.txt\":\"${TLS_CRT}\"}}"
      For example:
      TLS_CRT=$(base64 < "key.txt" | tr -d '\n')
      kubectl patch secret server-primary-ocnssf-secret-occm -n nssfsrv -p "{\"data\":{\"key.txt\":\"${TLS_CRT}\"}}"

      Where,

      key.txt contains the KeyStore password.

      server-primary-ocnssf-secret-occm is the secret created by OCCM.

      nssfsrv is the namespace where the Network Function (NF), in this case NSSF, is deployed.

  2. Patch Secrets with the TrustStore Password File:
    1. Run the following command to patch the secret with a TrustStore password file:
      TLS_CRT=$(base64 < "trust.txt" | tr -d '\n')
      kubectl patch secret <created_secret_name> -n <NF_name_space> -p "{\"data\":{\"trust.txt\":\"${TLS_CRT}\"}}"
      For example:
      TLS_CRT=$(base64 < "trust.txt" | tr -d '\n')
      kubectl patch secret server-primary-ocnssf-secret-occm -n nssfsrv -p "{\"data\":{\"trust.txt\":\"${TLS_CRT}\"}}"

      Where,

      trust.txt contains the TrustStore password.

      server-primary-ocnssf-secret-occm is the secret created by OCCM.

  3. <Optional> Patch Secrets with the ManagementCA File:
    1. Run the following command to patch the secret with a ManagementCA file:
      TLS_CRT=$(base64 < "ManagementCA.pem" | tr -d '\n')
      kubectl patch secret <created_secret_name> -n <NF_name_space> -p "{\"data\":{\"ManagementCA.pem\":\"${TLS_CRT}\"}}"
      For example:
      TLS_CRT=$(base64 < "ManagementCA.pem" | tr -d '\n')
      kubectl patch secret server-primary-ocnssf-secret-occm -n nssfsrv -p "{\"data\":{\"ManagementCA.pem\":\"${TLS_CRT}\"}}"

      Where,

      ManagementCA.pem contains the Management CA certificate.

      server-primary-ocnssf-secret-occm is the secret created by OCCM.

Note:

To monitor the lifecycle management of certificates through OCCM, do not manually patch Kubernetes secrets to update TLS certificates or keys. Always use the OCCM GUI for any updates to ensure proper lifecycle management.

Managing Secrets Manually at Ingress Gateway

  1. Run the following command to create the secret:
    $ kubectl create secret generic <ocingress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace>

    Where,

    <ocingress-secret-name> is the secret name for Ingress Gateway.

    <ssl_ecdsa_private_key.pem> is the ECDSA private key.

    <rsa_private_key_pkcs1.pem> is the RSA private key.

    <ssl_truststore.txt> is the SSL Truststore file.

    <ssl_keystore.txt> is the SSL Keystore file.

    <caroot.cer> is the CA Root file.

    <ssl_rsa_certificate.crt> is the SSL RSA certificate.

    <ssl_ecdsa_certificate.crt> is the SSL ECDSA certificate.

    <Namespace> of NSSF deployment.

    Note:

    Note down the command used during the creation of the secret. Use the command for updating the secrets in the future.

    For example: The file names and secret name used below are the same as those provided in the custom_values.yaml file of the NSSF deployment.

    $ kubectl create secret generic ocingress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n ocnssf

    Note:

    It is recommended to use the same secret name as mentioned in the example. In case you change <ocingress-secret-name>, update the k8SecretName parameter under ingressgateway attributes section in the ocnssf_custom_values_25.1.201.yaml file.
  2. Run the following command to verify the details of the secret created:
    $ kubectl describe secret <ocingress-secret-name> -n <Namespace>

    Where,

    <ocingress-secret-name> is the secret name for Ingress Gateway.

    <Namespace> of NSSF deployment.

    For example:

    $ kubectl describe secret ocingress-secret -n ocnssf

    Sample output:

    
    Name:         ocingress-secret
    Namespace:    ocnssf
    Labels:       <none>
    Annotations:  <none>
    
    Type:  Opaque
  3. <Optional> Perform the following tasks to add, delete, or modify TLS or SSL certificates in the secret:
    • To add a certificate, run the following command:
      TLS_CRT=$(base64 < "<certificate-name>" | tr -d '\n')
      kubectl patch secret <secret-name> -p "{\"data\":{\"<certificate-name>\":\"${TLS_CRT}\"}}"

      Where,

      • <certificate-name> is the certificate file name.
      • <secret-name> is the name of the secret, for example, ocnssf-secret.

        Example:

        If you want to add a Certificate Authority (CA) Root from the caroot.cer file to the ocnssf-secret, run the following command:

        TLS_CRT=$(base64 < "caroot.cer" | tr -d '\n')
        kubectl patch secret ocnssf-secret  -p "{\"data\":{\"caroot.cer\":\"${TLS_CRT}\"}}" -n ocnssf

        Similarly, you can also add other certificates and keys to the ocnssf-secret.

    • To update an existing certificate, run the following command:
      TLS_CRT=$(base64 < "<updated-certificate-name>" | tr -d '\n')
      kubectl patch secret <secret-name> -p "{\"data\":{\"<certificate-name>\":\"${TLS_CRT}\"}}"

      Where, <updated-certificate-name> is the certificate file that contains the updated content.

      Example:

      If you want to update the private key present in the rsa_private_key_pkcs1.pem file in the ocnssf-secret, run the following command:

      TLS_CRT=$(base64 < "rsa_private_key_pkcs1.pem" | tr -d '\n') 
      kubectl patch secret ocnssf-secret -p "{\"data\":{\"rsa_private_key_pkcs1.pem\":\"${TLS_CRT}\"}}" -n ocnssf

      Similarly, you can also update other certificates and keys to the ocnssf-secret.

    • To remove an existing certificate, run the following command:
      kubectl patch secret <secret-name> -p "{\"data\":{\"<certificate-name>\":null}}"

      Where, <certificate-name> is the name of the certificate to be removed.

      The certificate must be removed when it expires or needs to be revoked.

      Example:

      To remove the CA Root from the secret, run the following command:

      kubectl patch secret ocnssf-secret  -p "{\"data\":{\"caroot.cer\":null}}" -n ocnssf
      

      Similarly, you can also remove other certificates and keys from the ocnssf-secret.

  4. To update the secret, update the command used in step 1 with string "--dry-run -o yaml" and "kubectl replace -f - -n <Namespace of OCNSSF deployment>".
    After the update is performed, use the following command:
    $ kubectl create secret generic <ocingress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> --dry-run -o yaml -n <Namespace> | kubectl replace -f - -n <Namespace>

    For example:

    $ kubectl create secret generic ocingress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --dry-run -o yaml -n ocnssf | kubectl replace -f - -n ocnssf

    Note:

    The names used in the aforementioned command must be the same as the names provided in the ocnssf_custom_values_25.1.201.yaml file in the NSSF deployment.
  5. Run the updated command.

    After the secret update is complete, the following message appears:

    secret/<ocingress-secret> replaced

Enabling HTTPS at Ingress Gateway

This step is required only when SSL settings need to be enabled on the Ingress Gateway microservice of NSSF.

  1. Enable the enableIncomingHttps parameter under the Ingress Gateway Global Parameters section in the ocnssf_custom_values_25.1.201.yaml file. For more information about the enableIncomingHttps parameter, see the global parameters section of the ocnssf_custom_values_25.1.201.yaml file.
  2. Configure the following details in the ssl section under ingressgateway attributes, in case you have changed the attributes while creating secret:
    • Kubernetes namespace
    • Kubernetes secret name holding the certificate details
    • Certificate information
    
    ingress-gateway:
      nodeselector:
        nodekey: ""
        nodevalue: ""
    
        enableIncomingHttps: false
    service:
        ssl:
          tlsVersion: TLSv1.2
    
          privateKey:
            k8SecretName: accesstoken-secret
            k8NameSpace: *ns
            rsa:
              fileName: rsa_private_key_pkcs1.pem
            ecdsa:
              fileName: ec_private_key_pkcs8.pem
    
          certificate:
            k8SecretName: accesstoken-secret
            k8NameSpace: *ns
            rsa:
              fileName: rsa_apigatewayTestCA.cer
            ecdsa:
              fileName: apigatewayTestCA.cer
    
          caBundle:
            k8SecretName: accesstoken-secret
            k8NameSpace: *ns
            fileName: caroot.cer
    
          keyStorePassword:
            k8SecretName: accesstoken-secret
            k8NameSpace: *ns
            fileName: key.txt
    
          trustStorePassword:
            k8SecretName: accesstoken-secret
            k8NameSpace: *ns
            fileName: trust.txt
    
          initialAlgorithm: RSA256
  3. Save the ocnssf_custom_values_25.1.201.yaml file.
2.2.1.8.2 Managing HTTPS at Egress Gateway

This section explains the steps to create and update the Kubernetes secret and enable HTTPS at Egress Gateway.

Creating and Updating Secrets at Egress Gateway

Note:

  • The passwords for TrustStore and KeyStore are stored in respective password files.
  • The process to create private keys, certificates, and passwords is at the discretion of the user or operator.
  • To create Kubernetes secret for HTTPS, the following files are required:
    • ECDSA private key and CA signed certificate of NSSF, if initialAlgorithm is ES256
    • RSA private key and CA signed certificate of NSSF, if initialAlgorithm is RS256
    • TrustStore password file
    • KeyStore password file

You can manage Kubernetes secrets for enabling HTTPS in NSSF using one of the following methods:

Managing Secrets Through OCCM at Egress Gateway

To create secrets using Oracle Communications Certificate Manager (OCCM), see the "Managing Certificates" section in the Oracle Communications Cloud Native Core, Certificate Management User Guide.

After creating secrets using OCCM, you must patch them to add the KeyStore password, TrustStore password, and optionally, the ManagementCA file.

To manage the secrets using OCCM, follow the steps given below:

  1. Patch Secrets with the KeyStore Password File:
    1. Run the following command to patch the secret with a KeyStore password file:
      
      TLS_CRT=$(base64 < "key.txt" | tr -d '\n')
      kubectl patch secret <created_secret_name> -n <NF_name_space> -p "{\"data\":{\"key.txt\":\"${TLS_CRT}\"}}"
      For example:
      TLS_CRT=$(base64 < "key.txt" | tr -d '\n')
      kubectl patch secret server-primary-ocnssf-secret-occm -n nssfsrv -p "{\"data\":{\"key.txt\":\"${TLS_CRT}\"}}"

      Where,

      key.txt contains the KeyStore password.

      server-primary-ocnssf-secret-occm is the secret created by OCCM.

      nssfsrv is the namespace where the Network Function (NF), in this case NSSF, is deployed.

  2. Patch Secrets with the TrustStore Password File:
    1. Run the following command to patch the secret with a TrustStore password file:
      TLS_CRT=$(base64 < "trust.txt" | tr -d '\n')
      kubectl patch secret <created_secret_name> -n <NF_name_space> -p "{\"data\":{\"trust.txt\":\"${TLS_CRT}\"}}"
      For example:
      TLS_CRT=$(base64 < "trust.txt" | tr -d '\n')
      kubectl patch secret server-primary-ocnssf-secret-occm -n nssfsrv -p "{\"data\":{\"trust.txt\":\"${TLS_CRT}\"}}"

      Where,

      trust.txt contains the TrustStore password.

      server-primary-ocnssf-secret-occm is the secret created by OCCM.

  3. <Optional> Patch Secrets with the ManagementCA File:
    1. Run the following command to patch the secret with a ManagementCA file:
      TLS_CRT=$(base64 < "ManagementCA.pem" | tr -d '\n')
      kubectl patch secret <created_secret_name> -n <NF_name_space> -p "{\"data\":{\"ManagementCA.pem\":\"${TLS_CRT}\"}}"
      For example:
      TLS_CRT=$(base64 < "ManagementCA.pem" | tr -d '\n')
      kubectl patch secret server-primary-ocnssf-secret-occm -n nssfsrv -p "{\"data\":{\"ManagementCA.pem\":\"${TLS_CRT}\"}}"

      Where,

      ManagementCA.pem contains the Management CA certificate.

      server-primary-ocnssf-secret-occm is the secret created by OCCM.

Note:

To monitor the lifecycle management of certificates through OCCM, do not manually patch Kubernetes secrets to update TLS certificates or keys. Always use the OCCM GUI for any updates to ensure proper lifecycle management.

Managing Secrets Manually at Egress Gateway

  1. Run the following command to create the secret:
    $ kubectl create secret generic <ocegress-secret-name> --from-file=<ssl_ecdsa_private_key.pem>  --from-file=<ssl_rsa_private_key.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<ssl_cabundle.crt> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace>

    Where,

    <ocegress-secret-name> is the secret name for Egress Gateway.

    <ssl_ecdsa_private_key.pem> is the ECDSA private key.

    <ssl_rsa_private_key.pem> is the RSA private key.

    <ssl_truststore.txt> is the SSL Truststore file.

    <ssl_keystore.txt> is the SSL Keystore file.

    <ssl_cabundle.crt> is the SSL CA Bundle certificate.

    <ssl_rsa_certificate.crt> is the SSL RSA certificate.

    <ssl_ecdsa_certificate.crt> is the SSL ECDSA certificate.

    <Namespace> of NSSF deployment.

    Note:

    Note down the command used during the creation of the secret. Use the command for updating the secrets in future.

    For example:

    $ kubectl create secret generic ocegress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=ssl_rsa_private_key.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=ssl_cabundle.crt --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n ocnssf

    Note:

    It is recommended to use the same secret name as mentioned in the example. In case you change <ocegress-secret-name>, update the k8SecretName parameter under egressgateway attributes section in the ocnssf_custom_values_25.1.201.yaml file.
  2. Run the following command to verify the details of the secret created:
    $ kubectl describe secret <ocegress-secret-name> -n <Namespace>

    Where,

    <ocegress-secret-name> is the secret name for Egress Gateway.

    <Namespace> of NSSF deployment.

    For example:

    $ kubectl describe secret ocegress-secret -n ocnssf

  3. Update the command used in step 1 with string "--dry-run -o yaml" and "kubectl replace -f - -n <Namespace of NSSF deployment>".
    After the update is performed, use the following command:
    kubectl create secret generic <ocegress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<ssl_rsa_private_key.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<ssl_cabundle.crt> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> --dry-run -o yaml -n <Namespace of NSSF Egress Gateway secret> | kubectl replace -f - -n <Namespace>

    For example:

    $ kubectl create secret generic ocegress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=ssl_rsa_private_key.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=ssl_cabundle.crt --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --dry-run -o yaml -n ocnssf | kubectl replace -f - -n ocnssf

    Note:

    The names used in the aforementioned command must be the same as the names provided in the ocnssf_custom_values_25.1.201.yaml file in the NSSF deployment.
  4. Run the updated command. After successful secret update, the following message is displayed:
    secret/<ocegress-secret> replaced

Enabling HTTPS at Egress Gateway

This step is required only when SSL settings need to be enabled on the Egress Gateway microservice of NSSF.
  1. Enable the enableOutgoingHttps parameter under the egressgateway attributes section in the ocnssf_custom_values_25.1.201.yaml file. For more information about the enableOutgoingHttps parameter, see the Egress Gateway section.
  2. Configure the following details in the ssl section under egressgateway attributes, in case you have changed the attributes while creating secret:
    • Kubernetes namespace
    • Kubernetes secret name holding the certificate details
    • Certificate information
    
    egress-gateway:
      nodeselector:
        nodekey: ""
        nodevalue: ""
    
        enableOutgoingHttps: false
    service:
        # Specify type of service - Possible values are :- ClusterIP, NodePort, LoadBalancer and ExternalName
        type: ClusterIP
        ssl:
          tlsVersion: TLSv1.2
    
          privateKey:
            k8SecretName: accesstoken-secret
            k8NameSpace: *ns
            rsa:
              fileName: rsa_private_key_pkcs1.pem
            ecdsa:
              fileName: ec_private_key_pkcs8.pem
    
          certificate:
            k8SecretName: accesstoken-secret
            k8NameSpace: *ns
            rsa:
              fileName: rsa_apigatewayTestCA.cer
            ecdsa:
              fileName: apigatewayTestCA.cer
    
          caBundle:
            k8SecretName: accesstoken-secret
            k8NameSpace: *ns
            fileName: caroot.cer
    
          keyStorePassword:
            k8SecretName: accesstoken-secret
            k8NameSpace: *ns
            fileName: key.txt
    
          trustStorePassword:
            k8SecretName: accesstoken-secret
            k8NameSpace: *ns
            fileName: trust.txt
    
          initialAlgorithm: RSA256
  3. Save the ocnssf_custom_values_25.1.201.yaml file.
2.2.1.9 Configuring Secrets to Enable Access Token

This section explains how to configure a secret for enabling access token.

2.2.1.9.1 Generating KeyPairs for NRF Instances

Note:

It is at the discretion of the user to create private keys and certificates, and it is not in the scope of NSSF. This section lists only samples to create KeyPairs.

Using the OpenSSL tool, you can generate KeyPairs for each of the NRF instances. Run the following commands to generate ec_private_key1.pem, ec_private_key_pkcs8.pem, and 4bc0c762-0212-416a-bd94-b7f1fb348bd4.crt files:

openssl ecparam -genkey -name prime256v1 -noout -out ec_private_key1.pem
openssl pkcs8 -topk8 -in ec_private_key1.pem -inform pem -out ec_private_key_pkcs8.pem -outform pem -nocrypt
openssl req -new -key ec_private_key_pkcs8.pem -x509 -nodes -days 365 -out 4bc0c762-0212-416a-bd94-b7f1fb348bd4.crt -subj "/C=IN/ST=KA/L=BLR/O=ORACLE/OU=CGBU/CN=ocnrf-endpoint.ocnrf.svc.cluster.local"
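
Optionally, you can inspect the generated certificate to confirm its subject and validity period. This is an illustrative check using the certificate file name from the example above:

openssl x509 -in 4bc0c762-0212-416a-bd94-b7f1fb348bd4.crt -noout -subject -dates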
2.2.1.9.2 Enabling and Configuring Access Token

To enable access token validation, configure both Helm-based and REST-based configurations on Ingress Gateway.

Configuration using Helm:

For Helm-based configuration, perform the following steps:

  1. Create a secret that stores NRF public key certificates using the following commands:
    kubectl create secret generic <secret-name> --from-file=<filename.crt> -n <Namespace>

    Where,

    <secret-name> is the secret name.

    <Namespace> is the NSSF namespace.

    <filename.crt> is the public key certificate; you can include any number of certificates in the secret.

    For example:

    kubectl create secret generic oauthsecret --from-file=4bc0c762-0212-416a-bd94-b7f1fb348bd4.crt -n ocnssf
  2. Enable the oauthValidatorEnabled parameter on Ingress Gateway by setting its value to true. Further, configure the secret and namespace on Ingress Gateway in the OAUTH CONFIGURATION section of the ocnssf_custom_values_25.1.201.yaml file using the following fields:
    • oauthValidatorEnabled
    • nfType
    • nfInstanceId
    • producerScope
    • allowedClockSkewSeconds
    • enableInstanceIdConfigHook
    • nrfPublicKeyKubeSecret
    • nrfPublicKeyKubeNamespace
    • validationType
    • producerPlmnMNC
    • producerPlmnMCC
    • oauthErrorConfigForValidationFailure
    • oauthErrorConfigForValidationFailure.errorCode
    • oauthErrorConfigForValidationFailure.errorTitle
    • oauthErrorConfigForValidationFailure.errorDescription
    • oauthErrorConfigForValidationFailure.errorCause
    • oauthErrorConfigForValidationFailure.redirectUrl
    • oauthErrorConfigForValidationFailure.retryAfter
    • oauthErrorConfigForValidationFailure.errorTrigger
    • oauthErrorConfigForValidationFailure.errorTrigger.exceptionType
    The following is a sample Helm configuration. For more information on parameters and their supported values, see Ingress Gateway Parameters.
    oauthValidatorEnabled: true      # MANDATORY_FOR_ATS to pass
      nfType: NSSF
      nfInstanceId: 9faf1bbc-6e4a-4454-a507-aef01a101a01    # MANDATORY_FOR_ATS to pass
      producerScope: nnssf-nsselection,nnssf-nssaiavailability
      allowedClockSkewSeconds: 0
      enableInstanceIdConfigHook: true
      nrfPublicKeyKubeSecret: oauthsecret      # MANDATORY_FOR_ATS (needs to be exact "oauthsecret" for ats to pass)
      nrfPublicKeyKubeNamespace: *ns
      validationType: relaxed                  # MANDATORY_FOR_ATS (needs to be "relaxed" for ats to pass)
      signValidationServiceMeshEnabled: false
      producerPlmnMNC: 14
      producerPlmnMCC: 310
      oauthErrorConfigForValidationFailure:
        errorCode: 401
        errorTitle: "Validation failure"
        errorDescription: "UNAUTHORIZED"
        errorCause: "oAuth access Token validation failed"
        redirectUrl:
        retryAfter:
        errorTrigger:
          - exceptionType: OAUTH_CERT_EXPIRED
            errorCode: 408
            errorCause: certificate has expired
            errorTitle:
            errorDescription:
            retryAfter:
            redirectUrl:
          - exceptionType: OAUTH_MISMATCH_IN_KID
            errorCode: 407
            errorCause: kid configured does not match with the one present in the token
            errorTitle:
            errorDescription:
            retryAfter:
            redirectUrl:
          - exceptionType: OAUTH_PRODUCER_SCOPE_NOT_PRESENT
            errorCode: 406
            errorCause: producer scope is not present in token
            errorTitle:
            errorDescription:
            retryAfter:
            redirectUrl:
          - exceptionType: OAUTH_PRODUCER_SCOPE_MISMATCH
            errorCode: 403
            errorCause: produce scope in token does not match with the configuration
            errorTitle:
            errorDescription:
            retryAfter:
            redirectUrl:
          - exceptionType: OAUTH_MISMATCH_IN_NRF_INSTANCEID
            errorCode: 404
            errorCause: nrf id configured does not match with the one present in the token
            errorTitle:
            errorDescription:
            retryAfter:
            redirectUrl:
          - exceptionType: OAUTH_PRODUCER_PLMNID_MISMATCH
            errorCode: 403
            errorCause: producer plmn id in token does not match with the configuration
            errorTitle:
            errorDescription:
            retryAfter:
            redirectUrl:
          - exceptionType: OAUTH_AUDIENCE_NOT_PRESENT_OR_INVALID
            errorCode: 402
            errorCause: audience in token does not match with the configuration
            errorTitle:
            errorDescription:
            retryAfter:
            redirectUrl:
          - exceptionType: OAUTH_TOKEN_INVALID
            errorCode: 401
            errorCause: oauth token is corrupted
            errorTitle:
            errorDescription:
            retryAfter:
            redirectUrl:
      oauthErrorConfigOnTokenAbsence:
        errorCode: 400
        errorTitle: "Token not present"
        errorDescription: "UNAUTHORIZED"
        errorCause: "oAuth access Token is not present"
        redirectUrl:
        retryAfter:

Configuration using REST API

After the Helm configuration, send REST requests to the Ingress Gateway to use the configured public key certificates. With the REST-based configuration, you can distinguish between the certificates configured for different NRFs and use these certificates to validate the token received from a specific NRF.

For more information about REST API configuration, see "OAuth Validator Configuration" section in Cloud Native Core, Network Slice Selection Function REST Specification Guide.

Note:

If a configured public key certificate expires or a new certificate is added for a different NRF, change the existing configuration as follows:

  • Delete an existing secret and create a new secret with updated public key certificate. To delete a secret, run the following command:
    kubectl delete secret <secret-name> -n <namespace>

    Where,

    <secret-name> is the secret name.

    <Namespace> is the NSSF namespace.

    For example:

    kubectl delete secret oauthsecret -n ocnssf
  • Send the certificate configuration update request using the REST API. The request should include the keyIdList and instanceIdList with the new certificates.
2.2.1.10 Configuring NSSF to support Aspen Service Mesh

NSSF leverages the Platform Service Mesh (for example, Aspen Service Mesh) for all internal and external TLS communication. The service mesh integration provides inter-NF communication and allows the API gateway to work along with the service mesh. The service mesh supports the services by deploying a special sidecar proxy container in each pod to intercept all network communication between microservices.

Supported ASM version: 1.14.6

For ASM installation and configuration details, see the official Aspen Service Mesh website.

Aspen Service Mesh (ASM) configurations are categorized as follows:

  • Control Plane: Involves adding labels or annotations to inject the sidecar. The control plane configurations are part of the NF Helm chart.
  • Data Plane: Helps in traffic management, such as handling NF call flows, by adding Service Entries (SE), Destination Rules (DR), Envoy Filters (EF), and other resources. This configuration can be done using the ocnssf_servicemesh_config_custom_values_25.1.201.yaml file.

Configuring Service Mesh Data Plane

Data Plane configuration consists of the following Custom Resource Definitions (CRDs):

  • Service Entry (SE)
  • Destination Rule (DR)
  • Envoy Filter (EF)
  • Peer Authentication (PA)
  • Virtual Service (VS)
  • Request Authentication (RA)
  • Policy Authorization (PA)

Note:

Use the ocnssf_servicemesh_config_custom_values_25.1.201.yaml file to add or remove the CRDs that you may require, for example, due to service mesh upgrades or to configure features across different releases.

The Data Plane configuration is applicable in the following scenarios; an illustrative ServiceEntry manifest is shown after this list. For more information on Custom Resources (CRs), see Service Mesh CRDs.

  • Service Entry: Enables adding additional entries into Sidecar's internal service registry, so that auto-discovered services in the mesh can access or route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints).
  • Destination Rule: Defines policies that apply to traffic intended for service after routing has occurred. These rules specify configuration for load balancing, connection pool size from the sidecar, and outlier detection settings to detect and evict unhealthy hosts from the load-balancing pool.
  • Envoy Filter: Provides a mechanism to customize the Envoy configuration generated by Istio Pilot. Use Envoy Filter to modify values for certain fields, add specific filters, or even add entirely new listeners, clusters, and so on.
  • Peer Authentication: Used for service-to-service authentication to verify the client making the connection.
  • Virtual Service: A Virtual Service defines a set of traffic routing rules to apply when a host is addressed. Each routing rule defines matching criteria for the traffic of a specific protocol. If the traffic is matched, then it is sent to a named destination service (or subset or version of it) defined in the registry.
  • Request Authentication: Used for end-user authentication to verify the credential attached to the request.
  • Policy Authorization: Sidecar Authorization Policy enables access control on workloads in the mesh. Authorization policy supports CUSTOM, DENY, and ALLOW actions for access control. When CUSTOM, DENY, and ALLOW actions are used for a workload at the same time, the CUSTOM action is evaluated first, then the DENY action, and finally the ALLOW action.
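
The following is a minimal ServiceEntry manifest sketch, for illustration only. The name, host, and port values are placeholders and are not taken from the NSSF service mesh charts; the actual CRs are generated from the ocnssf_servicemesh_config_custom_values_25.1.201.yaml file:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: nrf-service-entry                          # placeholder name
spec:
  hosts:
    - ocnrf-endpoint.ocnrf.svc.cluster.local       # placeholder NRF host
  exportTo:
    - "."
  ports:
    - name: http2
      number: 8080                                 # placeholder port
      protocol: HTTP2
  resolution: DNS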

Service Mesh Configuration File

A sample ocnssf_servicemesh_config_custom_values_25.1.201.yaml is available in the Custom_Templates folder. For downloading the file, see Customizing NSSF.

Table 2-13 Supported Fields in CRD

CRD: Supported Fields

Service Entry: hosts, exportTo, addresses, ports.name, ports.number, ports.protocol, resolution

Destination Rule: host, mode, sbitimers, tcpConnectTimeout, tcpKeepAliveProbes, tcpKeepAliveTime, tcpKeepAliveInterval

Envoy Filters: labelselector, applyTo, filtername, operation, typeconfig, configkey, configvalue, stream_idle_timeout, max_stream_duration, patchContext, networkFilter_listener_port, transport_socket_connect_timeout, filterChain_listener_port, route_idle_timeout, route_max_stream_duration, httpRoute_routeConfiguration_port, vhostname

Peer Authentication: labelselector, tlsmode

Virtual Service: host, destinationhost, port, exportTo, retryon, attempts, timeout

Request Authentication: labelselector, issuer, jwks/jwksUri

Policy Authorization: labelselector, action, hosts, paths, xfccvalues
2.2.1.10.1 Predeployment Configurations

This section explains the predeployment configuration procedure to install NSSF with Service Mesh support.

Follow the procedure as mentioned below:

  1. Create NSSF namespace
    1. Run the following command to verify whether the required namespace already exists in the system:
      $ kubectl get namespaces
    2. In the output of the above command, check if the required namespace is available. If it is not available, run the following command to create the namespace:
      $ kubectl create namespace <namespace>

      Where,

      <Namespace> is the NSSF namespace.

      For example:

      $ kubectl create namespace ocnssf

2.2.1.10.2 Installing Service Mesh Configuration Charts

Perform the below steps to configure Service Mesh CRDs using the Service Mesh Configuration chart:

  1. Download the service mesh chart ocnssf-servicemesh-config-25.1.201.0.0.tgz from the ocnssf_pkg_25_1_201_0_0.tgz package file.
  2. Configure the ocnssf_servicemesh_config_custom_values_25.1.201.yaml file as follows:
    • Modify only the "SERVICE-MESH Custom Resource Configuration" section to configure the CRDs as needed. For example, to add or modify the required attributes of a ServiceEntry CR, configure their values under the serviceEntries: section of "SERVICE-MESH Custom Resource Configuration". You can also comment out the CRDs that you do not need.
  3. Install the Service Mesh Configuration Charts as below:
    • Run the below Helm install command in the namespace where you want to apply the changes:
      helm install <helm-release-name> <charts> --namespace <namespace-name> -f <custom-values.yaml-filename>

      For example:

      helm install ocnssf-servicemesh-config ocnssf-servicemesh-config-25.1.201.0.0.tgz --namespace ocnssf -f ocnssf_servicemesh_config_custom_values_25.1.201.yaml
    • Run the below command to verify if all CRDs are created:
      kubectl get <CRD-Name> -n <Namespace>

      For example:

      kubectl get se,dr,peerauthentication,envoyfilter,vs -n ocnssf

      Note:

      Any modification to the existing CRDs or adding new CRDs can be done by updating the ocnssf_servicemesh_config_custom_values_25.1.201.yaml file and running Helm upgrade.
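
      For example, assuming the same release name and chart used in the installation command above, an illustrative upgrade command would be:

      helm upgrade ocnssf-servicemesh-config ocnssf-servicemesh-config-25.1.201.0.0.tgz --namespace ocnssf -f ocnssf_servicemesh_config_custom_values_25.1.201.yaml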
2.2.1.10.3 Deploying NSSF with Service Mesh
  1. Create a namespace label for automatic sidecar injection so that sidecars are added in all the pods spawned in the NSSF namespace:
    $ kubectl label ns <Namespace> istio-injection=enabled

    Where,

    <Namespace> is the NSSF namespace.

    For example:

    $ kubectl label ns ocnssf istio-injection=enabled
  2. Update ocnssf_custom_values_25.1.201.yaml with the following annotations:
    1. Update the global section for adding annotation for the following use cases:
      1. To scrape metrics from NSSF pods, add oracle.com/cnc: "true" annotation.

        Note:

        This step is required only if OSO is deployed.
      2. Enable Prometheus to scrape metrics from NSSF pods by adding "9090" to traffic.sidecar.istio.io/excludeInboundPorts annotation.
      3. Enable Coherence to form cluster in ASM based deployment by adding "9090,8095,8096,7,53" to traffic.sidecar.istio.io/excludeInboundPorts annotation.
        For example:
        global:
          customExtension:
            allResources:
               labels: {}
               annotations: {}
            lbDeployments:
              annotations:
                oracle.com/cnc: "true"
                traffic.sidecar.istio.io/excludeInboundPorts: "9090,8095,8096,7,53" 
            nonlbDeployments:
              annotations:
                oracle.com/cnc: "true"
                traffic.sidecar.istio.io/excludeInboundPorts: "9090,8095,8096,7,53"
    2. Update the following attributes under the global section:

      Check if the serviceMeshCheck flag is set to true in the Global parameter section.

      Note:

      The serviceMeshCheck parameter is mandatory and the other two parameters are read-only.
       # Mandatory: This parameter must be set to "true" when NSSF is deployed with the Service Mesh
      serviceMeshCheck: true
      
        # Mandatory: needs to be set with correct url format http://127.0.0.1:<istio management port>/quitquitquit" if NSSF is deployed with the Service Mesh.
       istioSidecarQuitUrl: "http://127.0.0.1:15000/quitquitquit"
      
        # Mandatory: needs to be set with correct url format http://127.0.0.1:<istio management port>/ready" if NSSF is deployed with the Service Mesh.
      istioSidecarReadyUrl: "http://127.0.0.1:15000/ready"  
    3. Change ingress-gateway Service Type to ClusterIP under ingress-gateway's global section:
      global:
          # Service Type
          type: ClusterIP
    4. Update the following attributes in the egress-gateway section to enforce the Egress Gateway container to send non-TLS egress requests irrespective of the HTTP scheme value of the message. This is because, in a Service Mesh-based deployment, the sidecar container takes care of establishing the TLS connection with the peer.
      egress-gateway:
        # Mandatory: This flag needs to set it "true" if Service Mesh would be present where ocnssf will be deployed
        # This is to enable egress gateway to forward http2 (and not https) requests even when it receives https requests
        httpRuriOnly: "true"
    5. Update the following sidecar resource configuration in Global section:
      deployment:
          customExtension:
            labels: {}
            annotations: {
      	  # Enable this section for service-mesh based installation
      		sidecar.istio.io/proxyCPU: "2",
      		sidecar.istio.io/proxyCPULimit: "2",
      		sidecar.istio.io/proxyMemory: "2Gi",
      		sidecar.istio.io/proxyMemoryLimit: "2Gi"
      	  }
  3. Install NSSF using updated ocnssf_custom_values_25.1.201.yaml. For more information about NSSF installation, see Installation Tasks.
2.2.1.10.4 Postdeployment Configuration

This section explains the post-deployment configurations to install NSSF with support for service mesh.

Enable Inter-NF communication

For every new NF participating in call flows where NSSF acts as a client, a DestinationRule and a ServiceEntry must be created in the NSSF namespace to enable communication.

Following are the inter-NF communications with NSSF:
  • NSSF to AMF communication (for notification)
  • NSSF to NRF communication (for registration and heartbeat)

Create the CRDs using the ocnssf_servicemesh_config_custom_values_25.1.201.yaml file in the Custom_Templates folder; an illustrative DestinationRule sketch follows.
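
The following is a minimal DestinationRule manifest sketch, for illustration only. The name and host are placeholders and are not taken from the NSSF service mesh charts; the actual CRs are generated from the ocnssf_servicemesh_config_custom_values_25.1.201.yaml file:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: nrf-destination-rule                       # placeholder name
spec:
  host: ocnrf-endpoint.ocnrf.svc.cluster.local     # placeholder NRF host
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL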

2.2.1.10.5 Redeploying NSSF without Service Mesh

This section describes the steps to redeploy NSSF without Service Mesh resources.

  1. To disable Service Mesh, run the following command:
    kubectl label ns <ocnssf_namespace> istio-injection=disabled

    Where,

    <ocnssf_namespace> is the namespace of NSSF.

    For example:
    kubectl label ns ocnssf istio-injection=disabled
  2. Update the annotations in the ocnssf_custom_values_25.1.201.yaml file.
    1. Remove the traffic.sidecar.istio.io/excludeInboundPorts annotation and retain the oracle.com/cnc: "true" annotation to scrape metrics from NSSF pods.

      Note:

      This step is required only if OSO is deployed.
      For example:
      global:
        customExtension:
          allResources:
             labels: {}
             annotations: {}
          lbDeployments:
            annotations:
              oracle.com/cnc: "true"
            
          nonlbDeployments:
            annotations:
              oracle.com/cnc: "true"    
    2. Update the following attributes under the global section:

      Disable the Service Mesh flag by ensuring that the serviceMeshCheck flag is set to false in the Global parameter section.

      Note:

      The serviceMeshCheck parameter is mandatory and the other two parameters are read-only.
      # Mandatory: This parameter must be set to "true" when NSSF is deployed with the Service Mesh
       serviceMeshCheck: false
      
        # Mandatory: needs to be set with correct url format http://127.0.0.1:<istio management port>/quitquitquit" if NSSF is deployed with the Service Mesh.
        istioSidecarQuitUrl: "http://127.0.0.1:15000/quitquitquit"
      
        # Mandatory: needs to be set with correct url format http://127.0.0.1:<istio management port>/ready" if NSSF is deployed with the Service Mesh.
       istioSidecarReadyUrl: "http://127.0.0.1:15000/ready" 
    3. Change Ingress-Gateway Service Type to LoadBalancer under ingress-gateway's global section:
      global:
          # Service Type
          type: LoadBalancer
    4. Update the following attributes in the egress-gateway section so that the Egress Gateway container does not send non-TLS egress requests irrespective of the HTTP scheme value of the message. Without a service mesh, there is no sidecar container to establish the TLS connection with the peer.
      egress-gateway:
        # Mandatory: This flag needs to set it "false" if Service Mesh would not be present where ocnssf will be deployed
         httpRuriOnly: "false"
    5. Remove the sidecar resource configuration in Global section:
      deployment:
          customExtension:
            labels: {}
            annotations: {}
  3. Upgrade or install NSSF using updated ocnssf_custom_values_25.1.201.yaml. For more information about NSSF installation, see Installation Tasks.
2.2.1.10.6 Deleting Service Mesh Resources

This section describes the steps to delete Service Mesh resources.

  1. To delete Service Mesh resources, run the following command:
    helm delete <helm-release-name> -n <namespace-name>

    Where,

    • <helm-release-name> is the release name used by the helm command. This release name must be the same as the release name used for Service Mesh CR creation.
    • <namespace-name> is the deployment namespace used by Helm command.

    For example:

    helm delete ocnssf-servicemesh-config -n ocnssf
  2. To verify if Service Mesh resources are deleted, run the following command:
    kubectl get <CRD-Name> -n <Namespace>

    For example:

    kubectl get se,dr,peerauthentication,envoyfilter,vs -n ocnssf
2.2.1.11 Configuring Network Policies

Kubernetes network policies allow you to define ingress or egress rules based on Kubernetes resources such as Pod, Namespace, IP, and Port. These rules are selected based on Kubernetes labels in the application.

These network policies enforce access restrictions for all the applicable data flows except the communication from Kubernetes node to pod for invoking container probe.
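
For illustration, the following is a minimal NetworkPolicy sketch of the kind installed by the NSSF network policy chart. The name is indicative only; the label app.kubernetes.io/part-of=ocnssf and the Prometheus port 9090 are reused from the examples in this section, and the actual policies are provided in the ocnssf-network-policy chart:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-allow-ingress-prometheus          # illustrative name only
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/part-of: ocnssf
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 9090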

Note:

Configuring network policy is optional. Based on the security requirements, network policy can be configured.

For more information on the network policy, see https://kubernetes.io/docs/concepts/services-networking/network-policies/.

Note:

  • If the traffic is unexpectedly blocked or allowed between the pods even after applying network policies, check whether any existing policy affects the same pod or set of pods and alters the overall cumulative behavior.
  • If the default ports of services such as Prometheus, Database, or Jaeger are changed, or if the Ingress or Egress Gateway names are overridden, update them in the corresponding network policies.

Installing Network Policies

Prerequisite

Network Policies are implemented by using the network plug-in. To use network policies, you must be using a networking solution that supports Network Policy.

Note:

For a fresh installation, it is recommended to install Network Policies before installing NSSF. However, if NSSF is already installed, you can still install the Network Policies.
Following is the procedure for installing network policy:
  1. Open the ocnssf_network_policy_custom_values_25.1.201.yaml file provided in the release package zip file. For downloading the file, see Downloading the NSSF Package.
  2. Update the ocnssf_network_policy_custom_values_25.1.201.yaml file as per the requirement. For more information on the parameters, see the Table 2-14 parameter table.
  3. Run the following command to install the network policies:
    helm install <helm-release-name> <network-policy>/ -n <namespace> -f <custom-value-file>

    Where,

    <helm-release-name>: Helm release name for ocnssf-network-policy.

    <network-policy>: Directory that contains the ocnssf-network-policy Helm chart.

    <custom-value-file>: Custom values file for ocnssf-network-policy.

    <namespace>: Namespace must be the NSSF namespace.

    For example:

    helm install ocnssf-network-policy ocnssf-network-policy/ -n ocnssf -f ocnssf_network_policy_custom_values_25.1.201.yaml
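
    Optionally, you can preview the manifests that the chart would render without applying them by adding the standard Helm --dry-run flag to the install command, as a usage sketch:

    helm install ocnssf-network-policy ocnssf-network-policy/ -n ocnssf -f ocnssf_network_policy_custom_values_25.1.201.yaml --dry-run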

Note:

  • The connections created before installing network policy are not impacted by the new network policy. Only the new connections are impacted.
  • If you are using the ATS suite along with network policies, you must install NSSF and ATS in the same namespace.
  • While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.

Upgrading Network Policies

To add, delete, or update network policies:
  1. Modify the ocnssf_network_policy_custom_values_25.1.201.yaml file to add, update, or delete network policies.
  2. Run the following command to upgrade the network policies:
    helm upgrade <helm-release-name> <network-policy>/ -n <namespace> -f <custom-value-file>

    Sample command:

    helm upgrade ocnssf-network-policy ocnssf-network-policy/ -n ocnssf -f ocnssf_network_policy_custom_values_25.1.201.yaml

Note:

While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.

Verifying Network Policies

Run the following command to verify that the network policies have been applied successfully:

kubectl get networkpolicy -n <namespace>

Where,

<namespace>: Namespace must be the NSSF namespace.

Sample command:

kubectl get networkpolicy -n ocnssf

Sample output:

NAME                           POD-SELECTOR                                AGE
allow-egress-database          app.kubernetes.io/part-of=ocnssf            21h
allow-egress-dns               app.kubernetes.io/part-of=ocnssf            21h
allow-egress-jaeger            app.kubernetes.io/part-of=ocnssf            21h
allow-egress-k8-api            app.kubernetes.io/part-of=ocnssf            21h
allow-egress-sbi               app.kubernetes.io/name=egressgateway        21h
allow-egress-to-nssf-pods      app.kubernetes.io/part-of=ocnssf            21h
allow-from-node-port           app=ocats-nssf                              21h
allow-ingress-from-console     app.kubernetes.io/name=nssfconfiguration    21h
allow-ingress-from-nssf-pods   app.kubernetes.io/part-of=ocnssf            21h
allow-ingress-prometheus       app.kubernetes.io/part-of=ocnssf            21h
allow-ingress-sbi              app.kubernetes.io/name=ingressgateway       21h
deny-egress-all                app.kubernetes.io/part-of=ocnssf            21h
deny-ingress-all               app.kubernetes.io/part-of=ocnssf            21h
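
To inspect the rules of an individual policy from the list, you can use the standard kubectl describe command, for example (policy name taken from the sample output above):

kubectl describe networkpolicy allow-ingress-sbi -n ocnssf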

Uninstalling Network Policies

Run the following command to uninstall the network policies:
helm uninstall <helm-release-name> -n <namespace>

Sample command:

helm uninstall ocnssf-network-policy -n ocnssf

Note:

While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.

Configuration Parameters for Network Policies

Table 2-14 Supported Kubernetes Resource for Configuring Network Policy

Parameter Description Default Value
apiVersion

This is a mandatory parameter.

This indicates the Kubernetes API version for access control.

Note: This is the supported API version for network policy. This is a read-only parameter.

networking.k8s.io/v1
kind

This is a mandatory parameter.

This indicates the REST resource that this object represents.

Note: This is a read-only parameter.

NetworkPolicy

Table 2-15 Configuration Parameters for Network Policy

Parameter Description Default Value
metadata.name This is a mandatory parameter.

This indicates the unique name for the network policy.

{{ .metadata.name }}
spec.{} This is a mandatory parameter.

This consists of all the information needed to define a particular network policy in the given namespace.

Note: NSSF supports the spec parameters defined in Kubernetes Resource Category.

NA

For more information about this functionality, see Network Policies in Oracle Communications Cloud Native Core, Network Slice Selection Function User Guide.

2.2.1.12 Configuring Traffic Segregation

This section provides information on how to configure Traffic Segregation in NSSF. For a description of the "Traffic Segregation" feature, see the "Traffic Segregation" section in the "NSSF Supported Features" chapter of Oracle Communications Cloud Native Core, Network Slice Selection Function User Guide.

To use one or multiple interfaces, you must configure annotations (for both ingress and egress gateway) in the deployment.customExtension.annotations parameter of the ocnssf_custom_values_25.1.201.yaml file.

Configuration at Ingress Gateway

Use the following annotation to configure traffic segregation at ingress-gateway.deployment.customExtension.annotations in the ocnssf_custom_values_25.1.201.yaml file:

Annotation for a single interface

k8s.v1.cni.cncf.io/networks: default/<network interface>@<network interface>
oracle.com.cnc/cnlb: '[{"backendPortName": "<igw port name>", "cnlbIp": "<external IP>","cnlbPort":"<port number>"}]'

Here,

  • k8s.v1.cni.cncf.io/networks: Contains all the network attachment information the pod uses for network segregation.
  • oracle.com.cnc/cnlb: Defines the service IP and port configurations that the deployment uses for ingress load balancing.

    Where,

    • cnlbIp is the front-end IP utilized by the application.
    • cnlbPort is the front-end port used in conjunction with the CNLB IP for load balancing.
    • backendPortName is the backend port name of the container that needs load balancing, retrievable from the deployment or pod spec of the application.

Sample annotation for a single interface:


k8s.v1.cni.cncf.io/networks: default/nf-sig1-int8@nf-sig1-int8
oracle.com.cnc/cnlb: '[{"backendPortName": "igw-port", "cnlbIp": "10.123.155.16","cnlbPort":"80"}]' 
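
As a reference sketch only, the single-interface sample above could be placed in the custom values file as follows; the nesting shown is derived from the ingress-gateway.deployment.customExtension.annotations parameter path stated above and may differ from the actual chart layout:

ingress-gateway:
  deployment:
    customExtension:
      annotations:
        k8s.v1.cni.cncf.io/networks: default/nf-sig1-int8@nf-sig1-int8
        oracle.com.cnc/cnlb: '[{"backendPortName": "igw-port", "cnlbIp": "10.123.155.16","cnlbPort":"80"}]'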

Annotation for two or multiple interfaces

k8s.v1.cni.cncf.io/networks: default/<network interface1>@<network interface1>, default/<network interface2>@<network interface2>
oracle.com.cnc/cnlb: '[{"backendPortName": "<igw port name>", "cnlbIp": "<external IP1>, <external IP2>","cnlbPort":"<port number>"}]'
oracle.com.cnc/ingressMultiNetwork: "true" 
Sample annotation for two or multiple interfaces:

k8s.v1.cni.cncf.io/networks: default/nf-sig1-int8@nf-sig1-int8,default/nf-sig2-int9@nf-sig2-int9
oracle.com.cnc/cnlb: '[{"backendPortName": "igw-port", "cnlbIp": "nf-sig1-int8/10.123.155.16,nf-sig2-int9/10.123.155.30","cnlbPort":"80"}]'
oracle.com.cnc/ingressMultiNetwork: "true"
Sample annotation for multiport:
k8s.v1.cni.cncf.io/networks: default/nf-oam-int5@nf-oam-int5
oracle.com.cnc/cnlb: '[{"backendPortName": "query", "cnlbIp": "10.75.180.128","cnlbPort": "80"}, {"backendPortName": "admin", "cnlbIp": "10.75.180.128", "cnlbPort":"16687"}]'

In the above example, each item in the list refers to a different backend port name with the same CNLB IP, but the ports for the front end are distinct.

Ensure that the backend port name matches the name of the container port, in the deployment's port list, that needs to be load balanced. The CNLB IP represents the external IP of the service, and cnlbPort is the external-facing port:
ports:
- containerPort: 16686
  name: query
  protocol: TCP
- containerPort: 16687
  name: admin
  protocol: TCP

Configuration at Egress Gateway

Use the following annotation to configure traffic segregation at egress-gateway.deployment.customExtension.annotations in the ocnssf_custom_values_25.1.201.yaml file:

Annotation for a single interface
k8s.v1.cni.cncf.io/networks: default/<network interface>@<network interface>

Where,

  • k8s.v1.cni.cncf.io/networks: Contains all the network attachment information the pod uses for network segregation.
Sample annotation for a single interface:
k8s.v1.cni.cncf.io/networks: default/nf-sig-egr1@nf-sig-egr1
Sample annotation for multiple interfaces:
k8s.v1.cni.cncf.io/networks: default/nf-oam-egr1@nf-oam-egr1,default/nf-sig-egr1@nf-sig-egr1
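
Similarly, as a reference sketch only, the egress annotation could be nested under the egress-gateway.deployment.customExtension.annotations parameter path stated above; the surrounding keys are derived from that path and may differ from the actual chart layout:

egress-gateway:
  deployment:
    customExtension:
      annotations:
        k8s.v1.cni.cncf.io/networks: default/nf-sig-egr1@nf-sig-egr1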

Note:

  • The network attachments are deployed only as a part of the cluster installation.
  • The network attachment name should be unique for all the pods.

For information about the above-mentioned annotations, see "Configuring Cloud Native Load Balancer (CNLB)" in Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

2.2.2 Installation Tasks

This section explains how to install Network Slice Selection Function (NSSF).

Note:

  • Before installing NSSF, you must complete Prerequisites and Preinstallation Tasks.
  • In a multisite georedundant setup, perform the steps explained in this section on all the georedundant sites.

Caution:

The readOnlyRootFilesystem parameter is set to false by default. Do not set this parameter to true, as NSSF requires elevated privileges during installation. Enabling a read-only root filesystem would block the necessary permissions, resulting in installation failure.
2.2.2.1 Installing NSSF Package

To install the NSSF package, perform the following steps:

  1. Run the following command to access the extracted package:
    cd ocnssf-<release_number>

    For example:

    cd ocnssf-25.1.201.0.0

  2. Customize the ocnssf_custom_values_25.1.201.yaml file with the required deployment parameters. See the Customizing NSSF chapter to customize the file. For more information about predeployment parameter configurations, see Preinstallation Tasks.

    Note:

    • In case of multisite georedundant setups, configure nfInstanceId uniquely for each NSSF site.
    • Ensure that the nfInstanceId configuration in the global section is the same as that in the appProfile section of the NRF client.
  3. <Optional> Customize the ocnssf_servicemesh_config_custom_values_25.1.201.yaml file with the required parameters if you are creating the Destination Rule and Service Entry using the YAML file. See Configuring NSSF to support Aspen Service Mesh for the sample template.
  4. <Optional> Run the following command to create the Destination Rule and Service Entry using the YAML file:
    helm install <helm-release-name> <charts> --namespace <namespace-name> -f <custom-values.yaml-filename>

    For example:

    helm install ocnssf-servicemesh-config ocnssf-servicemesh-config/ --namespace ocnssf -f ocnssf_servicemesh_config_custom_values_25.1.201.yaml

  5. Run the following command to install NSSF:
    1. Using local Helm chart:
      helm install <helm-release-name> <charts> --namespace <namespace-name> -f <custom-values.yaml-filename>

      For example:

      helm install ocnssf ocnssf/ --namespace ocnssf -f ocnssf_custom_values_25.1.201.yaml

    2. Using chart from Helm repo:
      helm install <helm-release-name> <helm_repo/helm_chart> --version <chart_version> --namespace <namespace-name> -f <custom-values.yaml-filename>

      For example:

      helm install ocnssf ocnssf-helm-repo/ocnssf --version 25.1.201 --namespace ocnssf -f ocnssf_custom_values_25.1.201.yaml

    Where,

    <helm_repo> is the location where the Helm charts are stored.

    <helm_chart> is the chart to deploy the microservices.

    <helm-release-name> is the release name used by the helm command.

    Note:

    <helm-release-name> must not exceed 20 characters.

    <namespace-name> is the deployment namespace used by the helm command.

    <custom-values.yaml-filename> is the name of the custom values YAML file (including its location).

Caution:

Do not exit from the helm install command manually. After you run the helm install command, it takes some time to install all the services. Pressing "Ctrl+C" to exit from the helm install command may lead to anomalous behavior.

2.2.3 Postinstallation Tasks

This section explains the postinstallation tasks for NSSF.

2.2.3.1 Verifying Installation

To verify the installation:

  1. Run the following command:
    helm status <helm-release> -n <namespace>

    Where,

    <helm-release> is the Helm release name of NSSF.

    <namespace> is the namespace of NSSF deployment.

    For example:

    helm status ocnssf -n ocnssf

    In the output, if STATUS shows deployed, the installation is successful.

    Sample output:

    NAME: ocnssf
    LAST DEPLOYED: Fri Sep 18 10:08:03 2020 
    NAMESPACE: ocnssf
    STATUS: deployed
    REVISION: 1
    
  2. Run the following command to verify if the pods are up and active:
    kubectl get jobs,pods -n <Namespace>

    Where,

    <Namespace> is the namespace where NSSF is deployed.

    For example:

    kubectl get pods -n ocnssf

    In the output, the STATUS column of all the pods must be Running and the READY column of all the pods must be n/n, where n is the number of containers in the pod.

  3. Run the following command to verify if the services are deployed and active:
    kubectl get services -n <Namespace>

    For example:

    kubectl get services -n ocnssf

Note:

If the installation is unsuccessful or the STATUS of the pods is not Running, perform the troubleshooting steps provided in the Oracle Communications Cloud Native Core, Network Slice Selection Function Troubleshooting Guide.
2.2.3.2 Performing Helm Test

This section describes how to perform a sanity check of the NSSF installation using Helm test. The pods to be checked are selected based on the namespace and label selector configured for the Helm test.

Helm Test is a feature that validates successful installation of NSSF and determines if the NF is ready to take traffic.

Note:

  • Helm test can be performed only with Helm 3.
  • Helm Test expects all of the pods of a given microservice to be in the READY state for a successful result. However, in the current release, the NRF Client Management microservice runs in Active/Standby mode for multi-pod support. When multi-pod support for the NRF Client Management service is enabled, you can ignore a Helm Test failure for the NRF-Client-Management pod.
  1. Complete the Helm test configurations under the "Helm Test Global Parameters" section of the ocnssf_custom_values_25.1.201.yaml file.
    nfName: ocnssf
    image:
      name: nf_test
      tag: 25.1.200
      registry: cgbu-cnc-comsvc-release-docker.dockerhub-phx.oci.oraclecorp.com/cgbu-ocudr-nftest
    config:
      logLevel: WARN
      timeout: 120      #Beyond this duration helm test will be considered failure
    resources:
    - horizontalpodautoscalers/v1
    - deployments/v1
    - configmaps/v1
    - prometheusrules/v1
    - serviceaccounts/v1
    - poddisruptionbudgets/v1
    - roles/v1
    - statefulsets/v1
    - persistentvolumeclaims/v1
    - services/v1
    - rolebindings/v1
    complianceEnable: true

    For more information on Helm test parameters, see Global Parameters.

  2. Run the following command to perform the Helm test:
    helm test <release_name> -n <namespace>

    Where,

    <release_name> is the release name.

    <namespace> is the deployment namespace where NSSF is installed.

    For example:

    helm test ocnssf -n ocnssf

    Sample output:

    NAME: ocnssf
    LAST DEPLOYED: Fri Sep 18 10:08:03 2020 
    NAMESPACE: ocnssf
    STATUS: deployed
    REVISION: 1
    TEST SUITE: ocnssf-test
    Last Started: Fri Sep 18 10:41:25 2020 
    Last Completed: Fri Sep 18 10:41:34 2020 
    Phase: Succeeded 
    NOTES:
    # Copyright 2020 (C), Oracle and/or its affiliates. All rights reserved 

    If the Helm test fails, see Oracle Communications Cloud Native Core, Network Slice Selection Function Troubleshooting Guide.

2.2.3.3 Taking a Backup
Take a backup of the following files, which are required during fault recovery:
  • Updated ocnssf_custom_values_25.1.201.yaml file
  • Updated Helm charts
  • Secrets, certificates, and keys that are used during installation
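
For example, a minimal backup sketch (the destination directory and output file name are assumptions; adjust them to your environment) is to copy the custom values file and export the secrets from the NSSF namespace:

cp ocnssf_custom_values_25.1.201.yaml /backup/
kubectl get secrets -n ocnssf -o yaml > /backup/ocnssf-secrets-backup.yaml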