2 Installing NSSF
This chapter provides information about installing Oracle Communications Cloud Native Core, Network Slice Selection Function (NSSF) in a cloud native environment.
Note:
NSSF supports fresh installation, and it can also be upgraded from 23.3.x to 23.4.x. For more information on how to upgrade NSSF, see the Upgrading NSSF section.
2.1 Prerequisites
Before installing and configuring NSSF, ensure that the following prerequisites are met.
2.1.1 Software Requirements
This section lists the software that must be installed before installing NSSF:
Table 2-1 Preinstalled Software
Software | Version |
---|---|
Kubernetes | 1.27.x, 1.26.x, 1.25.x |
Helm | 3.12.3 |
Podman | 4.4.1 |
To check the versions of the preinstalled software in the cloud native environment, run the following commands:
kubectl version
helm version
podman version
The following software is available by default if NSSF is deployed in CNE. If you are deploying NSSF in any other cloud native environment, this additional software must be installed before installing NSSF.
To check the installed software, run the following command:
helm ls -A
Table 2-2 Additional Software
Software | Chart Version | Required for |
---|---|---|
OpenSearch | 2.3.0 | Logging |
Kyverno | 1.9.0 | Logging |
FluentBit | 1.9.4 | Logging |
Jaeger | 1.45.0 | Tracing |
Oracle OpenSearch Dashboard | 2.3.0 | Logging |
Elastic-curator | 5.5.4 | Logging |
Elastic-exporter | 1.1.0 | Logging |
Elastic-master | 7.9.3 | Logging |
Logs | 3.1.0 | Logging |
Grafana | 9.5.3 | Metrics |
Prometheus | 2.44.0 | Metrics |
Prometheus-kube-state-metrics | 2.5.0 | Metrics |
Prometheus-node-exporter | 1.3.1 | Metrics |
MetalLB | 0.13.11 | External IP |
Metrics-server | 0.6.0 | Metric Server |
Tracer | 1.22.0 | Tracing |
2.1.2 Environment Setup Requirements
This section describes the environment setup requirements for installing NSSF.
2.1.2.1 Client Machine Requirement
This section describes the requirements for the client machine, that is, the machine used to run deployment commands.
The client machine should have:
- Helm repository configured.
  - To add a Helm repository, run the following command:
    helm repo add <helm-repo-name> <helm-repo-address>
    Where,
    <helm-repo-name> is the name of the Helm repository.
    <helm-repo-address> is the URL of the Helm repository.
    For example:
    helm repo add ocnssf-helm-repo http://10.75.237.20:8081
  - To verify that the Helm repository has been added successfully, run the following command:
    helm repo list
    The output must show the added Helm repository in the list.
- Network access to the Helm repository and Docker image repository.
- Network access to the Kubernetes cluster.
- Required environment settings to run the kubectl, docker, and podman commands. The environment must have privileges to create a namespace in the Kubernetes cluster.
- Helm client installed with the push plugin (see the example after this list). Configure the environment so that the helm install command deploys the software in the Kubernetes cluster.
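This guide does not mandate a specific push plugin source; as an illustration, the commonly used ChartMuseum push plugin can be installed and verified as follows (confirm the plugin appropriate for your repository type before use):
helm plugin install https://github.com/chartmuseum/helm-push
helm plugin list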
2.1.2.2 Network Access Requirement
The Kubernetes cluster hosts must have network access to the following repositories:
- Local Helm repository: It contains the NSSF Helm charts.
To check if the Kubernetes cluster hosts can access the local Helm repository, run the following command:
helm repo update
- Local Docker image repository: It contains the NSSF Docker images.
To check if the Kubernetes cluster hosts can access the local Docker image repository, pull any image with an image-tag using either of the following commands:
docker pull <Docker-repo>/<image-name>:<image-tag>
podman pull <Podman-repo>/<image-name>:<image-tag>
Where,
<Docker-repo> is the IP address or host name of the Docker repository.
<Podman-repo> is the IP address or host name of the Podman repository.
<image-name> is the Docker image name.
<image-tag> is the tag assigned to the Docker image used for the NSSF pod.
For example:
docker pull CUSTOMER_REPO/oc-app-info:23.4.0
podman pull ocnssf-repo-host:5000/ocnssf/oc-app-info:23.4.0
Note:
Run the kubectl and helm commands on a system based on the deployment infrastructure. For instance, they can be run on a client machine such as a VM, server, local desktop, and so on.
2.1.2.3 Server or Space Requirement
For information about server or space requirements, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
2.1.2.4 CNE Requirement
This section is applicable only if you are installing NSSF on Cloud Native Environment (CNE).
NSSF supports CNE 23.4.x, 23.3.x, and 23.2.x.
To check the CNE version, run the following command:
echo $OCCNE_VERSION
For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
2.1.2.5 cnDBTier Requirement
NSSF supports cnDBTier 23.4.x, 23.3.x, and 23.2.x. cnDBTier must be configured and running before installing NSSF.
To install NSSF with the recommended cnDBTier resources, install cnDBTier using the ocnssf_dbtier_23.4.0_custom_values_23.4.0.yaml file provided in the ocnssf-custom-configtemplates-23_4_0_0_0 file. For information about the steps to download the ocnssf-custom-configtemplates-23_4_0_0_0 file, see Customizing NSSF.
Note:
- If cnDBTier 23.4.0 is used during installation, set the ndb_allow_copying_alter_table parameter to 'ON' in the ocnssf_dbtier_23.4.0_custom_values_23.4.0.yaml file before installing NSSF. After the NSSF installation, set the parameter back to its default value, 'OFF'.
- If you have already installed a version of cnDBTier, run the following command to upgrade your current cnDBTier installation using the ocnssf_dbtier_23.4.0_custom_values_23.4.0.yaml file:
  helm upgrade <release-name> <chart-path> -f <cndb-custom-values.yaml> -n <namespace>
  For example:
  helm upgrade mysql-cluster occndbtier/ -f ocnssf_dbtier_23.4.0_custom_values_23.4.0.yaml -n nssf-cndb
For more information about cnDBTier installation and upgrade procedure, see Oracle Communications Cloud Native Core, DBTier Installation, Upgrade, and Fault Recovery Guide.
2.1.2.6 OSO Requirement
NSSF supports Operations Services Overlay (OSO) 23.4.x for common operation services (Prometheus and components such as Alertmanager and Pushgateway) on a Kubernetes cluster that does not have these common services. For more information about OSO installation, see Oracle Communications Cloud Native Core, Operations Services Overlay Installation, Upgrade, and Fault Recovery Guide.
2.1.3 Resource Requirement
This section lists the resource requirements to install and run NSSF.
Note:
The performance and capacity of the NSSF system may vary based on the call model, feature or interface configuration, and the underlying CNE and hardware environment.
2.1.3.1 NSSF Services
The following table lists the resource requirements for NSSF services:
Table 2-3 NSSF Services
Service | Replicas | CPU Min | CPU Max | Memory Min | Memory Max | Ephemeral Storage Min (Mi) | Ephemeral Storage Max (Gi) |
---|---|---|---|---|---|---|---|
<helm-release-name>-alternate-route | 1 | 1 | 2 | 2Gi | 4Gi | 80 | 1 |
<helm-release-name>-appinfo | 1 | 200m | 200m | 1Gi | 1Gi | 80 | 1 |
<helm-release-name>-config-server | 1 | 500m | 1 | 1Gi | 1Gi | 80 | 1 |
<helm-release-name>-egress-gateway | 2 | 4 | 4 | 4Gi | 4Gi | 80 | 1 |
<helm-release-name>-ingress-gateway | 5 | 6 | 6 | 6Gi | 6Gi | 80 | 1 |
<helm-release-name>-nrf-client-nfdiscovery | 2 | 2 | 2 | 1Gi | 1Gi | 80 | 1 |
<helm-release-name>-nrf-client-nfmanagement | 2 | 1 | 1 | 1Gi | 1Gi | 80 | 1 |
<helm-release-name>-nsauditor | 1 | 500m | 2 | 512Mi | 1Gi | 80 | 1 |
<helm-release-name>-nsavailability | 2 | 4 | 4 | 4Gi | 4Gi | 80 | 1 |
<helm-release-name>-nsconfig | 1 | 2 | 2 | 2Gi | 2Gi | 80 | 1 |
<helm-release-name>-nsselection | 6 | 6 | 6 | 6Gi | 6Gi | 80 | 1 |
<helm-release-name>-nssubscription | 1 | 2 | 2 | 1Gi | 1Gi | 80 | 1 |
<helm-release-name>-perf-info | 1 | 2 | 2 | 1Gi | 1Gi | 80 | 1 |
- <helm-release-name> is prefixed to each microservice name. For example, if the Helm release name is "ocnssf", the nsselection microservice is named "ocnssf-nsselection".
- The resources of the init-service container and the Common Configuration Client Hook are not counted because these containers terminate after initialization completes.
- Helm Hook Jobs: These are pre and post jobs that are invoked during installation, upgrade, rollback, and uninstallation of the deployment. They are short-lived jobs that terminate after the deployment completes.
- Helm Test Job: This job runs on demand when the helm test command is initiated. It runs the Helm test and stops after completion. These short-lived jobs are not part of the active deployment resources and are considered only during helm test procedures.
2.1.3.2 Debug Tool Container
The Debug Tool provides third-party troubleshooting tools for debugging runtime issues in a lab environment. If Debug Tool Container injection is enabled during NSSF deployment or upgrade, this container is injected into each NSSF pod (or selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists. For more information about configuring the Debug Tool, see Oracle Communications Cloud Native Core, Network Slice Selection Function Troubleshooting Guide.
Table 2-4 Debug Tool Container
Service Name | CPU Per Pod Min | CPU Per Pod Max | Memory Per Pod (GB) Min | Memory Per Pod (GB) Max |
---|---|---|---|---|
Helm test | 0 | 0 | 0 | 0 |
Helm Hook | 0 | 0 | 0 | 0 |
<helm-release-name>-nsselection | 0.5 | 1 | 1 | 2 |
<helm-release-name>-nsavailability | 0.5 | 1 | 1 | 2 |
<helm-release-name>-nssubscription | 0.5 | 1 | 1 | 2 |
<helm-release-name>-nsauditor | 0.5 | 1 | 1 | 2 |
<helm-release-name>-nsconfiguration | 0.5 | 1 | 1 | 2 |
<helm-release-name>-nrf-client-nfdiscovery | 0.5 | 1 | 1 | 2 |
<helm-release-name>-nrf-client-nfmanagement | 0.5 | 1 | 1 | 2 |
<helm-release-name>-ingressgateway | 0.5 | 1 | 1 | 2 |
<helm-release-name>-egressgateway | 0.5 | 1 | 1 | 2 |
<helm-release-name>-config-server | 0.5 | 1 | 1 | 2 |
<helm-release-name>-alternate-route | 0.5 | 1 | 1 | 2 |
<helm-release-name>-appinfo | 0.5 | 1 | 1 | 2 |
<helm-release-name>-perfinfo | 0.5 | 1 | 1 | 2 |
Note:
<helm-release-name> is the Helm release name. For example, if the Helm release name is "ocnssf", the nsselection microservice is named "ocnssf-nsselection".
2.1.3.3 ASM Sidecar
NSSF leverages the Platform Service Mesh (for example, Aspen Service Mesh) for all internal and external TLS communication. If ASM sidecar injection is enabled during NSSF deployment or upgrade, the sidecar container is injected into each NSSF pod (or selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists. For more information about installing ASM, see Configuring NSSF to support Aspen Service Mesh.
Table 2-5 ASM Sidecar
Pod Name | Pod Count | CPU Min | CPU Max | Memory Min | Memory Max |
---|---|---|---|---|---|
<helm-release-name>-alternate-route | 1 | 250m | 250m | 512Mi | 512Mi |
<helm-release-name>-appinfo | 1 | 250m | 250m | 512Mi | 512Mi |
<helm-release-name>-egress-gateway | 2 | 250m | 250m | 512Mi | 512Mi |
<helm-release-name>-ingress-gateway | 5 | 3 | 3 | 512Mi | 512Mi |
<helm-release-name>-nsauditor | 1 | 250m | 250m | 512Mi | 512Mi |
<helm-release-name>-nsavailability | 2 | 250m | 250m | 512Mi | 512Mi |
<helm-release-name>-nsconfig | 1 | 250m | 250m | 512Mi | 512Mi |
<helm-release-name>-nsselection | 6 | 2 | 2 | 512Mi | 512Mi |
<helm-release-name>-nssubscription | 1 | 250m | 250m | 512Mi | 512Mi |
<helm-release-name>-nrf-client-nfdiscovery | 2 | 250m | 250m | 512Mi | 512Mi |
<helm-release-name>-nrf-client-nfmanagement | 2 | 250m | 250m | 512Mi | 512Mi |
<helm-release-name>-perf-info | 1 | 250m | 250m | 512Mi | 512Mi |
Note:
<helm-release-name> is the Helm release name. For example, if the Helm release name is "ocnssf", the nsselection microservice is named "ocnssf-nsselection".
2.1.3.4 Upgrade
The following table lists the resource requirements for upgrading NSSF.
Table 2-6 Upgrade
Service Name | Pod Replicas Min | Pod Replicas Max | CPU Per Pod Min | CPU Per Pod Max | Memory Per Pod (GB) Min | Memory Per Pod (GB) Max |
---|---|---|---|---|---|---|
Helm test | 0 | 0 | 0 | 0 | 0 | 0 |
Helm Hook | 0 | 0 | 0 | 0 | 0 | 0 |
<helm-release-name>-nsselection | 1 | 2 | 2 | 2 | 2 | 2 |
<helm-release-name>-nsavailability | 1 | 2 | 4 | 4 | 2 | 2 |
<helm-release-name>-nssubscription | 1 | 2 | 2 | 2 | 2 | 2 |
<helm-release-name>-nsauditor | 1 | 1 | 6 | 6 | 3 | 3 |
<helm-release-name>-nsconfiguration | 1 | 1 | 2 | 2 | 2 | 2 |
<helm-release-name>-nrf-client-nfdiscovery | 1 | 2 | 2 | 2 | 2 | 2 |
<helm-release-name>-nrf-client-nfmanagement | 1 | 1 | 4 | 4 | 2 | 2 |
<helm-release-name>-ingressgateway | 1 | 2 | 6 | 6 | 4 | 4 |
<helm-release-name>-egressgateway | 1 | 2 | 6 | 6 | 4 | 4 |
<helm-release-name>-config-server | 1 | 2 | 2 | 2 | 4 | 4 |
<helm-release-name>-alternate-route | 1 | 2 | 2 | 2 | 4 | 4 |
<helm-release-name>-appinfo | 1 | 1 | 1 | 1 | 1 | 1 |
<helm-release-name>-perfinfo | 2 | 2 | 1 | 1 | 1 | 1 |
Note:
<helm-release-name> is the Helm release name. For example, if the Helm release name is "ocnssf", the nsselection microservice is named "ocnssf-nsselection".
2.1.3.5 Common Services Container
The following table lists the resource requirements for the Common Services Container.
Table 2-7 Common Services Container
Container Name | CPU | Memory (GB) | Kubernetes Init Container |
---|---|---|---|
init-service | 1 | 1 | Y |
update-service | 1 | 1 | N |
common_config_hook | 1 | 1 | N |
- Update Container service: Ingress or Egress Gateway services use this container service to periodically refresh private keys, the CA root certificate for TLS, and other certificates for NSSF.
- Init Container service: Ingress or Egress Gateway services use this container to get private keys, the CA root certificate for TLS, and other certificates for NSSF during startup.
- Common Configuration Hook: It is used to create the database for common service configuration.
2.1.3.6 NSSF Hooks
The following table lists the resource requirements for NSSF hooks.
Table 2-8 NSSF Hooks
Hook Name | CPU Per Pod Min | CPU Per Pod Max | Memory Per Pod (Mi) Min | Memory Per Pod (Mi) Max |
---|---|---|---|---|
<helm-release-name>-nsconfig-pre-install | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsconfig-post-install | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsconfig-pre-upgrade | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsconfig-post-upgrade | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsconfig-pre-rollback | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsconfig-post-rollback | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsconfig-pre-delete | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsconfig-post-delete | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsselection-pre-install | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsselection-post-install | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsselection-pre-upgrade | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsselection-post-upgrade | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsselection-pre-rollback | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsselection-post-rollback | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsselection-pre-delete | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsselection-post-delete | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsavailability-pre-install | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsavailability-post-install | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsavailability-pre-upgrade | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsavailability-post-upgrade | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsavailability-pre-rollback | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsavailability-post-rollback | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsavailability-pre-delete | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsavailability-post-delete | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nssubscription-pre-install | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nssubscription-post-install | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nssubscription-pre-upgrade | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nssubscription-post-upgrade | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nssubscription-pre-rollback | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nssubscription-post-rollback | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nssubscription-pre-delete | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nssubscription-post-delete | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsauditor-pre-install | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsauditor-post-install | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsauditor-pre-upgrade | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsauditor-post-upgrade | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsauditor-pre-rollback | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsauditor-post-rollback | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsauditor-pre-delete | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-nsauditor-post-delete | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-alternate-route-post-install | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-alternate-route-pre-upgrade | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-alternate-route-post-upgrade | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-alternate-route-pre-rollback | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-alternate-route-post-rollback | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-alternate-route-pre-delete | 0.25 | 0.5 | 256 | 512 |
<helm-release-name>-alternate-route-post-delete | 0.25 | 0.5 | 256 | 512 |
Note:
<helm-release-name> is the Helm release name. For example, if the Helm release name is "ocnssf", the nsselection microservice is named "ocnssf-nsselection".
2.1.3.7 CNC Console Resources
Oracle Communications Cloud Native Configuration Console (CNC Console) is a Graphical User Interface (GUI) for NFs and Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) common services. For information about CNC Console resources required by NSSF, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.
2.1.3.8 cnDBTier Resources
The following table lists the resource requirements for cnDBTier:
Table 2-9 cnDBTier Resources
cnDBTier Pods | Replicas | vCPU Min | vCPU Max | Memory Min | Memory Max | Ephemeral Storage Min (Mi) | Ephemeral Storage Max (Gi) |
---|---|---|---|---|---|---|---|
ndbmysqld | 2 | 8 | 8 | 10Gi | 10Gi | 90 | 1 |
ndbappmysqld | 4 | 8 | 8 | 10Gi | 10Gi | 90 | 1 |
ndbmgmd | 2 | 4 | 4 | 10Gi | 10Gi | NA | NA |
ndbmtd | 4 | 10 | 10 | 18Gi | 18Gi | 90 | 1 |
db-backup-manager-svc | 1 | 0.1 | 0.1 | 128Mi | 128Mi | NA | NA |
db-replication-svc | 1 | 2 | 2 | 12Gi | 12Gi | 90 | 1 |
db-monitor-svc | 1 | 1 | 1 | 1Gi | 1Gi | NA | NA |
2.2 Installation Sequence
This section describes the preinstallation, installation, and postinstallation tasks for NSSF.
2.2.1 Preinstallation Tasks
Before installing NSSF, perform the tasks described in this section.
2.2.1.1 Downloading the NSSF Package
To download the NSSF package from My Oracle Support (MOS), perform the following steps:
- Log in to My Oracle Support using the appropriate credentials.
- Select the Patches & Updates tab.
- In the Patch Search console, select the Product or Family (Advanced) option.
- Enter Oracle Communications Cloud Native Core - 5G in the Product field and select the product from the Product drop-down list.
- From the Release drop-down list, select "Oracle Communications Cloud Native Core Network Slice Selection Function <release_number>".
  Where, <release_number> indicates the required release number of NSSF.
- Click Search. The Patch Advanced Search Results list appears.
- Select the required patch from the list. The Patch Details window appears.
- Click Download. The File Download window appears.
- Click <p********_<release_number>_Tekelec>.zip to download the release package.
2.2.1.2 Pushing the Images to Customer Docker Registry
The NSSF deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.
The following table lists the Docker images of NSSF:
Table 2-11 Images for NSSF
Service Name | Image Name | Image Tag |
---|---|---|
<helm-release-name>-nsauditor | ocnssf-nsauditor | 23.4.0 |
<helm-release-name>-nssubscription | ocnssf-nssubscription | 23.4.0 |
<helm-release-name>-nsselection | ocnssf-nsselection | 23.4.0 |
<helm-release-name>-nsconfiguration | ocnssf-nsconfig | 23.4.0 |
<helm-release-name>-nsavailability | ocnssf-nsavailability | 23.4.0 |
<helm-release-name>-ingressgateway | ocingress_gateway | 23.4.3 |
<helm-release-name>-configurationinit | configurationinit | 23.4.3 |
<helm-release-name>-configurationupdate | configurationupdate | 23.4.3 |
<helm-release-name>-egressgateway | ocegress_gateway | 23.4.3 |
<helm-release-name>-common_config_hook | common_config_hook | 23.4.3 |
<helm-release-name>-alternate-route | alternate-route | 23.4.3 |
<helm-release-name>-nrf-client | Nrf-client | 23.4.2 |
<helm-release-name>-appinfo | occnp/oc-app-info | 23.4.1 |
<helm-release-name>-perfinfo | occnp/oc-perf-info | 23.4.1 |
<helm-release-name>-oc-config-server | occnp/oc-config-server | 23.4.0 |
<helm-release-name>-debug-tool | ocdebug-tools | 23.4.0 |
<helm-release-name>-helm-test | helm-test | 23.4.0 |
To push the images to the registry:
- Unzip the release package to the location where you want to install NSSF. The
NSSF package is as follows:
ocnssf_pkg_23_4_0_0_0.tgz
- Untar the NSSF package zip file to get the NSSF image tar file:
  tar -xvzf ocnssf_pkg_23_4_0_0_0.tgz
  The directory consists of the following files:
  - ocnssf-23.4.0.0.0.tgz: Helm charts
  - ocnssf-23.4.0.0.0.tgz.sha256: Checksum for the Helm chart tgz file
  - ocnssf-images-23.4.0.0.0.tar: NSSF images file
  - ocnssf-images-23.4.0.0.0.tar.sha256: Checksum for the images tar file
  - ocnssf-servicemesh-config-23.4.0.0.0.tgz: Service mesh configuration chart
  - ocnssf-servicemesh-config-23.4.0.0.0.tgz.sha256: Checksum for the service mesh configuration tgz chart
  - ocnssf_limit_range.yaml: Limit range configuration file used to limit the resource quota
  - ocnssf_resource_quota.yaml: Resource quota configuration file used to define the resource quota
  - Readme.txt: Readme text file
- Run one of the following commands to load the ocnssf-images-23.4.0.0.0.tar file:
  docker load --input /IMAGE_PATH/ocnssf-images-23.4.0.0.0.tar
  podman load --input /IMAGE_PATH/ocnssf-images-23.4.0.0.0.tar
- Run one of the following commands to verify that the images are loaded:
  docker images
  podman images
  Compare the list of images shown in the output with the list of images in Table 2-11. If the lists do not match, reload the image tar file.
- Run one of the following commands to tag each imported image to the
registry:
docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
- Run one of the following commands to push the image to the registry (see the example after this procedure):
docker push <docker-repo>/<image-name>:<image-tag>
podman push <docker-repo>/<image-name>:<image-tag>
Note:
It is recommended to configure the Docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the docker push command may fail.
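For illustration, assuming the customer registry is reachable at ocnssf-repo-host:5000 (the example registry host used earlier in this guide), the nsselection image from Table 2-11 would be tagged and pushed as follows:
# Tag the loaded image for the customer registry (registry host is an example)
podman tag ocnssf-nsselection:23.4.0 ocnssf-repo-host:5000/ocnssf/ocnssf-nsselection:23.4.0
# Push the tagged image to the customer registry
podman push ocnssf-repo-host:5000/ocnssf/ocnssf-nsselection:23.4.0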
2.2.1.3 Verifying and Creating Namespace
This section explains how to verify and create a namespace in the system.
Note:
This is a mandatory procedure; run it before proceeding further with the installation. The namespace created or verified in this procedure is an input for the subsequent procedures.
- Run the following command to verify whether the required namespace already exists in the system:
  kubectl get namespace
  If the namespace exists in the output of the above command, continue with Creating Service Account, Role, and RoleBinding.
- If the required namespace is unavailable, create the namespace using the following command:
  kubectl create namespace <required namespace>
  Where,
  <required namespace> is the namespace to be used for the NSSF installation.
  For example, the following command creates the namespace ocnssf:
  kubectl create namespace ocnssf
  Sample output:
  namespace/ocnssf created
- Update the global.nameSpace parameter in the ocnssf_custom_values_23.4.0.yaml file with the namespace created in the previous step. Here is a sample configuration snippet from the ocnssf_custom_values_23.4.0.yaml file:
  global:
    # NameSpace where secret is deployed
    nameSpace: ocnssf
Naming Convention for Namespace
The namespace should:
- start and end with an alphanumeric character
- contain 63 characters or less
- contain only alphanumeric characters or '-'
Note:
It is recommended to avoid using the prefix kube- when creating a namespace. The prefix is reserved for Kubernetes system namespaces.
2.2.1.4 Creating Service Account, Role, and RoleBinding
This section is optional. It describes how to manually create a service account, role, and rolebinding. It is required only when the customer needs to create a role, rolebinding, and service account manually before installing NSSF.
Note:
The secret(s) should exist in the same namespace where NSSF is being deployed. This helps to bind the Kubernetes role with the given service account.
Creating Service Account, Role, and RoleBinding
- Run the following command to create an NSSF resource file:
  vi <ocnssf-resource-file>
  Where,
  <ocnssf-resource-file> is the name of the resource file.
  Example:
  vi ocnssf-resource-template.yaml
- Update the ocnssf-resource-template.yaml file with release-specific information:
  Note:
  Update <helm-release> and <namespace> with the respective NSSF Helm release name and NSSF namespace.
  A sample template to update the ocnssf-resource-template.yaml file is given below:
  ## Sample template start#
  # Copyright 2018 (C), Oracle and/or its affiliates. All rights reserved.
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: <helm-release>-ocnssf-serviceaccount
    namespace: <namespace>
    labels:
      {{- include "labels.allResources" . }}
    annotations:
      {{- include "annotations.allResources" . }}
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: <helm-release>-ocnssf-role
    namespace: <namespace>
    labels:
      {{- include "labels.allResources" . }}
    annotations:
      {{- include "annotations.allResources" . }}
  rules:
  - apiGroups:
    - "" # "" indicates the core API group
    resources:
    - services
    - configmaps
    - pods
    - secrets
    - endpoints
    - persistentvolumeclaims
    - serviceaccounts
    verbs:
    - get
    - watch
    - list
    - update
  - apiGroups:
    - policy
    resources:
    - poddisruptionbudgets
    verbs:
    - get
    - watch
    - list
    - update
  - apiGroups:
    - apps
    resources:
    - deployments
    - statefulsets
    verbs:
    - get
    - watch
    - list
    - update
  - apiGroups:
    - autoscaling
    resources:
    - horizontalpodautoscalers
    verbs:
    - get
    - watch
    - list
    - update
  - apiGroups:
    - rbac.authorization.k8s.io
    resources:
    - roles
    - rolebindings
    verbs:
    - get
    - watch
    - list
    - update
  - apiGroups:
    - monitoring.coreos.com
    resources:
    - prometheusrules
    verbs:
    - get
    - watch
    - list
    - update
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: <helm-release>-ocnssf-rolebinding
    namespace: <namespace>
    labels:
      {{- include "labels.allResources" . }}
    annotations:
      {{- include "annotations.allResources" . }}
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: <helm-release>-ocnssf-role
  subjects:
  - kind: ServiceAccount
    name: <helm-release>-ocnssf-serviceaccount
    namespace: <namespace>
  ---
  ## Sample template end#
- Run the following command to create the service account, role, and rolebinding (a verification example follows this procedure):
  $ kubectl -n <namespace> create -f ocnssf-resource-template.yaml
  Where,
  <namespace> is the namespace where NSSF is deployed.
  Example:
  $ kubectl -n ocnssf create -f ocnssf-resource-template.yaml
- Update the serviceAccountName parameter in the ocnssf_custom_values_23.4.0.yaml file with the value of the name field under kind: ServiceAccount. For more information about the serviceAccountName parameter, see the Global Parameters section.
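To confirm that the resources were created, a quick check such as the following can be used (namespace as per the example above):
kubectl get serviceaccount,role,rolebinding -n ocnssf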
Note:
The PodSecurityPolicy kind is required for the Pod Security Policy service account. For more information, see Oracle Communications Cloud Native Core, Network Slice Selection Function Troubleshooting Guide.
2.2.1.5 Configuring Database, Creating Users, and Granting Permissions
This section explains how database administrators can create users and databases in single-site and multisite deployments.
NSSF has five databases (Provisional, State, Release, Leaderpod, and NRF Client Database) and two users (Application and Privileged).
Note:
- Before running the procedure for georedundant sites, ensure that the cnDBTier for georedundant sites is up and replication channels are enabled.
- While performing a fresh installation, if an NSSF release is already deployed, purge the deployment and remove the database and users that were used for the previous deployment. For the uninstallation procedure, see Uninstalling NSSF.
NSSF Database
- Provisional Database: Provisional Database contains configuration
information. The same configuration must be done on each site by the operator. Both
Privileged User and Application User have access to this database. In case of multisite
georedundant setups, each site must have a unique Provisional Database. NSSF sites can
access only the information in their unique Provisional Database.
For example:
- For Site 1: nssfProvSite1DB
- For Site 2: nssfProvSite2DB
- For Site 3: nssfProvSite3DB
- State Database: This database maintains the running state of NSSF sites and has information of subscriptions, pending notification triggers, and availability data. It is replicated and the same configuration is maintained by all NSSF georedundant sites. Both Privileged User and Application User have access to this database.
- Release Database: This database maintains release version state, and it is used during upgrade and rollback scenarios. Only Privileged User has access to this database.
- Leaderpod Database: This database stores leader and follower information when PDB (Pod Disruption Budget) is enabled for microservices that require a single pod to be up across all instances. The configuration of this database must be done on each site. In case of georedundant deployments, each site must have a unique Leaderpod database.
  For example:
  - For Site 1: LeaderPod1Db
  - For Site 2: LeaderPod2Db
  - For Site 3: LeaderPod3Db
  Note:
  This database is used only when nrf-client-nfmanagement.enablePDBSupport is set to true in the ocnssf_custom_values_23.4.0.yaml file. For more information, see NRF Client.
- NRF Client Database: This database is used to store
discovery cache tables, and it also supports NRF Client features. Only Privileged User
has access to this database and it is used only when the caching feature is enabled. In
case of georedundant deployments, each site must have a unique NRF Client database and
its configuration must be done on each site.
For example:
- For Site 1: nrf_client_db1
- For Site 2: nrf_client_db2
- For Site 3: nrf_client_db3
NSSF Users
There are two types of NSSF database users with different sets of permissions:
- Privileged User: This user has a complete set of permissions. This user can perform create, alter, or drop operations on tables to perform install, upgrade, rollback, or delete operations.
- Application User: This user has a limited set of permissions and is used by NSSF application to handle service operations. This user can insert, update, get, or remove the records. This user will not be able to create, alter, or drop the database or tables.
Note:
In the examples given in this document:
- The Application User's username is 'nssfusr' and password is 'nssfpasswd'.
- The Privileged User's username is 'nssfprivilegedusr' and password is 'nssfpasswd'.
2.2.1.5.1 Single Site
This section explains how a database administrator can create the databases and users for a single-site deployment.
- Log in to the machine where the SSH keys are stored and that has permission to access the SQL nodes of the NDB cluster.
- Connect to the SQL nodes.
- Log in to the MySQL prompt using root permission, or log in as a user who has the permission to create users as per the conditions explained in the next step.
For example:
mysql -h 127.0.0.1 -uroot -p
Note:
This command varies from system to system with respect to the MySQL binary path, root user, and root password. After running this command, enter the password specific to the user mentioned in the command.
- Run the following command to check if both the NSSF users already exist:
  $ SELECT User FROM mysql.user;
  If the users already exist, go to the next step. Otherwise, create the respective user or users by following the steps below:
  - Run the following command to create the Privileged User:
    $ CREATE USER '<NSSF Privileged Username>'@'%' IDENTIFIED BY '<NSSF Privileged User Password>';
    Where,
    <NSSF Privileged Username> is the username of the Privileged User.
    <NSSF Privileged User Password> is the password of the Privileged User.
    For example:
    $ CREATE USER 'nssfprivilegedusr'@'%' IDENTIFIED BY 'nssfpasswd';
  - Run the following command to create the Application User:
    $ CREATE USER '<NSSF Application Username>'@'%' IDENTIFIED BY '<NSSF Application User Password>';
    Where,
    <NSSF Application Username> is the username of the Application User.
    <NSSF Application User Password> is the password of the Application User.
    For example:
    $ CREATE USER 'nssfusr'@'%' IDENTIFIED BY 'nssfpasswd';
  Note:
  You must create both the users on all the SQL nodes for all georedundant sites.
- Run the following command to check whether any of the NSSF databases already exist:
  $ SHOW DATABASES;
- If any of the previously configured databases are already present, remove them. Otherwise, skip this step.
  Caution:
  In case you have a multisite georedundant setup configured, removal of the database from any one of the SQL nodes of any cluster removes the database from all georedundant sites.
  Run the following command to remove a preconfigured NSSF database:
  $ DROP DATABASE IF EXISTS <DB Name>;
  Where,
  <DB Name> is the database name.
  For example, run the following command if the State Database already exists:
  $ DROP DATABASE IF EXISTS nssfStateDB;
- Run the following command to create a new NSSF database if it does not exist, or after dropping an existing database:
  $ CREATE DATABASE IF NOT EXISTS <DB Name> CHARACTER SET latin1;
  For example, the following is a sample illustration for creating all five databases required for the NSSF installation:
  $ CREATE DATABASE IF NOT EXISTS nssfStateDB CHARACTER SET latin1;
  $ CREATE DATABASE IF NOT EXISTS nssfProvSite1DB CHARACTER SET latin1;
  $ CREATE DATABASE IF NOT EXISTS ocnssfReleaseDB CHARACTER SET latin1;
  $ CREATE DATABASE IF NOT EXISTS LeaderPodDb CHARACTER SET latin1;
  $ CREATE DATABASE IF NOT EXISTS nrf_client_db CHARACTER SET latin1;
  Note:
  Ensure that you use the same database names while creating the databases that you have used in the global parameters of the ocnssf_custom_values_23.4.0.yaml file. The following is an example of the five NSSF database names configured in the ocnssf_custom_values_23.4.0.yaml file:
  global.stateDbName: nssfStateDB
  global.provisionDbName: nssfProvSite1DB
  global.releaseDbName: ocnssfReleaseDB
  global.leaderPodDbName: LeaderPodDb
  nrfClientDbName: nrf_client_db
  Hence, if you want to create any of these five databases, you must ensure that you create them with the same names as configured in the ocnssf_custom_values_23.4.0.yaml file; in this case: nssfStateDB, nssfProvSite1DB, ocnssfReleaseDB, LeaderPodDb, and nrf_client_db.
- Grant permissions to users on the databases:
  Note:
  - Run this step on all the SQL nodes for each NSSF standalone site in a multisite georedundant setup.
  - Creating the database beforehand is optional if the grant is scoped to all databases, that is, if the database name is not mentioned in the grant command.
- Run the following command to grant NDB_STORED_USER permissions
to the Privileged
User:
GRANT NDB_STORED_USER ON *.* TO 'nssfprivilegedusr'@'%';
- Run the following commands to grant Privileged User permission
on Provisional, State, Release, Leaderpod, and NRF Client databases:
  - Privileged User on Provisional Database:
    $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.* TO `<NSSF Privileged Username>`@`%`;
    For example:
    $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON nssfProvSite1DB.* TO `nssfprivilegedusr`@`%`;
  - Privileged User on State Database:
    $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.* TO `<NSSF Privileged Username>`@`%`;
    For example:
    $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON nssfStateDB.* TO `nssfprivilegedusr`@`%`;
  - Privileged User on Release Database:
    $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.* TO `<NSSF Privileged Username>`@`%`;
    For example:
    $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON ocnssfReleaseDB.* TO `nssfprivilegedusr`@`%`;
  - Privileged User on NSSF Leaderpod Database:
    $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.* TO `<NSSF Privileged Username>`@`%`;
    For example:
    $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON LeaderPodDb.* TO `nssfprivilegedusr`@`%`;
  - Privileged User on NRF Client Database:
    $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON <DB Name>.* TO '<NSSF Privileged Username>'@'%';
    For example:
    $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON nrf_client_db.* TO 'nssfprivilegedusr'@'%';
- Run the following command to grant NDB_STORED_USER permissions
to the Application
User:
GRANT NDB_STORED_USER ON *.* TO 'nssfusr'@'%';
- Run the following commands to grant Application User permission
on Provisional Database and State Database:
- Application User on Provisional
Database:
$ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON <DB Name>.* TO '<Application User Name>'@'%';
For example:
$ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON nssfProvSite1DB.* TO 'nssfusr'@'%';
- Application User on State
Database:
$ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON <DB Name>.* TO '<Application User Name>'@'%';
For example:
$ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON nssfStateDB.* TO 'nssfusr'@'%';
- Run the following command to flush
privileges:
FLUSH PRIVILEGES;
- Exit from the MySQL prompt and SQL nodes. (An optional grant verification check follows this procedure.)
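As an optional check, the grants created in this procedure can be verified with standard MySQL statements, for example:
$ SHOW GRANTS FOR 'nssfusr'@'%';
$ SHOW GRANTS FOR 'nssfprivilegedusr'@'%';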
2.2.1.5.2 Multisite
This section explains how a database administrator can create the databases and users for a multisite deployment.
Note:
For multisite georedundant setups, change the parameter values of the site-unique databases (provisionDbName, leaderPodDbName, and nrfClientDbName) in the ocnssf_custom_values_23.4.0.yaml file. For example, change the values as mentioned below for a two-site and a three-site setup, respectively:
Two-site setup:
- Change the value of global.provisionDbName to nssfProvSite1DB and nssfProvSite2DB for Site 1 and Site 2, respectively.
- Change the value of global.leaderPodDbName to LeaderPod1Db and LeaderPod2Db for Site 1 and Site 2, respectively.
- Change the value of global.nrfClientDbName to nrf_client_db1 and nrf_client_db2 for Site 1 and Site 2, respectively.
Three-site setup:
- Change the value of global.provisionDbName to nssfProvSite1DB, nssfProvSite2DB, and nssfProvSite3DB for Site 1, Site 2, and Site 3, respectively.
- Change the value of global.leaderPodDbName to LeaderPod1Db, LeaderPod2Db, and LeaderPod3Db for Site 1, Site 2, and Site 3, respectively.
- Change the value of global.nrfClientDbName to nrf_client_db1, nrf_client_db2, and nrf_client_db3 for Site 1, Site 2, and Site 3, respectively.
- Log in to the machine where the SSH keys are stored and that has permission to access the SQL nodes of the NDB cluster.
- Connect to the SQL nodes.
- Log in to the MySQL prompt using root permission, or log in as a user who has the permission to create users as per the conditions explained in the next step.
For example:
mysql -h 127.0.0.1 -uroot -p
Note:
This command varies from system to system with respect to the MySQL binary path, root user, and root password. After running this command, enter the password specific to the user mentioned in the command.
- Run the following command to check if both the NSSF users already exist:
  $ SELECT User FROM mysql.user;
  If the users already exist, go to the next step. Otherwise, create the respective new user or users by following the steps below:
  - Run the following command to create a new Privileged User:
    $ CREATE USER '<NSSF Privileged Username>'@'%' IDENTIFIED BY '<NSSF Privileged User Password>';
    For example:
    $ CREATE USER 'nssfprivilegedusr'@'%' IDENTIFIED BY 'nssfpasswd';
  - Run the following command to create a new NSSF Application User:
    $ CREATE USER '<NSSF Application Username>'@'%' IDENTIFIED BY '<NSSF Application Password>';
    For example:
    $ CREATE USER 'nssfusr'@'%' IDENTIFIED BY 'nssfpasswd';
  Note:
  You must create both users on all the SQL nodes for all georedundant sites.
- Run the following command to check whether any of the NSSF databases already exist:
  $ SHOW DATABASES;
- If any of the previously configured databases are already present, remove them. Otherwise, skip this step.
  Caution:
  In case you have a multisite georedundant setup configured, removal of the database from any one of the SQL nodes of any cluster removes the database from all georedundant sites.
  Run the following command to remove a preconfigured NSSF database:
  $ DROP DATABASE IF EXISTS <DB Name>;
  For example, run the following command if you find that the State Database already exists:
  $ DROP DATABASE IF EXISTS nssfStateDB;
- Run the following command to create a new database for NSSF if it does not exist, or after dropping a database:
  $ CREATE DATABASE IF NOT EXISTS <DB Name> CHARACTER SET latin1;
  For example, the following is a sample illustration for creating all five databases required for the NSSF installation:
  $ CREATE DATABASE IF NOT EXISTS nssfStateDB CHARACTER SET latin1;
  $ CREATE DATABASE IF NOT EXISTS nssfProvSite1DB CHARACTER SET latin1;
  $ CREATE DATABASE IF NOT EXISTS ocnssfReleaseDB CHARACTER SET latin1;
  $ CREATE DATABASE IF NOT EXISTS LeaderPod1Db CHARACTER SET latin1;
  $ CREATE DATABASE IF NOT EXISTS nrf_client_db1 CHARACTER SET latin1;
  Note:
  Ensure that you use the same database names while creating the databases that you have used in the global parameters of the ocnssf_custom_values_23.4.0.yaml file. The following is an example of the five database names configured in the ocnssf_custom_values_23.4.0.yaml file:
  global.stateDbName: nssfStateDB
  global.provisionDbName: nssfProvSite1DB
  global.releaseDbName: ocnssfReleaseDB
  global.leaderPodDbName: LeaderPod1Db
  nrfClientDbName: nrf_client_db1
  Hence, if you want to create any of these five databases, you must ensure that you create them with the same names as configured in the ocnssf_custom_values_23.4.0.yaml file; in this case: nssfStateDB, nssfProvSite1DB, ocnssfReleaseDB, LeaderPod1Db, and nrf_client_db1.
- Grant permissions to users on the databases:
  Note:
  - Run this step on all the SQL nodes for each NSSF standalone site in a multisite georedundant setup.
  - Creating the database beforehand is optional if the grant is scoped to all databases, that is, if the database name is not mentioned in the grant command.
- Run the following command to grant NDB_STORED_USER permissions
to the Privileged
User:
GRANT NDB_STORED_USER ON *.* TO 'nssfprivilegedusr'@'%';
- Run the following commands to grant Privileged User permission
on Provisional, State, Release, Leaderpod, and NRF Client databases:
  - Privileged User on Provisional Database:
    $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.* TO `<NSSF Privileged Username>`@`%`;
    Example Site 1:
    $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON nssfProvSite1DB.* TO `nssfprivilegedusr`@`%`;
    Example Site 2:
    $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON nssfProvSite2DB.* TO `nssfprivilegedusr`@`%`;
  - Privileged User on State Database:
    $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.* TO `<NSSF Privileged Username>`@`%`;
    For example:
    $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON nssfStateDB.* TO `nssfprivilegedusr`@`%`;
  - Privileged User on NSSF Release Database:
    $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.* TO `<NSSF Privileged Username>`@`%`;
    For example:
    $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON ocnssfReleaseDB.* TO `nssfprivilegedusr`@`%`;
  - Privileged User on NSSF Leaderpod Database:
    $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.* TO `<NSSF Privileged Username>`@`%`;
    Example Site 1:
    $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON LeaderPod1Db.* TO `nssfprivilegedusr`@`%`;
    Example Site 2:
    $ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON LeaderPod2Db.* TO `nssfprivilegedusr`@`%`;
  - Privileged User on NRF Client Database:
    $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON <DB Name>.* TO '<NSSF Privileged Username>'@'%';
    Example Site 1:
    $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON nrf_client_db1.* TO 'nssfprivilegedusr'@'%';
    Example Site 2:
    $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON nrf_client_db2.* TO 'nssfprivilegedusr'@'%';
- Run the following command to grant NDB_STORED_USER permissions
to the Application
User:
GRANT NDB_STORED_USER ON *.* TO 'nssfusr'@'%';
- Run the following commands to grant Application User permission
on Provisional Database and State Database:
- NSSF Application User on Provisional
Database:
$ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON <DB Name>.* TO '<NSSF APPLICATION Username>'@'%';
Example: Site 1
$ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON nssfProvSite1DB.* TO 'nssfusr'@'%';
Example Site 2:
$ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON nssfProvSite2DB.* TO 'nssfusr'@'%';
- NSSF Application User on State
Database:
$ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON <DB Name>.* TO '<NSSF APPLICATION Username>'@'%';
For example:
$ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON nssfStateDB.* TO 'nssfusr'@'%';
- Run the following command to grant read permission to NSSF
Application User for
replication_info:
$ GRANT SELECT ON replication_info.* TO '<NSSF APPLICATION Username>'@'%';
For example:
$ GRANT SELECT ON replication_info.* TO 'nssfusr'@'%';
- Run the following command to grant read permission to
Privileged User for
replication_info:
$ GRANT SELECT ON replication_info.* TO '<NSSF Privileged Username>'@'%';
For example:
$ GRANT SELECT ON replication_info.* TO 'nssfprivilegedusr'@'%';
- Run the following command to flush
privileges:
FLUSH PRIVILEGES;
- Exit from MySQL prompt and SQL nodes.
2.2.1.6 Configuring Resource Quota and Limit Range
Configuring Resource Quota
Resource quota, defined by a ResourceQuota
object, provides
constraints to limit combined resource consumption per namespace. You can limit the
quantity of objects that can be created in your namespace by type and by the total
amount of resources required.
Note:
This is an optional step. You can perform it if you want to limit the resources for a namespace.
- Create an ocnssf_resource_quota.yaml file using the template given below, and run the following command:
  kubectl create -f <path of ocnssf_resource_quota.yaml file> -n <namespace>
  For example:
  kubectl create -f ./ocnssf_resource_quota.yaml -n ocnssf
  Template:
  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: ocnssf-resource-quota
  spec:
    hard:
      requests.cpu: "200"
      requests.memory: 200Gi
      limits.cpu: "200"
      limits.memory: 200Gi
- Run the following command to apply the resource quota to the given namespace (a verification example follows):
  kubectl apply -f <path of ocnssf_resource_quota.yaml file> -n <namespace>
  For example:
  kubectl apply -f ./ocnssf_resource_quota.yaml -n ocnssf
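To confirm that the quota is in effect, the following standard command can be used (names as per the template above):
kubectl describe resourcequota ocnssf-resource-quota -n ocnssf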
Configuring Limit Range
Limit Range is a policy to limit the resource allocations (limits and requests) that you can specify for each applicable object kind (for example, container) in a namespace.
Note:
If Resource Quota is configured for a namespace, then it is mandatory to configure a Limit Range as well.
To configure the Limit Range for an object in a namespace, perform the steps given below:
- Create an ocnssf_limit_range.yaml file using the template given below, and run the following command:
  kubectl create -f <path of ocnssf_limit_range.yaml file> -n <namespace>
  For example:
  kubectl create -f ./ocnssf_limit_range.yaml -n ocnssf
  Template:
  apiVersion: v1
  kind: LimitRange
  metadata:
    name: ocnssf-limit-range
  spec:
    limits:
    - default:
        memory: 512Mi
        cpu: 0.5
      defaultRequest:
        memory: 256Mi
        cpu: 250m
      type: Container
- Run the following command to apply the limit range to the given namespace (a verification example follows):
  kubectl apply -f <path of ocnssf_limit_range.yaml file> -n <namespace>
  For example:
  kubectl apply -f ./ocnssf_limit_range.yaml -n ocnssf
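Similarly, to confirm the limit range and the defaults it applies, the following standard command can be used:
kubectl describe limitrange ocnssf-limit-range -n ocnssf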
2.2.1.7 Configuring Kubernetes Secret for Accessing Database
This section explains how to configure Kubernetes secrets for accessing the NSSF database.
2.2.1.7.1 Creating and Updating Secret for Privileged Database User
This section explains how to create and update the Kubernetes secret for the Privileged User to access the database.
- Run the following command to create the Kubernetes secret:
  kubectl create secret generic <Privileged User secret name> --from-literal=mysql-username=<Privileged MySQL database username> --from-literal=mysql-password=<Privileged MySQL database password> -n <Namespace>
  Where,
  <Privileged User secret name> is the secret name of the Privileged User.
  <Privileged MySQL database username> is the username of the Privileged User.
  <Privileged MySQL database password> is the password of the Privileged User.
  <Namespace> is the namespace of the NSSF deployment.
  Note:
  Note down the command used during the creation of the Kubernetes secret. This command is used for updating the secret in the future.
  For example:
  $ kubectl create secret generic privileged-db-creds --from-literal=mysql-username=nssfprivilegedusr --from-literal=mysql-password=nssfpasswd -n ocnssf
- Run the following command to verify the secret created:
  $ kubectl describe secret <Privileged User secret name> -n <Namespace>
  Where,
  <Privileged User secret name> is the secret name of the database.
  <Namespace> is the namespace of the NSSF deployment.
  For example:
  $ kubectl describe secret privileged-db-creds -n ocnssf
  Sample output:
  Name:         privileged-db-creds
  Namespace:    ocnssf
  Labels:       <none>
  Annotations:  <none>
  Type:         Opaque
  Data
  ====
  mysql-password:  10 bytes
  mysql-username:  17 bytes
- Update the command used in step 1 with the strings "--dry-run -o yaml" and "kubectl replace -f - -n <Namespace of NSSF deployment>". After the update, the command is as follows (see the concrete example after this procedure):
  $ kubectl create secret generic <Privileged User secret name> --from-literal=mysql-username=<Privileged MySQL database username> --from-literal=mysql-password=<Privileged MySQL database password> --dry-run -o yaml -n <Namespace> | kubectl replace -f - -n <Namespace>
  Where,
  <Privileged User secret name> is the secret name of the Privileged User.
  <Privileged MySQL database username> is the username of the Privileged User.
  <Privileged MySQL database password> is the password of the Privileged User.
  <Namespace> is the namespace of the NSSF deployment.
- Run the updated command. The following message is displayed:
  secret/<Privileged User secret name> replaced
  Where,
  <Privileged User secret name> is the updated secret name of the Privileged User.
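For example, using the example names above, the Privileged User secret would be updated as follows (the password value shown is illustrative):
$ kubectl create secret generic privileged-db-creds --from-literal=mysql-username=nssfprivilegedusr --from-literal=mysql-password=nssfpasswd --dry-run -o yaml -n ocnssf | kubectl replace -f - -n ocnssf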
2.2.1.7.2 Creating and Updating Secret for Application Database User
This section explains how to create and update the Kubernetes secret for the Application User to access the database.
- Run the following command to create the Kubernetes secret:
  $ kubectl create secret generic <Application User secret name> --from-literal=mysql-username=<Application MySQL database username> --from-literal=mysql-password=<Application MySQL database password> -n <Namespace>
  Where,
  <Application User secret name> is the secret name of the Application User.
  <Application MySQL database username> is the username of the Application User.
  <Application MySQL database password> is the password of the Application User.
  <Namespace> is the namespace of the NSSF deployment.
  Note:
  Note down the command used during the creation of the Kubernetes secret. This command is used for updating the secret in the future.
  For example:
  $ kubectl create secret generic ocnssf-db-creds --from-literal=mysql-username=nssfusr --from-literal=mysql-password=nssfpasswd -n ocnssf
- Run the following command to verify the secret created:
  $ kubectl describe secret <Application User secret name> -n <Namespace>
  Where,
  <Application User secret name> is the secret name of the database.
  <Namespace> is the namespace of the NSSF deployment.
  For example:
  $ kubectl describe secret ocnssf-db-creds -n ocnssf
  Sample output:
  Name:         ocnssf-db-creds
  Namespace:    ocnssf
  Labels:       <none>
  Annotations:  <none>
  Type:         Opaque
  Data
  ====
  mysql-password:  10 bytes
  mysql-username:  7 bytes
- Update the command used in step 1 with the strings "--dry-run -o yaml" and "kubectl replace -f - -n <Namespace of NSSF deployment>". After the update, the command is as follows:
  $ kubectl create secret generic <Application User secret name> --from-literal=mysql-username=<Application MySQL database username> --from-literal=mysql-password=<Application MySQL database password> --dry-run -o yaml -n <Namespace> | kubectl replace -f - -n <Namespace>
  Where,
  <Application User secret name> is the secret name of the Application User.
  <Application MySQL database username> is the username of the Application User.
  <Application MySQL database password> is the password of the Application User.
  <Namespace> is the namespace of the NSSF deployment.
- Run the updated command. The following message is displayed:
  secret/<Application User secret name> replaced
  Where,
  <Application User secret name> is the updated secret name of the Application User.
2.2.1.8 Configuring Secrets for Enabling HTTPS
This section explains the steps to configure HTTPS at Ingress and Egress Gateways.
2.2.1.8.1 Managing HTTPS at Ingress Gateway
This section explains the steps to create and update the Kubernetes secret and enable HTTPS at Ingress Gateway.
Creating and Updating Secrets at Ingress Gateway
Note:
- The passwords for TrustStore and KeyStore are stored in their respective password files.
- The process to create private keys, certificates, and passwords is at the discretion of the user or operator.
- To create the Kubernetes secret for HTTPS, the following files are required:
  - ECDSA private key and CA signed certificate of NSSF, if initialAlgorithm is ES256
  - RSA private key and CA signed certificate of NSSF, if initialAlgorithm is RS256
  - TrustStore password file
  - KeyStore password file
  - CA Root file
- Run the following command to create secret:
$ kubectl create secret generic <ocingress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace>
Where,
<ocingress-secret-name> is the secret name for Ingress Gateway.
<ssl_ecdsa_private_key.pem> is the ECDSA private key.
<rsa_private_key_pkcs1.pem> is the RSA private key.
<ssl_truststore.txt> is the SSL TrustStore file.
<ssl_keystore.txt> is the SSL KeyStore file.
<caroot.cer> is the CA Root file.
<ssl_rsa_certificate.crt> is the SSL RSA certificate.
<ssl_ecdsa_certificate.crt> is the SSL ECDSA certificate.
<Namespace> is the namespace of the NSSF deployment.
Note:
Note down the command used during the creation of the secret. Use the same command to update the secret in future.
For example: The file and secret names used below are the same as those provided in the custom_values.yaml file of the NSSF deployment.
$ kubectl create secret generic ocingress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n ocnssf
Note:
It is recommended to use the same secret name as mentioned in the example. If you change <ocingress-secret-name>, update the k8SecretName parameter under the ingressgateway attributes section in the ocnssf_custom_values_23.4.0.yaml file.
- Run the following command to verify the details of the secret created:
$ kubectl describe secret <ocingress-secret-name> -n <Namespace>
Where,
<ocingress-secret-name> is the secret name for Ingress Gateway.
<Namespace> is the namespace of the NSSF deployment.
For example:
$ kubectl describe secret ocingress-secret -n ocnssf
Sample output:
Name:         ocingress-secret
Namespace:    ocnssf
Labels:       <none>
Annotations:  <none>

Type:  Opaque
- <Optional> Perform the following tasks to add, delete, or modify TLS or SSL
certificates in the secret:
- To add a certificate, run the following command:
TLS_CRT=$(base64 < "<certificate-name>" | tr -d '\n')
kubectl patch secret <secret-name> -p "{\"data\":{\"<certificate-name>\":\"${TLS_CRT}\"}}"
Where,
<certificate-name>
is the certificate file name.<secret-name>
is the name of the secret, for example, ocnssf-secret.Example:
If you want to add a Certificate Authority (CA) Root from the
caroot.cer
file to theocnssf-secret
, run the following command:
TLS_CRT=$(base64 < "caroot.cer" | tr -d '\n')
kubectl patch secret ocnssf-secret -p "{\"data\":{\"caroot.cer\":\"${TLS_CRT}\"}}" -n ocnssf
Similarly, you can also add other certificates and keys to the ocnssf-secret.
- To update an existing certificate, run the following
command:
TLS_CRT=$(base64 < "<updated-certificate-name>" | tr -d '\n')
kubectl patch secret <secret-name> -p "{\"data\":{\"<certificate-name>\":\"${TLS_CRT}\"}}"
Where,
<updated-certificate-name>
is the certificate file that contains the updated content.Example:
If you want to update the private key present in the
rsa_private_key_pkcs1.pem
file in the ocnssf-secret, run the following command:
TLS_CRT=$(base64 < "rsa_private_key_pkcs1.pem" | tr -d '\n')
kubectl patch secret ocnssf-secret -p "{\"data\":{\"rsa_private_key_pkcs1.pem\":\"${TLS_CRT}\"}}" -n ocnssf
Similarly, you can also update other certificates and keys to the ocnssf-secret.
- To remove an existing certificate, run the following
command:
kubectl patch secret <secret-name> -p "{\"data\":{\"<certificate-name>\":null}}"
Where,
<certificate-name>
is the name of the certificate to be removed.The certificate must be removed when it expires or needs to be revoked.
Example:
To remove the CA Root from the ocnssf-secret, run the following command:
kubectl patch secret ocnssf-secret -p "{\"data\":{\"caroot.cer\":null}}" -n ocnssf
Similarly, you can also remove other certificates and keys from the ocnssf-secret.
- To update the secret, update the command used in step 1 by appending "--dry-run=client -o yaml" and piping the output to "kubectl replace -f - -n <Namespace of NSSF deployment>". After the update, use the following command:
$ kubectl create secret generic <ocingress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> --dry-run=client -o yaml -n <Namespace> | kubectl replace -f - -n <Namespace>
For example:
$ kubectl create secret generic ocingress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --dry-run=client -o yaml -n ocnssf | kubectl replace -f - -n ocnssf
Note:
The names used in the aforementioned command must be the same as the names provided in the ocnssf_custom_values_23.4.0.yaml file of the NSSF deployment.
- Run the updated command.
After the secret update is complete, the following message appears:
secret/<ocingress-secret> replaced
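Since rotation decisions depend on certificate expiry (see the remove step above), you can read the expiry date directly from the stored secret; a quick check, assuming the caroot.cer entry:
kubectl get secret ocingress-secret -n ocnssf -o jsonpath='{.data.caroot\.cer}' | base64 -d | openssl x509 -noout -subject -enddate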
Enabling HTTPS at Ingress Gateway
This step is required only when SSL settings need to be enabled on the Ingress Gateway microservice of NSSF.
- Enable the enableIncomingHttps parameter under the Ingress Gateway Global Parameters section in the ocnssf_custom_values_23.4.0.yaml file. For more information about the enableIncomingHttps parameter, see the global parameters section of the ocnssf_custom_values_23.4.0.yaml file.
- Configure the following details in the ssl section under ingressgateway attributes, if you changed the attributes while creating the secret:
- Kubernetes namespace
- Kubernetes secret name holding the certificate details
- Certificate information
ingress-gateway:
  nodeselector:
    nodekey: ""
    nodevalue: ""
  enableIncomingHttps: false
  service:
    ssl:
      tlsVersion: TLSv1.2
      privateKey:
        k8SecretName: accesstoken-secret
        k8NameSpace: *ns
        rsa:
          fileName: rsa_private_key_pkcs1.pem
        ecdsa:
          fileName: ec_private_key_pkcs8.pem
      certificate:
        k8SecretName: accesstoken-secret
        k8NameSpace: *ns
        rsa:
          fileName: rsa_apigatewayTestCA.cer
        ecdsa:
          fileName: apigatewayTestCA.cer
      caBundle:
        k8SecretName: accesstoken-secret
        k8NameSpace: *ns
        fileName: caroot.cer
      keyStorePassword:
        k8SecretName: accesstoken-secret
        k8NameSpace: *ns
        fileName: key.txt
      trustStorePassword:
        k8SecretName: accesstoken-secret
        k8NameSpace: *ns
        fileName: trust.txt
      initialAlgorithm: RSA256
- Save the
ocnssf_custom_values_23.4.0.yaml
file.
2.2.1.8.2 Managing HTTPS at Egress Gateway
This section explains the steps to create and update the Kubernetes secret and enable HTTPS at Egress Gateway.
Creating and Updating Secrets at Egress Gateway
Note:
- The passwords for TrustStore and KeyStore are stored in respective password files.
- The process to create private keys, certificates, and passwords is at the discretion of the user or operator.
- To create Kubernetes secret for HTTPS, the following files are required:
- ECDSA private key and CA signed certificate of NSSF, if initialAlgorithm is ES256
- RSA private key and CA signed certificate of NSSF, if initialAlgorithm is RS256
- TrustStore password file
- KeyStore password file
- Run the following command to create the secret:
$ kubectl create secret generic <ocegress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<ssl_rsa_private_key.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<ssl_cabundle.crt> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace>
Where,
<ocegress-secret-name> is the secret name for Egress Gateway.
<ssl_ecdsa_private_key.pem> is the ECDSA private key.
<ssl_rsa_private_key.pem> is the RSA private key.
<ssl_truststore.txt> is the SSL TrustStore file.
<ssl_keystore.txt> is the SSL KeyStore file.
<ssl_cabundle.crt> is the SSL CA Bundle certificate.
<ssl_rsa_certificate.crt> is the SSL RSA certificate.
<ssl_ecdsa_certificate.crt> is the SSL ECDSA certificate.
<Namespace> is the namespace of the NSSF deployment.
Note:
Note down the command used during the creation of the secret. Use the same command to update the secret in future.
For example:
$ kubectl create secret generic ocegress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=ssl_rsa_private_key.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=ssl_cabundle.crt --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n ocnssf
Note:
It is recommended to use the same secret name as mentioned in the example. If you change <ocegress-secret-name>, update the k8SecretName parameter under the egressgateway attributes section in the ocnssf_custom_values_23.4.0.yaml file.
- Run the following command to verify the details of the secret created:
$ kubectl describe secret <ocegress-secret-name> -n <Namespace>
Where,
<ocegress-secret-name> is the secret name for Egress Gateway.
<Namespace> is the namespace of the NSSF deployment.
For example:
$ kubectl describe secret ocegress-secret -n ocnssf
- Update the command used in step 1 by appending "--dry-run=client -o yaml" and piping the output to "kubectl replace -f - -n <Namespace of NSSF deployment>". After the update, use the following command:
kubectl create secret generic <ocegress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<ssl_rsa_private_key.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<ssl_cabundle.crt> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> --dry-run=client -o yaml -n <Namespace> | kubectl replace -f - -n <Namespace>
For example:
$ kubectl create secret generic ocegress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=ssl_rsa_private_key.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=ssl_cabundle.crt --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --dry-run=client -o yaml -n ocnssf | kubectl replace -f - -n ocnssf
Note:
The names used in the aforementioned command must be the same as the names provided in the ocnssf_custom_values_23.4.0.yaml file of the NSSF deployment.
- Run the updated command. After successful secret update, the
following message is
displayed:
secret/<ocegress-secret> replaced
Enabling HTTPS at Egress Gateway
- Enable the enableOutgoingHttps parameter under the egressgateway attributes section in the ocnssf_custom_values_23.4.0.yaml file. For more information about the enableOutgoingHttps parameter, see the Egress Gateway section.
- Configure the following details in the ssl section under egressgateway attributes, if you changed the attributes while creating the secret:
- Kubernetes namespace
- Kubernetes secret name holding the certificate details
- Certificate information
egress-gateway:
  nodeselector:
    nodekey: ""
    nodevalue: ""
  enableOutgoingHttps: false
  service:
    # Specify type of service - Possible values are :- ClusterIP, NodePort, LoadBalancer and ExternalName
    type: ClusterIP
    ssl:
      tlsVersion: TLSv1.2
      privateKey:
        k8SecretName: accesstoken-secret
        k8NameSpace: *ns
        rsa:
          fileName: rsa_private_key_pkcs1.pem
        ecdsa:
          fileName: ec_private_key_pkcs8.pem
      certificate:
        k8SecretName: accesstoken-secret
        k8NameSpace: *ns
        rsa:
          fileName: rsa_apigatewayTestCA.cer
        ecdsa:
          fileName: apigatewayTestCA.cer
      caBundle:
        k8SecretName: accesstoken-secret
        k8NameSpace: *ns
        fileName: caroot.cer
      keyStorePassword:
        k8SecretName: accesstoken-secret
        k8NameSpace: *ns
        fileName: key.txt
      trustStorePassword:
        k8SecretName: accesstoken-secret
        k8NameSpace: *ns
        fileName: trust.txt
      initialAlgorithm: RSA256
- Save the
ocnssf_custom_values_23.4.0.yaml
file.
2.2.1.9 Configuring Secrets to Enable Access Token
This section explains how to configure a secret for enabling access token.
2.2.1.9.1 Generating KeyPairs for NRF Instances
Note:
It is at the discretion of the user to create private keys and certificates, and it is not in the scope of NSSF. This section lists only samples to create KeyPairs.Using the OpenSSL tool, you can generate KeyPairs for each of the NRF
instances. Run the following commands to generate ec_private_key1.pem
,
ec_private_key_pkcs8.pem
, and
4bc0c762-0212-416a-bd94-b7f1fb348bd4.crt
files:
openssl ecparam -genkey -name prime256v1 -noout -out ec_private_key1.pem
openssl pkcs8 -topk8 -in ec_private_key1.pem -inform pem -out ec_private_key_pkcs8.pem -outform pem -nocrypt
openssl req -new -key ec_private_key_pkcs8.pem -x509 -nodes -days 365 -out 4bc0c762-0212-416a-bd94-b7f1fb348bd4.crt -subj "/C=IN/ST=KA/L=BLR/O=ORACLE/OU=CGBU/CN=ocnrf-endpoint.ocnrf.svc.cluster.local"
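You can optionally verify the generated certificate and key before creating the secret; for example:
openssl x509 -in 4bc0c762-0212-416a-bd94-b7f1fb348bd4.crt -noout -subject -dates
openssl pkey -in ec_private_key_pkcs8.pem -noout -text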
2.2.1.9.2 Enabling and Configuring Access Token
To enable access token validation, configure both Helm-based and REST-based configurations on Ingress Gateway.
Configuration using Helm:
For Helm-based configuration, perform the following steps:
- Create a secret that stores NRF public key certificates using
the following
commands:
kubectl create secret generic <secret-name> --from-file=<filename.crt> -n <Namespace>
Where,
<secret-name> is the secret name.
<Namespace> is the NSSF namespace.
<filename.crt> is the public key certificate. The secret can contain any number of certificates.
For example:
kubectl create secret generic oauthsecret --from-file=4bc0c762-0212-416a-bd94-b7f1fb348bd4.crt -n ocnssf
- Enable the
oauthValidatorEnabled
parameter on Ingress Gateway by setting its value totrue
. Further, configure the secret and namespace on Ingress Gateway in the OAUTH CONFIGURATION section of theocnssf_custom_values_23.4.0.yaml
file using the following fields:oauthValidatorEnabled
nfType
nfInstanceId
producerScope
allowedClockSkewSeconds
enableInstanceIdConfigHook
nrfPublicKeyKubeSecret
nrfPublicKeyKubeNamespace
validationType
producerPlmnMNC
producerPlmnMCC
oauthErrorConfigForValidationFailure
oauthErrorConfigForValidationFailure.errorCode
oauthErrorConfigForValidationFailure.errorTitle
oauthErrorConfigForValidationFailure.errorDescription
oauthErrorConfigForValidationFailure.errorCause
oauthErrorConfigForValidationFailure.redirectUrl
oauthErrorConfigForValidationFailure.retryAfter
oauthErrorConfigForValidationFailure.errorTrigger
oauthErrorConfigForValidationFailure.errorTrigger.exceptionType
The following is a sample Helm configuration. For more information on parameters and their supported values, see Ingress Gateway Parameters.
#OAUTH CONFIGURATION
oauthValidatorEnabled: true
nfType: NSSF
nfInstanceId: 9faf1bbc-6e4a-4454-a507-aef01a101a01
producerScope: nnssf-configuration
allowedClockSkewSeconds: 0
enableInstanceIdConfigHook: true
nrfPublicKeyKubeSecret: oauthsecret
nrfPublicKeyKubeNamespace: ocnssf
validationType: strict
producerPlmnMNC: 14
producerPlmnMCC: 310
oauthErrorConfigForValidationFailure:
  errorCode: 401
  errorTitle: "Validation failure"
  errorDescription: "UNAUTHORIZED"
  errorCause: "oAuth access Token validation failed"
  redirectUrl:
  retryAfter:
  errorTrigger:
    - exceptionType: OAUTH_CERT_EXPIRED
      errorCode: 408
      errorCause: certificate has expired
      errorTitle:
      errorDescription:
      retryAfter:
      redirectUrl:
    - exceptionType: OAUTH_MISMATCH_IN_KID
      errorCode: 407
      errorCause: kid configured does not match with the one present in the token
      errorTitle:
      errorDescription:
      retryAfter:
      redirectUrl:
    - exceptionType: OAUTH_PRODUCER_SCOPE_NOT_PRESENT
      errorCode: 406
      errorCause: producer scope is not present in token
      errorTitle:
      errorDescription:
      retryAfter:
      redirectUrl:
    - exceptionType: OAUTH_PRODUCER_SCOPE_MISMATCH
      errorCode: 405
      errorCause: producer scope in token does not match with the configuration
      errorTitle:
      errorDescription:
      retryAfter:
      redirectUrl:
    - exceptionType: OAUTH_MISMATCH_IN_NRF_INSTANCEID
      errorCode: 404
      errorCause: nrf id configured does not match with the one present in the token
      errorTitle:
      errorDescription:
      retryAfter:
      redirectUrl:
    - exceptionType: OAUTH_PRODUCER_PLMNID_MISMATCH
      errorCode: 403
      errorCause: producer plmn id in token does not match with the configuration
      errorTitle:
      errorDescription:
      retryAfter:
      redirectUrl:
    - exceptionType: OAUTH_AUDIENCE_NOT_PRESENT_OR_INVALID
      errorCode: 402
      errorCause: audience in token does not match with the configuration
      errorTitle:
      errorDescription:
      retryAfter:
      redirectUrl:
    - exceptionType: OAUTH_TOKEN_INVALID
      errorCode: 401
      errorCause: oauth token is corrupted
      errorTitle:
      errorDescription:
      retryAfter:
      redirectUrl:
oauthErrorConfigOnTokenAbsence:
  errorCode: 400
  errorTitle: "Token not present"
  errorDescription: "UNAUTHORIZED"
  errorCause: "oAuth access Token is not present"
  redirectUrl:
  retryAfter:
Configuration using REST API
After Helm configuration, send the REST requests to Ingress Gateway to use configured public key certificates. Using REST-based configuration, you can distinguish between the certificates configured on different NRFs and can use these certificates to validate the token received from a specific NRF.
For more information about REST API configuration, see "OAuth Validator Configuration" section in Cloud Native Core, Network Slice Selection Function REST Specification Guide.
Note:
If a configured public key certificate expires or a new certificate is added for a different NRF, update the existing configuration as follows:
- Delete an existing secret and create a new secret with updated
public key certificate. To delete a secret, run the following
command:
kubectl delete secret <secret-name> -n <namespace>
Where,
<secret-name> is the secret name.
<namespace> is the NSSF namespace.
For example:
kubectl delete secret oauthsecret -n ocnssf
- Send the certificate configuration update request using REST
API. The request should include the
keyIdList
andinstanceIdList
with new certificates.
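As a sketch of the recreation step after the delete (the certificate file name below is a placeholder for the renewed certificate):
kubectl create secret generic oauthsecret --from-file=<new-certificate.crt> -n ocnssf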
2.2.1.10 Configuring NSSF to Support Aspen Service Mesh
NSSF leverages the platform service mesh (for example, Aspen Service Mesh) for all internal and external TLS communication. The service mesh integration provides inter-NF communication and allows the API gateway to work with the service mesh. The service mesh supports the services by deploying a special sidecar proxy container in each pod to intercept all network communication between microservices.
Supported ASM versions: 1.14.6 and 1.11.8
For ASM installation and configuration details, see the official Aspen Service Mesh documentation.
Aspen Service Mesh (ASM) configurations are categorized as follows:
- Control Plane: It involves adding labels or annotations to inject sidecar. The control plane configurations are part of the NF Helm chart.
- Data Plane: It helps in traffic management, such as handling NF call flows, by adding Service Entries (SE), Destination Rules (DR), Envoy Filters (EF), and other resources. This configuration can be done using the ocnssf_servicemesh_config_custom_values_23.4.0.yaml file.
Configuring Service Mesh Data Plane
Data Plane configuration consists of the following Custom Resource Definitions (CRDs):
- Service Entry (SE)
- Destination Rule (DR)
- Envoy Filter (EF)
- Peer Authentication (PA)
- Virtual Service (VS)
- Request Authentication (RA)
- Policy Authorization (PA)
Note:
Use ocnssf_servicemesh_config_custom_values_23.4.0.yaml to add or remove the CRDs that you may require due to service mesh upgrades, and to configure features across different releases.
The Data Plane configuration is applicable in the following scenarios; an illustrative Service Entry and Destination Rule manifest is shown after this list. For more information on Custom Resources (CRs), see Service Mesh CRDs.
- Service Entry: Enables adding additional entries into Sidecar's internal service registry, so that auto-discovered services in the mesh can access or route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints).
- Destination Rule: Defines policies that apply to traffic intended for service after routing has occurred. These rules specify configuration for load balancing, connection pool size from the sidecar, and outlier detection settings to detect and evict unhealthy hosts from the load-balancing pool.
- Envoy Filter: Provides a mechanism to customize the Envoy configuration generated by Istio Pilot. Use Envoy Filter to modify values for certain fields, add specific filters, or even add entirely new listeners, clusters, and so on.
- Peer Authentication: Used for service-to-service authentication to verify the client making the connection.
- Virtual Service: A Virtual Service defines a set of traffic routing rules to apply when a host is addressed. Each routing rule defines matching criteria for the traffic of a specific protocol. If the traffic is matched, then it is sent to a named destination service (or subset or version of it) defined in the registry.
- Request Authentication: Used for end-user authentication to verify the credential attached to the request.
- Policy Authorization: Sidecar Authorization Policy enables access control
on workloads in the mesh. Authorization policy supports
CUSTOM
,DENY
, andALLOW
actions for access control. WhenCUSTOM
,DENY
, andALLOW
actions are used for a workload at the same time, theCUSTOM
action is evaluated first, then theDENY
action, and finally theALLOW
action.
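The Service Entry and Destination Rule described above are standard Istio resources. The following is an illustrative sketch only; the resource names, host, and port are assumptions, and the actual CRs for NSSF should be generated from ocnssf_servicemesh_config_custom_values_23.4.0.yaml:
cat <<'EOF' | kubectl apply -n ocnssf -f -
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: nrf-service-entry        # illustrative name
spec:
  hosts:
    - ocnrf-endpoint.ocnrf.svc.cluster.local
  exportTo:
    - "."
  ports:
    - number: 8080               # assumed NRF port
      name: http2
      protocol: HTTP2
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: nrf-destination-rule     # illustrative name
spec:
  host: ocnrf-endpoint.ocnrf.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL         # let the sidecar establish mutual TLS
EOF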
Service Mesh Configuration File
A sample ocnssf_servicemesh_config_custom_values_23.4.0.yaml is available in the Custom_Templates folder. To download the file, see Customizing NSSF.
Table 2-12 Supported Fields in CRD
CRD | Supported Fields |
---|---|
Service Entry | hosts |
exportTo | |
addresses | |
ports.name | |
ports.number | |
ports.protocol | |
resolution | |
Destination Rule | host |
mode | |
sbitimers | |
tcpConnectTimeout | |
tcpKeepAliveProbes | |
tcpKeepAliveTime | |
tcpKeepAliveInterval | |
Envoy Filters | labelselector |
applyTo | |
filtername | |
operation | |
typeconfig | |
configkey | |
configvalue | |
stream_idle_timeout | |
max_stream_duration | |
patchContext | |
networkFilter_listener_port | |
transport_socket_connect_timeout | |
filterChain_listener_port | |
route_idle_timeout | |
route_max_stream_duration | |
httpRoute_routeConfiguration_port | |
vhostname | |
Peer Authentication | labelselector |
tlsmode | |
Virtual Service | host |
destinationhost | |
port | |
exportTo | |
retryon | |
attempts | |
timeout | |
Request Authentication | labelselector |
issuer | |
jwks/jwksUri | |
Policy Authorization | labelselector |
action | |
hosts | |
paths | |
xfccvalues |
2.2.1.10.1 Predeployment Configurations
This section explains the predeployment configuration procedure to install NSSF with Service Mesh support.
Follow the procedure below:
- Create NSSF namespace
- Run the following command to verify whether the required namespace already exists in the system:
$ kubectl get namespaces
- In the output of the above command, check if the required namespace is available. If it is not available, run the following command to create the namespace:
$ kubectl create namespace <namespace>
Where,
<Namespace> is the NSSF namespace.
For example:
$ kubectl create namespace ocnssf
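As a convenience, the check and creation can be combined in a single shell line (namespace name assumed to be ocnssf):
# Create the namespace only if it does not exist yet
kubectl get namespace ocnssf >/dev/null 2>&1 || kubectl create namespace ocnssf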
2.2.1.10.2 Installing Service Mesh Configuration Charts
Perform the following steps to configure Service Mesh CRDs using the Service Mesh Configuration chart:
- Download the service mesh chart
ocnssf-servicemesh-config-23.4.0.0.0.tgz
from the ocnssf_pkg_23_4_0_0_0.tgz package.
- Configure the ocnssf_servicemesh_config_custom_values_23.4.0.yaml file as follows:
  - Modify only the "SERVICE-MESH Custom Resource Configuration" section to configure the CRDs as needed. For example, to add or modify the required attributes of a ServiceEntry CR, configure its values under the serviceEntries: section of "SERVICE-MESH Custom Resource Configuration". You can also comment out the CRDs that you do not need.
- Install the Service Mesh Configuration Charts as follows:
  - Run the following Helm install command on the namespace where you want to apply the changes:
helm install <helm-release-name> <charts> --namespace <namespace-name> -f <custom-values.yaml-filename>
For example:
helm install ocnssf-servicemesh-config ocnssf-servicemesh-config-23.4.0.0.0.tgz --namespace ocnssf -f ocnssf_servicemesh_config_custom_values_23.4.0.yaml
- Run the following command to verify that all the CRDs are created:
kubectl get <CRD-Name> -n <Namespace>
For example:
kubectl get se,dr,peerauthentication,envoyfilter,vs -n ocnssf
Note:
To modify existing CRDs or add new CRDs, update the ocnssf_servicemesh_config_custom_values_23.4.0.yaml file and run Helm upgrade.
2.2.1.10.3 Deploying NSSF with Service Mesh
- Label the NSSF namespace for automatic sidecar injection so that sidecars are added to all the pods spawned in the NSSF namespace:
$ kubectl label ns <Namespace> istio-injection=enabled
Where,
<Namespace>
is the NSSF namespace.For example:
$ kubectl label ns ocnssf istio-injection=enabled
- Update
ocnssf_custom_values_23.4.0.yaml
with the following annotations:- Update the global section for adding annotation for the following use cases:
- To scrape metrics from NSSF pods, add
oracle.com/cnc: "true"
annotation.Note:
This step is required only if OSO is deployed. - Enable Prometheus to scrape metrics from NSSF pods by adding
"
9090
" totraffic.sidecar.istio.io/excludeInboundPorts
annotation. - Enable Coherence to form cluster in ASM based deployment by adding
"9090,8095,8096,7,53"
totraffic.sidecar.istio.io/excludeInboundPorts
annotation.For example:global: customExtension: allResources: labels: {} annotations: {} lbDeployments: annotations: oracle.com/cnc: "true" traffic.sidecar.istio.io/excludeInboundPorts: "9090,8095,8096,7,53" nonlbDeployments: annotations: oracle.com/cnc: "true" traffic.sidecar.istio.io/excludeInboundPorts: "9090,8095,8096,7,53"
- Update the following attributes under the global section:
Check if the serviceMeshCheck flag is set to true in the Global parameter section.
Note:
The serviceMeshCheck parameter is mandatory and the other two parameters are read-only.
# Mandatory: This parameter must be set to "true" when NSSF is deployed with the Service Mesh
serviceMeshCheck: true
# Mandatory: needs to be set with correct url format "http://127.0.0.1:<istio management port>/quitquitquit" if NSSF is deployed with the Service Mesh.
istioSidecarQuitUrl: "http://127.0.0.1:15000/quitquitquit"
# Mandatory: needs to be set with correct url format "http://127.0.0.1:<istio management port>/ready" if NSSF is deployed with the Service Mesh.
istioSidecarReadyUrl: "http://127.0.0.1:15000/ready"
- Change the ingress-gateway Service Type to ClusterIP under the ingress-gateway global section:
global:
  # Service Type
  type: ClusterIP
- Update the following attributes in the egress-gateway section to enforce the egress-gateway container to send non-TLS egress requests irrespective of the HTTP scheme value of the message. In a Service Mesh-based deployment, the sidecar container takes care of establishing a TLS connection with the peer.
egress-gateway:
  # Mandatory: This flag needs to be set to "true" if a Service Mesh is present where ocnssf will be deployed
  # This is to enable egress gateway to forward http2 (and not https) requests even when it receives https requests
  httpRuriOnly: "true"
- Update the following sidecar resource configuration in Global
section:
deployment:
  customExtension:
    labels: {}
    annotations: {
      # Enable this section for service-mesh based installation
      sidecar.istio.io/proxyCPU: "2",
      sidecar.istio.io/proxyCPULimit: "2",
      sidecar.istio.io/proxyMemory: "2Gi",
      sidecar.istio.io/proxyMemoryLimit: "2Gi"
    }
- Install NSSF using updated
ocnssf_custom_values_23.4.0.yaml
. For more information about NSSF installation, see Installation Tasks.
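Once the NSSF pods come up, you can spot-check that the sidecars were injected, for example:
kubectl get pods -n ocnssf
# With injection enabled, the READY column shows one additional container per pod
# (for example, 2/2 instead of 1/1) for the istio-proxy sidecar.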
2.2.1.10.4 Postdeployment Configuration
This section explains the post-deployment configurations to install NSSF with support for service mesh.
Enable Inter-NF communication
For every new NF participating in call flows where NSSF is a client, a DestinationRule and a ServiceEntry must be created in the NSSF namespace to enable communication, for example:
- NSSF to AMF communication (for notification)
- NSSF to NRF communication (for registration and heartbeat)
Create the CRDs using the ocnssf_servicemesh_config_custom_values_23.4.0.yaml file in the Custom_Templates folder.
2.2.1.10.5 Redeploying NSSF without Service Mesh
This section describes the steps to redeploy NSSF without Service Mesh resources.
- To disable Service Mesh, run the following command:
kubectl label ns <ocnssf_namespace> istio-injection=disabled
Where,
<ocnssf_namespace> is the namespace of NSSF.
For example:
kubectl label ns ocnssf istio-injection=disabled
- Remove the Istio sidecar annotations from the ocnssf_custom_values_23.4.0.yaml file.
  - Remove the traffic.sidecar.istio.io/excludeInboundPorts annotation. To continue scraping metrics from NSSF pods, retain the oracle.com/cnc: "true" annotation.
Note:
The oracle.com/cnc: "true" annotation is required only if OSO is deployed.
For example:
global:
  customExtension:
    allResources:
      labels: {}
      annotations: {}
    lbDeployments:
      annotations:
        oracle.com/cnc: "true"
    nonlbDeployments:
      annotations:
        oracle.com/cnc: "true"
- Update the following attributes under the global section:
Check if the serviceMeshCheck flag is set to false in the Global parameter section.
Note:
The serviceMeshCheck parameter is mandatory and the other two parameters are read-only.
# Mandatory: This parameter must be set to "false" when NSSF is deployed without the Service Mesh
serviceMeshCheck: false
# Mandatory: needs to be set with correct url format "http://127.0.0.1:<istio management port>/quitquitquit" if NSSF is deployed with the Service Mesh.
istioSidecarQuitUrl: "http://127.0.0.1:15000/quitquitquit"
# Mandatory: needs to be set with correct url format "http://127.0.0.1:<istio management port>/ready" if NSSF is deployed with the Service Mesh.
istioSidecarReadyUrl: "http://127.0.0.1:15000/ready"
- Change the Ingress Gateway Service Type to LoadBalancer under the ingress-gateway global section:
global:
  # Service Type
  type: LoadBalancer
- Update the following attributes in the egress-gateway section so that the egress-gateway container no longer forces non-TLS egress requests irrespective of the HTTP scheme value of the message. Without a Service Mesh, there is no sidecar container to establish the TLS connection with the peer.
egress-gateway:
  # Mandatory: This flag needs to be set to "false" if no Service Mesh is present where ocnssf will be deployed
  httpRuriOnly: "false"
- Remove the sidecar resource configuration in Global
section:
deployment:
  customExtension:
    labels: {}
    annotations: {}
- Upgrade or install NSSF using updated
ocnssf_custom_values_23.4.0.yaml
. For more information about NSSF installation, see Installation Tasks.
2.2.1.10.6 Deleting Service Mesh Resources
This section describes the steps to delete Service Mesh resources.
- To delete Service Mesh resources, run the following
command:
helm delete <helm-release-name> -n <namespace-name>
Where,
<helm-release-name> is the release name used by the helm command. This release name must be the same as the release name used for Service Mesh CR creation.
<namespace-name> is the deployment namespace used by the Helm command.
For example:
helm delete ocnssf-servicemesh-config -n ocnssf
- To verify if Service Mesh resources are deleted, run the following
command:
kubectl get <CRD-Name> -n <Namespace>
For example:
kubectl get se,dr,peerauthentication,envoyfilter,vs -n ocnssf
2.2.1.11 Configuring Network Policies
Kubernetes network policies allow you to define ingress or egress rules based on Kubernetes resources such as Pod, Namespace, IP, and Port. These rules are selected based on Kubernetes labels in the application.
These network policies enforce access restrictions for all the applicable data flows except the communication from Kubernetes node to pod for invoking container probe.
Note:
Configuring network policies is optional. Configure them based on your security requirements.
For more information on the network policy, see https://kubernetes.io/docs/concepts/services-networking/network-policies/.
Note:
- If the traffic is blocked or unblocked between the pods even after applying network policies, check whether any existing policy impacts the same pod or set of pods, as it might alter the overall cumulative behavior.
- If the default ports of services such as Prometheus, Database, or Jaeger are changed, or if the Ingress or Egress Gateway names are overridden, update them in the corresponding network policies.
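For reference, a minimal policy of the kind installed by the chart might look like the following sketch. It is illustrative only; the actual policies come from the ocnssf-network-policy Helm chart and its custom values file:
cat <<'EOF' | kubectl apply -n ocnssf -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-ingress-all            # matches a policy name in the sample output below
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/part-of: ocnssf
  policyTypes:
    - Ingress                       # no ingress rules listed, so all inbound traffic is denied
EOF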
Installing Network Policies
Prerequisite
Network policies are implemented by the network plug-in. To use network policies, you must use a networking solution that supports NetworkPolicy.
Note:
For a fresh installation, it is recommended to install Network Policies before installing NSSF. However, if NSSF is already installed, you can still install the Network Policies.- Open the
ocnssf_network_policy_custom_values_23.4.0.yaml
file provided in the release package zip file. For downloading the file, see Downloading the NSSF Package. - Update the
ocnssf_network_policy_custom_values_23.4.0.yaml
file as per the requirement. For more information on the parameters, see the Table 2-13 parameter table. - Run the following command to install the network
policies:
helm install <helm-release-name> <network-policy>/ -n <namespace> -f <custom-value-file>
Where,
<helm-release-name> is the ocnssf-network-policy Helm release name.
<custom-value-file> is the ocnssf-network-policy custom values file.
<namespace> must be the NSSF namespace.
For example:
helm install ocnssf-network-policy ocnssf-network-policy/ -n ocnssf -f ocnssf_network_policy_custom_values_23.4.0.yaml
Note:
- The connections created before installing network policy are not impacted by the new network policy. Only the new connections are impacted.
- If you are using the ATS suite along with network policies, NSSF and ATS must be installed in the same namespace.
- While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.
Upgrading Network Policies
- Modify the
ocnssf_network_policy_custom_values_23.4.0.yaml
file to update, add, or delete network policies.
- Run the following command to upgrade the network
policies:
helm upgrade <helm-release-name> <network-policy>/ -n <namespace> -f <custom-value-file>
Sample command:
helm upgrade ocnssf-network-policy ocnssf-network-policy/ -n ocnssf -f ocnssf_network_policy_custom_values_23.4.0.yaml
Note:
While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.
Verifying Network Policies
Run the following command to verify that the network policies have been applied successfully:
kubectl get networkpolicy -n <namespace>
Where,
<namespace> must be the NSSF namespace.
Sample command:
kubectl get networkpolicy -n ocnssf
Sample output:
NAME POD-SELECTOR AGE
allow-egress-database app.kubernetes.io/part-of=ocnssf 21h
allow-egress-dns app.kubernetes.io/part-of=ocnssf 21h
allow-egress-jaeger app.kubernetes.io/part-of=ocnssf 21h
allow-egress-k8-api app.kubernetes.io/part-of=ocnssf 21h
allow-egress-sbi app.kubernetes.io/name=egressgateway 21h
allow-egress-to-nssf-pods app.kubernetes.io/part-of=ocnssf 21h
allow-from-node-port app=ocats-nssf 21h
allow-ingress-from-console app.kubernetes.io/name=nssfconfiguration 21h
allow-ingress-from-nssf-pods app.kubernetes.io/part-of=ocnssf 21h
allow-ingress-prometheus app.kubernetes.io/part-of=ocnssf 21h
allow-ingress-sbi app.kubernetes.io/name=ingressgateway 21h
deny-egress-all app.kubernetes.io/part-of=ocnssf 21h
deny-ingress-all app.kubernetes.io/part-of=ocnssf 21h
Uninstalling Network Policies
Run the following command to uninstall the network policies:
helm uninstall <helm-release-name> -n <namespace>
Sample command:
helm uninstall ocnssf-network-policy -n ocnssf
Note:
While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.
Configuration Parameters for Network Policies
Table 2-13 Supported Kubernetes Resource for Configuring Network Policy
Parameter | Description | Default Value |
---|---|---|
apiVersion |
This is a mandatory parameter. This indicates Kubernetes version for access control.Note: This is the supported api version for network policy. This is a read-only parameter. |
networking.k8s.io/v1 |
kind |
This is a mandatory parameter. This represents the REST resource this object represents.Note: This is a read-only parameter. |
NetworkPolicy |
Table 2-14 Configuration Parameters for Network Policy
Parameter | Description | Default Value |
---|---|---|
metadata.name |
This is a mandatory parameter.
This indicates unique name for the network policy. |
{{ .metadata.name }} |
spec.{} |
This is a mandatory parameter.
This consists of all the information needed to define a particular network policy in the given namespace. Note: NSSF supports the spec parameters defined in Kubernetes Resource Category. |
NA |
For more information about this functionality, see Network Policies in Oracle Communications Cloud Native Core, Network Slice Selection Function User Guide.
2.2.2 Installation Tasks
This section explains how to install Network Slice Selection Function (NSSF).
Note:
- Before installing NSSF, you must complete Prerequisites and Preinstallation Tasks.
- In a multisite georedundant setup, perform the steps explained in this section on all the georedundant sites.
2.2.2.1 Installing NSSF Package
To install the NSSF package, perform the following steps:
- Run the following command to access the extracted package:
cd ocnssf-<release_number>
For example:
cd ocnssf-23.4.0.0.0
- Customize the
ocnssf_custom_values_23.4.0.yaml
file with the required deployment parameters. See Customizing NSSF chapter to customize the file. For more information about predeployment parameter configurations, see Preinstallation Tasks.Note:
- In case of multisite georedundant setups, configure
nfInstanceId
uniquely for each NSSF site. - Ensure the
nfInstanceId
configuration in the global section is same as that in theappProfile
section of NRF client.
- In case of multisite georedundant setups, configure
- <Optional> Customize the
ocnssf_servicemesh_config_custom_values_23.4.0.yaml
file with the required parameters if you are creating a DestinationRule and ServiceEntry using the yaml file. See Configuring NSSF to Support Aspen Service Mesh for the sample template.
Entry using yaml
file:
helm install <helm-release-name> <charts> --namespace <namespace-name> -f <custom-values.yaml-filename>
For example:
helm install ocnssf-servicemesh-config ocnssf-servicemesh-config/ --namespace ocnssf -f ocnssf_servicemesh_config_custom_values_23.4.0.yaml
- Run the following command to install NSSF:
- Using local Helm
chart:
helm install <helm-release-name> <charts> --namespace <namespace-name> -f <custom-values.yaml-filename>
For example:
helm install ocnssf ocnssf/ --namespace ocnssf -f ocnssf_custom_values_23.4.0.yaml
- Using chart from Helm
repo:
helm install <helm-release-name> <helm_repo/helm_chart> --version <chart_version> --namespace <namespace-name> -f <custom-values.yaml-filename>
For example:
helm install ocnssf ocnssf-helm-repo/ocnssf --version 23.4.0 --namespace ocnssf -f ocnssf_custom_values_23.4.0.yaml
Where,
<helm_repo> is the location where the helm charts are stored.
<helm_chart> is the chart to deploy the microservices.
<helm-release-name> is the release name used by the helm command.
Note:
<helm-release-name> must not exceed 20 characters.
<namespace-name> is the deployment namespace used by the helm command.
<custom-values.yaml-filename> is the name of the custom values yaml file (including location).
is the name of the custom values yaml file (including location). - Using local Helm
chart:
Caution:
Do not exit from the helm install command manually. After running the helm install command, it takes some time to install all the services. Do not press "Ctrl+C" to exit from the helm install command, as it may lead to anomalous behavior.
2.2.3 Postinstallation Tasks
This section explains the postinstallation tasks for NSSF.
2.2.3.1 Verifying Installation
To verify the installation:
- Run the following
command:
helm status <helm-release> -n <namespace>
Where,
<helm-release> is the Helm release name of NSSF.
<namespace> is the namespace of the NSSF deployment.
For example:
helm status ocnssf -n ocnssf
In the output, if STATUS is showing as deployed, then the installation is successful.
Sample output:
NAME: ocnssf
LAST DEPLOYED: Fri Sep 18 10:08:03 2020
NAMESPACE: ocnssf
STATUS: deployed
REVISION: 1
- Run the following command to verify if the pods are up and
active:
kubectl get jobs,pods -n <Namespace>
Where,
<Namespace>
is the namespace where NSSF is deployed.For example:
kubectl get jobs,pods -n ocnssf
In the output, the
STATUS
column of all the pods must beRunning
and theREADY
column of all the pods must ben/n
, where n is the number of containers in the pod. - Run the following command to verify if the services are deployed and
active:
kubectl get services -n <Namespace>
For example:
kubectl get services -n ocnssf
Note:
If the installation is unsuccessful or theSTATUS
of all the pods is not in the Running
state, perform the troubleshooting steps provided in the Oracle Communications Cloud Native Core, Network Slice Selection
Function Troubleshooting Guide.
2.2.3.2 Performing Helm Test
This section describes how to perform a sanity check of the NSSF installation through Helm test. The pods to be checked are selected based on the namespace and label selector configured for the Helm test.
Note:
- Helm test can be performed only on Helm 3.
- Helm test expects all the pods of a given microservice to be in the READY state for a successful result. However, the NRF Client Management microservice runs in Active/Standby mode for multi-pod support in the current release. When multi-pod support for the NRF Client Management service is enabled, you may ignore a Helm test failure for the NRF-Client-Management pod.
- Complete the Helm test configurations under the "Helm Test Global
Parameters" section of the
ocnssf_custom_values_23.4.0.yaml
file.
nfName: ocnssf
image:
  name: nf_test
  tag: 23.4.0
  registry: cgbu-cnc-comsvc-release-docker.dockerhub-phx.oci.oraclecorp.com/cgbu-ocudr-nftest
config:
  logLevel: WARN
  timeout: 120 # Beyond this duration helm test will be considered failure
resources:
  - horizontalpodautoscalers/v1
  - deployments/v1
  - configmaps/v1
  - prometheusrules/v1
  - serviceaccounts/v1
  - poddisruptionbudgets/v1
  - roles/v1
  - statefulsets/v1
  - persistentvolumeclaims/v1
  - services/v1
  - rolebindings/v1
complianceEnable: true
For more information on Helm test parameters, see Global Parameters.
- Run the following command to perform the Helm
test:
helm test <release_name> -n <namespace>
Where,
<release_name> is the release name.
<namespace> is the deployment namespace where NSSF is installed.
For example:
helm test ocnssf -n ocnssf
Sample output:
NAME: ocnssf
LAST DEPLOYED: Fri Sep 18 10:08:03 2020
NAMESPACE: ocnssf
STATUS: deployed
REVISION: 1
TEST SUITE: ocnssf-test
Last Started: Fri Sep 18 10:41:25 2020
Last Completed: Fri Sep 18 10:41:34 2020
Phase: Succeeded
NOTES:
# Copyright 2020 (C), Oracle and/or its affiliates. All rights reserved
If the Helm test fails, see Oracle Communications Cloud Native Core, Network Slice Selection Function Troubleshooting Guide.