2 Installing NRF

This chapter provides information about installing Oracle Communications Cloud Native Core, Network Repository Function (NRF) in a cloud native environment using Continuous Delivery Control Server (CDCS) or Command Line Interface (CLI) procedures, including the prerequisites and downloading the deployment package.

CDCS is a centralized server that automates NRF deployment processes such as installation, upgrade, and rollback, whereas CLI provides an interface to run the commands required to install, upgrade, and roll back NRF.

Note:

NRF supports fresh installation, and it can also be upgraded from 23.3.x and 23.4.x to 23.4.6. For more information on how to upgrade NRF, see the Upgrading NRF section.
NRF installation comprises prerequisites, preinstallation, installation, and postinstallation tasks. You must perform the NRF installation tasks in the sequence outlined in the Installation Sequence section.

2.1 Prerequisites

Before installing and configuring NRF, ensure that the following prerequisites are met.

2.1.1 Software Requirements

This section lists the software that must be installed before installing NRF:

Table 2-2 Preinstalled Software

Software Version
Kubernetes 1.27.x, 1.26.x, and 1.25.x
Helm 3.12.3
Podman 4.4.1
To check the versions for the preinstalled software in the cloud native environment, run the following commands:
kubectl version
helm version
podman version

By default, the following software is available in CNE 23.4.0. If you are deploying NRF in any other cloud native environment, this additional software must be installed before installing NRF. To check the installed software, run the following command:

helm ls -A
The list of additional software items, along with the supported versions and usage, is provided in the following table:

Table 2-3 Additional Software

Software Version Required For
Opensearch 2.3.0 Logging
OpenSearch Dashboard 2.3.0 Logging
Fluentd OpenSearch 1.16.2 Logging
Kyverno 1.9.0 Logging
FluentBit 1.9.4 Logging
Grafana 9.5.3 Metrics
Prometheus 2.44.0 Metrics
MetalLB 0.13.11 External IP
Jaeger 1.45.0 Tracing

2.1.2 Environment Setup Requirements

This section describes the environment setup requirements for installing NRF.

2.1.2.1 Client Machine Requirement

This section describes the requirements for the client machine, that is, the machine used to run deployment commands.

The client machine should have:
  • Helm repository configured.
  • network access to the Helm repository and Docker image repository.
  • network access to the Kubernetes cluster.
  • the required environment settings to run docker or podman and kubectl commands. The environment must have the privileges to create a namespace in the Kubernetes cluster.
  • Helm client installed with the push plugin. Configure the environment in such a manner that the helm install command deploys the software in the Kubernetes cluster.
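The following is an optional, minimal sketch (not part of the official procedure) showing how you might verify these prerequisites from the client machine; adjust the commands to your environment:

# Verify the configured Helm repositories and installed Helm plugins (including the push plugin)
helm repo list
helm plugin list

# Verify network access to the Kubernetes cluster and the privilege to create namespaces
kubectl cluster-info
kubectl auth can-i create namespaces

# Verify that the container tooling is available
podman version || docker version
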
2.1.2.2 Network Access Requirement

The Kubernetes cluster hosts must have network access to the following:

  • Local Helm repository: It contains the NRF Helm charts.
    To check if the Kubernetes cluster hosts can access the local Helm repository, run the following command:
    helm repo update
  • Local Docker image repository: It contains the NRF Docker images.
    To check if the Kubernetes cluster hosts can access the local Docker image repository, pull any image with an image-tag using either of the following commands:
    podman pull <podman-repo>/<image-name>:<image-tag>
    docker pull <docker-repo>/<image-name>:<image-tag>
    Where:
    • <podman-repo> is the IP address or host name of the Podman repository.
    • <docker-repo> is the IP address or host name of the Docker repository.
    • <image-name> is the Docker image name.
    • <image-tag> is the tag assigned to the Docker image used for the NRF pod.
    For example:

    podman pull bumblebee-bastion-1:5000/occne/ocnrf/ocnrf-nfaccesstoken:23.4.6

    docker pull bumblebee-bastion-1:5000/occne/ocnrf/ocnrf-nfaccesstoken:23.4.6

Note:

Run the kubectl and helm commands on a system appropriate to the deployment infrastructure. For instance, they can be run on a client machine such as a VM, server, or local desktop.
2.1.2.3 Server or Space Requirement

For information about the server or space requirements, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

2.1.2.4 CNE Requirement

This section is applicable only if you are installing NRF on Cloud Native Environment (CNE). NRF supports CNE 23.4.x, 23.3.x, and 23.2.x.

To check the CNE version, run the following command:
echo $OCCNE_VERSION

For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

2.1.2.5 cnDBTier Requirement

NRF supports cnDBTier 23.4.x, 23.3.x, and 23.2.x. cnDBTier must be configured and running before installing NRF. For more information about cnDBTier installation procedure, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

For more information about the resource requirement, see cnDBTier Resource Requirement.
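As an optional check (a sketch, assuming the cnDBTier namespace is named occne-cndbtier), you can confirm that the cnDBTier pods are up and the replication services are running before installing NRF:

kubectl get pods -n occne-cndbtier
kubectl get svc -n occne-cndbtier | grep -i repl
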

Note:

In georedundant deployment, each site must have a dedicated cnDBTier.

Recommended cnDBTier Configurations

The following are the modified or added parameters for cnDBTier:

Table 2-4 cnDBTier Parameters

Parameter Modified or Added Default Value Recommended Value
global.ndbconfigurations.ndb.NoOfFragmentLogFiles Modified 128 32
global.ndb.datamemory Modified 12G 2G
ndbconfigurations.ndb.MaxNoOfExecutionThreads Modified 8 6
global.api.binlogpurgetimer Added NA 20000
global.api.binlogpurgesizecheckpercentage Added NA 10
global.api.binlogretentionsizepercentage Added NA 90
api.logrotate.rotateSize Added NA 50
api.logrotate.rotateQueryLogSize Added NA 200
api.logrotate.checkInterval Added NA 100
api.logrotate.maxRotateCounter Added NA 2
api.logrotate.maxRotateQueryLogCounter Added NA 5
global.additionalndbconfigurations.mysqld.ndb_batch_size Modified 2000000 2147483648
global.additionalndbconfigurations.mysqld.ndb_blob_write_batch_bytes Modified 2000000 1073741824
global.additionalndbconfigurations.ndb.HeartbeatIntervalDbDb Modified 500 1250

Note: The value of this parameter should be modified in two steps. For more details about modifying the value of this parameter, see Upgrading NRF.

global.ndbconfigurations.ndb.MaxNoOfOrderedIndexes Modified 1024 3630
global.additionalndbconfigurations.mysqld.ndb_allow_copying_alter_table Modified OFF ON
global.additionalndbconfigurations.mysqld.ndb_eventbuffer_max_alloc Modified 0 1610612736
global.storageClassName Modified occne-dbtier-sc standard
db-replication-svc.useClusterIpForReplication Modified false true
db-backup-manager-svc.scheduler.cronjobExpression Modified "0 0 */7 * *" "0 0 * * *"

Note:

The values of certain attributes mentioned above cannot be changed as part of the cnDBTier software upgrade; such changes must be performed as separate upgrades. For more information, see the "Rolling Back cnDBTier" section in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
2.1.2.6 OCCM Requirements

NRF supports OCCM 23.4.x. To support automated certificate lifecycle management, NRF integrates with Oracle Communications Cloud Native Core, Certificate Management (OCCM) in compliance with 3GPP security recommendations. For more information about OCCM in NRF, see the "Support for Automated Certificate Lifecycle Management" section in Oracle Communications Cloud Native Core, Network Repository Function User Guide.

For more information about OCCM, see the following guides:

  • Oracle Communications Cloud Native Core, Certificate Manager Installation, Upgrade, and Fault Recovery Guide
  • Oracle Communications Cloud Native Core, Certificate Manager User Guide
2.1.2.7 OSO Requirement

NRF supports Operations Services Overlay (OSO) 23.4.x to provide common operations services (Prometheus and components such as Alertmanager and Pushgateway) on a Kubernetes cluster that does not already have these common services. For more information about OSO installation, see Oracle Communications Cloud Native Core, Operations Services Overlay Installation and Upgrade Guide.

2.1.3 Resource Requirements

This section lists the resource requirements to install and run NRF.

Note:

The performance and capacity of the NRF system may vary based on the call model, Feature or Interface configuration, and underlying CNE and hardware environment.
2.1.3.1 NRF Resource Requirement

This section provides the resource requirement for NRF deployment.

2.1.3.1.1 NRF Services

Table 2-5 NRF Services Resource Requirements

Service Name Pod Replica # CPU/Pod Memory/Pod (in G) Ephemeral Storage
Min Max Min Max Min Max Min (Mi) Max (Gi)
Helm test 1 1 1 2 1 2 78.1 1
<helm-release-name>-nfregistration 2 2 2 2 3 3 78.1 1
<helm-release-name>-nfdiscovery 2 60 4 4 3 3 78.1 2
<helm-release-name>-nfsubscription 2 2 2 2 3 3 78.1 1
<helm-release-name>-nrfauditor 2 2 2 2 3 3 78.1 1
<helm-release-name>-nrfconfiguration 1 1 2 2 2 2 78.1 1
<helm-release-name>-nfaccesstoken 2 2 2 2 2 2 78.1 1
<helm-release-name>-nrfartisan 1 1 2 2 2 2 78.1 1
<helm-release-name>-nrfcachedata 2 2 4 4 3 3 78.1 1
<helm-release-name>-ingressgateway 2 27 4 4 4 4 78.1 1
<helm-release-name>-egressgateway 2 19 4 4 4 4 78.1 1
<helm-release-name>-alternate-route 2 2 2 2 4 4 78.1 1
<helm-release-name>-appinfo 2 2 1 1 1 1 78.1 1
<helm-release-name>-perfinfo 2 2 1 1 1 1 78.1 1

Note:

If you enable the Message Feed feature at Ingress Gateway and Egress Gateway, approximately 33% of the pod capacity is impacted.
Where:
  • <helm-release-name> is prefixed in each microservice name. For example, if helm-release-name is "ocnrf", then nfregistration microservice name is "ocnrf-nfregistration".
  • CPU Limit or Request Per Pod and Memory Limit or Request Per Pod must be added as additional resources for Ingress Gateway and Egress Gateway pods if TLS needs to be enabled.

    init-service containers are not counted because they terminate after initialization completes.
  • Helm Hooks Jobs: These are pre and post jobs that are invoked during installation, upgrade, rollback, and uninstallation of the deployment. They are short-lived jobs that terminate after the procedure completes. They are not part of the active deployment resources, but must be considered during installation, upgrade, rollback, and uninstallation procedures.

  • Helm Test Job: This job runs on demand when the helm test command is initiated. It runs the Helm test and stops after completion. These are short-lived jobs that terminate after the test completes. They are not part of the active deployment resources, but are considered only during Helm test procedures.
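
For reference, the Helm test job can be initiated as follows (a sketch, assuming the release name ocnrf deployed in the ocnrf namespace):

helm test ocnrf -n ocnrf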

2.1.3.1.2 Upgrade

Table 2-6 Upgrade Resource Requirements

Service Name Pod replica CPU/Pod Memory/Pod (in Gi)
Min Max Min Max Min Max
Helm test 0 0 0 0 0 0
<helm-release-name>-nfregistration 1 1 2 2 3 3
<helm-release-name>-nfdiscovery 1 11 4 4 3 3
<helm-release-name>-nfsubscription 1 1 2 2 3 3
<helm-release-name>-nrfauditor 1 1 2 2 3 3
<helm-release-name>-nrfconfiguration 1 1 2 2 2 2
<helm-release-name>-nfaccesstoken 1 1 2 2 2 2
<helm-release-name>-nrfartisan 1 1 2 2 2 2
<helm-release-name>-nrfcachedata 1 1 4 4 3 3
<helm-release-name>-ingressgateway 1 5 4 4 4 4
<helm-release-name>-egressgateway 1 3 4 4 4 4
<helm-release-name>-alternate-route 1 1 2 2 4 4
<helm-release-name>-appinfo 1 1 1 1 1 1
<helm-release-name>-perfinfo 1 1 1 1 1 1
2.1.3.1.3 Common Services Container
Ingress Gateway and Egress Gateway services use an init container to fetch private keys, the CA root certificate for TLS, and other NRF certificates during startup.

Table 2-7 Resources for Containers

Container Name CPU Request and Limit Per Container Memory Request and Limit Per Container Kubernetes Init Container (Job)
init-service 1 cpu 1 gb Yes
2.1.3.1.4 Service Mesh Sidecar

NRF leverages the platform service mesh (for example, Aspen Service Mesh) for all internal and external TLS communication. If ASM sidecar injection is enabled during NRF deployment or upgrade, the sidecar container is injected into each pod (or selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists.
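
For example, in Istio-based meshes, namespace-level sidecar injection is often enabled with a label such as the following; this is only an illustration, as the actual ASM injection mechanism is controlled through the options chosen during NRF deployment or upgrade:

kubectl label namespace ocnrf istio-injection=enabled
kubectl get namespace ocnrf --show-labels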

Table 2-8 Service Mesh Sidecar Resource Requirements

Service Name CPU/Pod Memory/Pod (in G) Concurrency
Min Max Min Max
Helm test 0 0 0 0 NA
<helm-release-name>-nfregistration 2 2 3 3 2
<helm-release-name>-nfdiscovery 2 2 3 3 4
<helm-release-name>-nfsubscription 2 2 3 3 2
<helm-release-name>-nrfauditor 2 2 3 3 2
<helm-release-name>-nrfconfiguration 2 2 3 3 2
<helm-release-name>-nfaccesstoken 2 2 3 3 2
<helm-release-name>-nrfartisan 2 2 3 3 2
<helm-release-name>-nrfcachedata 2 2 3 3 2
<helm-release-name>-ingressgateway 4 4 3 3 8
<helm-release-name>-egressgateway 4 4 3 3 8
<helm-release-name>-alternate-route 2 2 3 3 2
<helm-release-name>-appinfo 2 2 3 3 2
<helm-release-name>-perfinfo 2 2 3 3 2
2.1.3.1.5 Debug Tool Container

The Debug Tool container provides third-party troubleshooting tools for debugging runtime issues in both lab and production environments. If Debug Tool container injection is enabled during NRF deployment or upgrade, this container is injected into each NRF pod (or selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists. For more information about the Debug Tool, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.

Table 2-9 Debug Tool Container Resource Requirements

Service Name CPU/Pod Memory/Pod (in G) Ephemeral Storage
Min Max Min Max Min (Mi) Max (Gi)
Helm test 0 0 0 0 512 0.5
<helm-release-name>-nfregistration 0.5 0.5 1 2 512 0.5
<helm-release-name>-nfdiscovery 0.5 0.5 1 2 512 0.5
<helm-release-name>-nfsubscription 0.5 0.5 1 2 512 0.5
<helm-release-name>-nrfauditor 0.5 0.5 1 2 512 0.5
<helm-release-name>-nrfconfiguration 0.5 0.5 1 2 512 0.5
<helm-release-name>-nfaccesstoken 0.5 0.5 1 2 512 0.5
<helm-release-name>-nrfartisan 0.5 0.5 1 2 512 0.5
<helm-release-name>-nrfcachedata 0.5 0.5 1 2 512 0.5
<helm-release-name>-ingressgateway 0.5 0.5 1 2 512 0.5
<helm-release-name>-egressgateway 0.5 0.5 1 2 512 0.5
<helm-release-name>-alternate-route 0.5 0.5 1 2 512 0.5
<helm-release-name>-appinfo 0.5 0.5 1 2 512 0.5
<helm-release-name>-perfinfo 0.5 0.5 1 2 512 0.5

Note:

<helm-release-name> is the Helm release name. For example, if helm-release-name is "ocnrf", then nfsubscription microservice name will be "ocnrf-nfsubscription".

2.1.3.1.6 NRF Hooks

Table 2-10 NRF Hooks Resource Requirements

Service Name CPU/Pod Memory/Pod (in G) Ephemeral Storage
Min Max Min Max Min (Mi) Max (Gi)
<helm-release-name>-nfregistration-pre-install 1 1 1 2 78.1 1
<helm-release-name>-nfregistration-post-install 1 1 1 2 78.1 1
<helm-release-name>-nfregistration-pre-upgrade 1 1 1 2 78.1 1
<helm-release-name>-nfregistration-post-upgrade 1 1 1 2 78.1 1
<helm-release-name>-nfregistration-pre-rollback 1 1 1 2 78.1 1
<helm-release-name>-nfregistration-post-rollback 1 1 1 2 78.1 1
<helm-release-name>-nfregistration-pre-delete 1 1 1 2 78.1 1
<helm-release-name>-nfregistration-post-delete 1 1 1 2 78.1 1
<helm-release-name>-nfsubscription-pre-install 1 1 1 2 78.1 1
<helm-release-name>-nfsubscription-post-install 1 1 1 2 78.1 1
<helm-release-name>-nfsubscription-pre-upgrade 1 1 1 2 78.1 1
<helm-release-name>-nfsubscription-post-upgrade 1 1 1 2 78.1 1
<helm-release-name>-nfsubscription-pre-rollback 1 1 1 2 78.1 1
<helm-release-name>-nfsubscription-post-rollback 1 1 1 2 78.1 1
<helm-release-name>-nfsubscription-pre-delete 1 1 1 2 78.1 1
<helm-release-name>-nfsubscription-post-delete 1 1 1 2 78.1 1
<helm-release-name>-nrfAuditor-pre-install 1 1 1 2 78.1 1
<helm-release-name>-nrfAuditor-post-install 1 1 1 2 78.1 1
<helm-release-name>-nrfAuditor-pre-upgrade 1 1 1 2 78.1 1
<helm-release-name>-nrfAuditor-post-upgrade 1 1 1 2 78.1 1
<helm-release-name>-nrfAuditor-pre-rollback 1 1 1 2 78.1 1
<helm-release-name>-nrfAuditor-post-rollback 1 1 1 2 78.1 1
<helm-release-name>-nrfAuditor-pre-delete 1 1 1 2 78.1 1
<helm-release-name>-nrfAuditor-post-delete 1 1 1 2 78.1 1
<helm-release-name>-nrfconfiguration-pre-install 1 1 1 2 78.1 1
<helm-release-name>-nrfconfiguration-post-install 1 1 1 2 78.1 1
<helm-release-name>-nrfconfiguration-pre-upgrade 1 1 1 2 78.1 1
<helm-release-name>-nrfconfiguration-post-upgrade 1 1 1 2 78.1 1
<helm-release-name>-nrfconfiguration-pre-rollback 1 1 1 2 78.1 1
<helm-release-name>-nrfconfiguration-post-rollback 1 1 1 2 78.1 1
<helm-release-name>-nrfconfiguration-pre-delete 1 1 1 2 78.1 1
<helm-release-name>-nrfconfiguration-post-delete 1 1 1 2 78.1 1
<helm-release-name>-ingressgateway-pre-install 1 1 1 2 78.1 1
<helm-release-name>-ingressgateway-post-install 1 1 1 2 78.1 1
<helm-release-name>-ingressgateway-pre-upgrade 1 1 1 2 78.1 1
<helm-release-name>-ingressgateway-post-upgrade 1 1 1 2 78.1 1
<helm-release-name>-ingressgateway-pre-rollback 1 1 1 2 78.1 1
<helm-release-name>-ingressgateway-post-rollback 1 1 1 2 78.1 1
<helm-release-name>-ingressgateway-pre-delete 1 1 1 2 78.1 1
<helm-release-name>-ingressgateway-post-delete 1 1 1 2 78.1 1
<helm-release-name>-egressgateway-pre-install 1 1 1 2 78.1 1
<helm-release-name>-egressgateway-post-install 1 1 1 2 78.1 1
<helm-release-name>-egressgateway-pre-upgrade 1 1 1 2 78.1 1
<helm-release-name>-egressgateway-post-upgrade 1 1 1 2 78.1 1
<helm-release-name>-egressgateway-pre-rollback 1 1 1 2 78.1 1
<helm-release-name>-egressgateway-post-rollback 1 1 1 2 78.1 1
<helm-release-name>-egressgateway-pre-delete 1 1 1 2 78.1 1
<helm-release-name>-egressgateway-post-delete 1 1 1 2 78.1 1
<helm-release-name>-alternate-route-pre-install 1 1 1 2 78.1 1
<helm-release-name>-alternate-route-post-install 1 1 1 2 78.1 1
<helm-release-name>-alternate-route-pre-upgrade 1 1 1 2 78.1 1
<helm-release-name>-alternate-route-post-upgrade 1 1 1 2 78.1 1
<helm-release-name>-alternate-route-pre-rollback 1 1 1 2 78.1 1
<helm-release-name>-alternate-route-post-rollback 1 1 1 2 78.1 1
<helm-release-name>-alternate-route-pre-delete 1 1 1 2 78.1 1
<helm-release-name>-alternate-route-post-delete 1 1 1 2 78.1 1
<helm-release-name>-appinfo-pre-install 1 1 1 2 78.1 1
<helm-release-name>-appinfo-post-install 1 1 1 2 78.1 1
<helm-release-name>-appinfo-pre-upgrade 1 1 1 2 78.1 1
<helm-release-name>-appinfo-post-upgrade 1 1 1 2 78.1 1
<helm-release-name>-appinfo-pre-rollback 1 1 1 2 78.1 1
<helm-release-name>-appinfo-post-rollback 1 1 1 2 78.1 1
<helm-release-name>-appinfo-pre-delete 1 1 1 2 78.1 1
<helm-release-name>-appinfo-post-delete 1 1 1 2 78.1 1
<helm-release-name>-perfinfo-pre-install 1 1 1 2 78.1 1
<helm-release-name>-perfinfo-post-install 1 1 1 2 78.1 1
<helm-release-name>-perfinfo-pre-upgrade 1 1 1 2 78.1 1
<helm-release-name>-perfinfo-post-upgrade 1 1 1 2 78.1 1
<helm-release-name>-perfinfo-pre-rollback 1 1 1 2 78.1 1
<helm-release-name>-perfinfo-post-rollback 1 1 1 2 78.1 1
<helm-release-name>-perfinfo-pre-delete 1 1 1 2 78.1 1
<helm-release-name>-perfinfo-post-delete 1 1 1 2 78.1 1

Where,

<helm-release-name> is prefixed in each microservice name. For example, if helm-release-name is "ocnrf", then nfregistration microservice name is "ocnrf-nfregistration".

2.1.3.1.7 Total Ephemeral Resources

Table 2-11 Total Ephemeral Resources

Service Name Ephemeral Storage
Min (Mi) Max (Gi)
Helm test 590.1 1.5
<helm-release-name>-nfregistration 1770.3 4.5
<helm-release-name>-nfdiscovery 1770.3 205
<helm-release-name>-nfsubscription 1770.3 4.5
<helm-release-name>-nrfauditor 1770.3 4.5
<helm-release-name>-nrfconfiguration 1180.2 3
<helm-release-name>-nfaccesstoken 1770.3 4.5
<helm-release-name>-nrfartisan 1180.2 3
<helm-release-name>-nrfcachedata 1770.3 4.5
<helm-release-name>-ingressgateway 1770.3 51
<helm-release-name>-egressgateway 1770.3 28.5
<helm-release-name>-alternate-route 1770.3 4.5
<helm-release-name>-appinfo 1770.3 4.5
<helm-release-name>-perfinfo 1770.3 4.5

Where: <helm-release-name> is prefixed in each microservice name. For example, if helm-release-name is "ocnrf", then nfregistration microservice name is "ocnrf-nfregistration".

2.1.3.2 cnDBTier Resource Requirement

This section provides the cnDBTier resource requirement for NRF deployment.

2.1.3.2.1 cnDBTier Services

Table 2-12 cnDBTier Services Resource Requirements

Service Name Pod Replica # CPU/Pod Memory/Pod (in Gi) PVC Size (in Gi) Ephemeral Storage
Min Min Max Min Max PVC1 PVC2 Min (Mi) Max (Gi)
MGMT (ndbmgmd) 2 4 4 8 10 15 NA 90 1
DB (ndbmtd) 4 4 4 5 5 4 5 90 1
SQL (ndbmysqld) 2 4 4 11 11 13 NA 90 1
SQL (ndbappmysqld) 2 2 2 3 3 1 NA 90 1
Monitor Service (db-monitor-svc) 1 4 4 4 4 NA NA 90 1
Backup Manager Service (db-backup-manager-svc) 1 0.1 0.1 130 (Mi) 130 (Mi) NA NA 90 1
Replication Service - Leader 1 2 2 12 12 4 NA 90 1
Replication Service - Other 0 0.6 1 1 2 NA NA 90 1

Note:

  • Node profiles in the above tables are for two-site replication cnDBTier clusters. Modify the ndbmysqld and Replication Service pods based on the number of georeplication sites.
  • If any service requires vertical scaling of its PVC, see the respective subsection in the "Vertical Scaling" section in Oracle Communications Cloud Native Core, cnDBTier User Guide.
  • PVC shrinking (downsizing) is not supported. It is recommended to retain the existing vertically scaled PVC sizes, even if cnDBTier is rolled back to a previous release.
2.1.3.2.2 cnDBTier Sidecars
The following table indicates the sidecars for cnDBTier services.

Table 2-13 Sidecars per cnDBTier Service

Service Name init-sidecar db-executor-svc init-discover-sql-ips db-infra-monitor-svc
MGMT (ndbmgmd) No No No Yes
DB (ndbmtd) No Yes No Yes
SQL (ndbmysqld) Yes No No Yes
SQL (ndbappmysqld) Yes No No Yes
Monitor Service (db-monitor-svc) No No No No
Backup Manager Service (db-backup-manager-svc) No No No No
Replication Service No No No No

Table 2-14 cnDBTier Additional Containers

Sidecar CPU/Pod Memory/Pod (in Gi) PVC Size (in Gi) Ephemeral Storage
Min Max Min Max PVC1 PVC2 Min (Mi) Max (Gi)
db-executor-svc 1 1 2 2 4 NA 90 1
init-sidecar 0.1 0.1 0.25 0.25 NA NA 90 1
init-discover-sql-ips 0.2 0.2 0.5 0.5 NA NA 90 1
db-infra-monitor-svc 0.1 0.1 0.25 0.25 NA NA 90 1
2.1.3.2.3 Service Mesh Sidecar

Table 2-15 Service Mesh Sidecar

Service Name CPU Memory (in Gi) Concurrency
Min Max Min Max
MGMT (ndbmgmd) 2 2 1 1 8
DB (ndbmtd) 2 2 1 1 8
SQL (ndbmysqld) 2 2 1 1 8
SQL (ndbappmysqld) 2 2 1 1 8
Monitor Service (db-monitor-svc) 2 2 1 1 2
Backup Manager Service (db-backup-manager-svc) 2 2 1 1 2
Replication Service-Leader 2 2 1 1 2
Replication Service-Other 2 2 1 1 2
2.1.3.2.4 Total Ephemeral Resources

Table 2-16 Total Ephemeral Resources

Service Name Ephemeral Storage
Min(Mi) Max(Gi)
MGMT (ndbmgmd) 1204 3
DB (ndbmtd) 2408 6
SQL (ndbmysqld) 1204 3
SQL (ndbappmysqld) 1204 3
Monitor Service (db-monitor-svc) 602 1.5
Backup Manager Service (db-backup-manager-svc) 602 1.5
Replication Service-Leader 602 1.5
Replication Service-Other 0 0

Note:

Node profiles in the above tables are for two-site replication cnDBTier clusters. Modify the ndbmysqld and Replication Service pods based on the number of georeplication sites.

2.2 Installation Sequence

This section describes preinstallation, installation, and postinstallation tasks for NRF.

Perform these tasks after completing the Prerequisites, in the sequence outlined in the following table, for the CDCS and CLI installation methods as applicable.

Note:

This section does not provide instructions to download the NRF package and install NRF using CDCS. For more information, see Oracle Communications CD Control Server User Guide.

Table 2-17 NRF Installation Sequence

Installation Sequence Applicable for CDCS Applicable for CLI
Preinstallation Tasks Yes Yes
Installation Tasks See Oracle Communications CD Control Server User Guide Yes
Postinstallation Tasks Yes Yes

2.2.1 Preinstallation Tasks

To install NRF through CDCS and CLI methods, perform the tasks described in this section.

2.2.1.1 Downloading the NRF package
To download the NRF package from My Oracle Support, perform the following procedure:
  1. Log in to My Oracle Support using your login credentials.
  2. Click the Patches & Updates tab to locate the patch.
  3. In the Patch Search console, click the Product or Family (Advanced) option.
  4. In the Product field, enter Oracle Communications Cloud Native Core - 5G and select the product from the Product drop-down list.
  5. From the Release drop-down list, select Oracle Communications Cloud Native Core Network Repository Function <release_number>.

    Where, <release_number> indicates the required release number of NRF.

  6. Click Search.

    The Patch Advanced Search Results list appears.

  7. Select the required patch from the list.

    The Patch Details window appears.

  8. Click Download.

    The File Download window appears.

  9. Click the <p********>_<release_number>_Tekelec.zip file to download the release package.

    Where, <p********> is the MOS patch number and <release_number> is the release number of NRF.

2.2.1.2 Pushing the Images to Customer Docker Registry

The NRF deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.

The following table lists the Docker images of NRF.

Table 2-18 NRF Images

Services Image Tag
<helm-release-name>-nfregistration ocnrf-nfregistration 23.4.6
<helm-release-name>-nfsubscription ocnrf-nfsubscription 23.4.6
<helm-release-name>-nfdiscovery ocnrf-nfdiscovery 23.4.6
<helm-release-name>-nrfauditor ocnrf-nrfauditor 23.4.6
<helm-release-name>-nrfconfiguration ocnrf-nrfconfiguration 23.4.6
<helm-release-name>-appinfo oc-app-info 23.4.12
<helm-release-name>-nfaccesstoken ocnrf-nfaccesstoken 23.4.6
<helm-release-name>-nrfartisan ocnrf-nrfartisan 23.4.6
<helm-release-name>-alternate-route alternate_route 23.4.11
<helm-release-name>-performance oc-perf-info 23.4.12
<helm-release-name>-egressgateway configurationinit 23.4.11
ocegress_gateway 23.4.11
<helm-release-name>-ingressgateway configurationinit 23.4.11
ocingress_gateway 23.4.11

Note:

Ingress Gateway and Egress Gateway use the same configurationinit images.

Apart from the above images, the following additional images are available in ocnrf-images-<release_number>.tar.

Table 2-19 Additional Images

Image Tag
ocdebug-tools 23.4.3
helm-test 23.4.3
common_config_hook 23.4.11

To push the images to the registry:

  1. Navigate to the location where you want to install NRF. Unzip the NRF release package (<p********>_<release_number>_Tekelec.zip) to retrieve the following CSAR package.

    The NRF package is as follows:

    <ReleaseName>_csar_<Releasenumber>.zip

    Where,

    ReleaseName is a name that is used to track this installation instance.

    Releasenumber is the release number.

    For example, ocnrf_csar_23_4_6_0_0.zip
  2. Unzip the NRF CSAR package to retrieve the NRF image tar files:
    unzip <ReleaseName>_csar_<Releasenumber>.zip
    For example:
    unzip ocnrf_csar_23_4_6_0_0.zip
    The directory consists of the following:
    .
    ├── Definitions
    │   ├── ocnrf_cne_compatibility.yaml
    │   └── ocnrf.yaml
    ├── Files
    │   ├── alternate_route-23.4.11.tar
    │   ├── ChangeLog.txt
    │   ├── common_config_hook-23.4.11.tar
    │   ├── configurationinit-23.4.11.tar
    │   ├── Helm
    │   │   ├── ocnrf-23.4.6.tgz
    │   │   ├── ocnrf-network-policy-23.4.6.tgz
    │   │   └── ocnrf-servicemesh-config-23.4.6.tgz
    │   ├── helm-test-23.4.3.tar
    │   ├── Licenses
    │   ├── oc-app-info-23.4.12.tar
    │   ├── ocdebug-tools-23.4.3.tar
    │   ├── ocegress_gateway-23.4.11.tar
    │   ├── ocingress_gateway-23.4.11.tar
    │   ├── ocnrf-nfaccesstoken-23.4.6.tar
    │   ├── ocnrf-nfdiscovery-23.4.6.tar
    │   ├── ocnrf-nfregistration-23.4.6.tar
    │   ├── ocnrf-nfsubscription-23.4.6.tar
    │   ├── ocnrf-nrfartisan-23.4.6.tar
    │   ├── ocnrf-nrfauditor-23.4.6.tar
    │   ├── ocnrf-nrfconfiguration-23.4.6.tar
    │   ├── oc-perf-info-23.4.12.tar
    │   ├── Oracle.cert
    │   └── Tests
    ├── ocnrf.mf
    ├── Scripts
    │   ├── ocnrf_alertrules_23.4.6.yaml
    │   ├── ocnrf_alertrules_promha_23.4.6.yaml
    │   ├── ocnrf_configuration_openapi_23.4.6.yaml
    │   ├── ocnrf_custom_values_23.4.6.yaml
    │   ├── ocnrf_dashboard_23.4.6.json
    │   ├── ocnrf_dashboard_promha_23.4.6.yaml
    │   ├── ocnrf_dbresource_2site.sql
    │   ├── ocnrf_dbresource_3site.sql
    │   ├── ocnrf_dbresource_4site.sql
    │   ├── ocnrf_dbresource_standalone.sql
    │   ├── ocnrf_dbtier_23.4.6_custom_values_23.4.6.yaml
    │   ├── ocnrf_mib_23.4.6.mib
    │   ├── ocnrf_mib_tc_23.4.6.mib
    │   ├── ocnrf_network_policy_custom_values_23.4.0.yaml
    │   ├── ocnrf_servicemesh_config_custom_values_23.4.0.yaml
    │   └── toplevel_23.4.6.mib
    └── TOSCA-Metadata
        └── TOSCA.meta
    
  3. Open the Files folder and run one of the following commands to load the NRF images:
    podman load --input /IMAGE_PATH/<microservicename-releasenumber>.tar
    docker load --input /IMAGE_PATH/<microservicename-releasenumber>.tar

    Where,

    IMAGE_PATH is the location where the NRF docker image tar file is archived.

    Sample command:
    podman load --input /IMAGE_PATH/ocnrf-nfregistration-23.4.6.tar
  4. Run one of the following commands to verify that the images are loaded:
    podman images
    docker images
    Verify that the list of images in the output matches the list of images in Table 2-18. If the lists do not match, reload the image tar file.
    Sample output:
    podman images
    docker.io/ocnrf/ocnrf-nrfartisan        23.4.6  8518be6dad6e  8m42s ago  703 MB
    docker.io/ocnrf/ocnrf-nfaccesstoken     23.4.6 5e8d766476ec  8m42s ago    650 MB
    docker.io/ocnrf/ocnrf-nrfconfiguration  23.4.6 d6a39a514897  8m42s ago    653 MB
    docker.io/ocnrf/ocnrf-nrfauditor        23.4.6 5bbde830092e  8m42s ago    650 MB
    docker.io/ocnrf/ocnrf-nfdiscovery       23.4.6 0df8d9401674  8m42s ago    650 MB
    docker.io/ocnrf/ocnrf-nfsubscription    23.4.6 a4b04fe9a0b0  8m42s ago    650 MB
    docker.io/ocnrf/ocnrf-nfregistration    23.4.6 6ea2ccd0f568  8m42s ago    650 MB
    docker.io/ocnrf/oc-app-info             23.4.12 9d03147abf17  8m42s ago   486 MB
    docker.io/ocnrf/ocingress_gateway       23.4.11 879743d2a454  8m42s ago   605 MB
    docker.io/ocnrf/ocegress_gateway        23.4.11 b580eb8ded9b  8m42s ago   596 MB
    docker.io/ocnrf/common_config_hook      23.4.11 85a04360b8aa  8m42s ago   561 MB
    docker.io/ocnrf/alternate_route         23.4.11 3684cf6bc379  8m42s ago    546 MB
    docker.io/ocnrf/configurationinit       23.4.11 e791e48c4e7d  8m42s ago  559 MB
    docker.io/ocnrf/ocdebug-tools           23.4.3   ab0fd4202122  8m42s ago  592 MB
    docker.io/ocnrf/helm-test               23.4.3  d9b90fe68848  8m42s ago  549 MB
    docker.io/ocnrf/oc-perf-info            23.4.12  f8c4e7d18928  8m42s ago  600 MB
    
  5. Run one of the following commands to tag the Docker images for the Docker registry:
    podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    Where,

    image-name is the NRF docker image name in the tar file.

    image-tag is the release number.

    docker-repo is the Docker registry address, including the port number if the registry has a port attached. This is the repository where the images are stored.

    Sample command:
    docker tag ocnrf/ocnrf-nfaccesstoken:23.4.6 bumblebee-bastion-1:5000/occne/ocnrf/ocnrf-nfaccesstoken:23.4.6

    Note:

    Perform this step for all the docker images.
  6. Run the following command to push the image to docker registry:
    docker push <docker-repo>/<image-name>:<image-tag> 
    Sample command:
    docker push bumblebee-bastion-1:5000/occne/ocnrf/ocnrf-nfaccesstoken:23.4.6

    Note:

    • Perform this step for all the docker images.
    • It is recommended to configure the Docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the docker push command may fail.
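
    As a convenience, the tag and push steps can be scripted for all images. The following is a minimal sketch, assuming Docker and the registry used in the earlier examples; adjust the registry address and trim or extend the image list to match Table 2-18 and Table 2-19:

    REGISTRY=bumblebee-bastion-1:5000/occne
    for IMAGE in \
        ocnrf/ocnrf-nfregistration:23.4.6 ocnrf/ocnrf-nfsubscription:23.4.6 \
        ocnrf/ocnrf-nfdiscovery:23.4.6 ocnrf/ocnrf-nrfauditor:23.4.6 \
        ocnrf/ocnrf-nrfconfiguration:23.4.6 ocnrf/ocnrf-nfaccesstoken:23.4.6 \
        ocnrf/ocnrf-nrfartisan:23.4.6 ocnrf/oc-app-info:23.4.12 \
        ocnrf/oc-perf-info:23.4.12 ocnrf/alternate_route:23.4.11 \
        ocnrf/configurationinit:23.4.11 ocnrf/ocingress_gateway:23.4.11 \
        ocnrf/ocegress_gateway:23.4.11 ocnrf/common_config_hook:23.4.11 \
        ocnrf/ocdebug-tools:23.4.3 ocnrf/helm-test:23.4.3
    do
        # Tag the locally loaded image for the customer registry and push it
        docker tag ${IMAGE} ${REGISTRY}/${IMAGE}
        docker push ${REGISTRY}/${IMAGE}
    done
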
2.2.1.3 Verifying and Creating Namespace

This section explains how to verify and create a namespace in the system.

Note:

This is a mandatory procedure. Run it before proceeding further with the installation. The namespace created or verified in this procedure is an input for the subsequent procedures.
  1. Run the following command to verify if the required namespace already exists in the system:
    kubectl get namespace

    In the output of the above command, if the namespace exists, continue with the Creating Service Account, Role, and RoleBinding section.

  2. If the required namespace is unavailable, create the namespace using the following command:
    kubectl create namespace <required namespace>

    Where,

    <required namespace> is the name of the namespace.

    For example:

    kubectl create namespace ocnrf

    Sample output:

    namespace/ocnrf created

  3. Update the database.nameSpace parameter in the ocnrf-custom-values-23.4.6.yaml file with the namespace that is created in the previous step.
    Here is the sample configuration snippet from the ocnrf-custom-values-23.4.6.yaml file:
      database:
        # Namespace where the Secret is created
        nameSpace: "ocnrf"
Naming Convention for Namespaces
The namespace should:
  • start and end with an alphanumeric character.
  • contain 63 characters or less.
  • contain only alphanumeric characters or '-'.
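
These rules correspond to the Kubernetes DNS label format (which additionally requires lowercase characters). As an informal check only, a candidate name can be validated before creating it:

echo "ocnrf" | grep -E '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$' && echo "valid namespace name"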

Note:

It is recommended to avoid using the prefix kube- when creating a namespace. The prefix is reserved for Kubernetes system namespaces.
2.2.1.4 Creating Service Account, Role, and RoleBinding
This section is optional. It describes how to manually create a service account, role, and rolebinding, and is required only when you need to create these resources manually before installing NRF.

Note:

  • The secret(s) should exist in the same namespace where NRF is deployed. This helps to bind the Kubernetes role with the given service account.
  • This procedure is a sample. If a service account with a role and rolebinding is already configured, or if you have an in-house procedure to create a service account, skip this procedure. If the deployment uses a service mesh, skip this procedure and see Configuring NRF with ASM for details.
To create service account, role, and rolebinding:
  1. Run the following command to create an NRF resource file:
    vi <ocnrf-resource-file>
    Where, <ocnrf-resource-file> is the file name for service account resource.

    Example:

    vi ocnrf-resource-template.yaml
  2. Update the ocnrf-resource-template.yaml with release specific information:

    Note:

    Copy and paste the following sample in the ocnrf-resource-template.yaml file and replace <helm-release> with your own release name and <namespace> with your own namespace value throughout the file. Save it.
    ## Sample template start#
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: <helm-release>-ocnrf-serviceaccount
      namespace: <namespace>
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: <helm-release>-ocnrf-role
      namespace: <namespace>
    rules:
    - apiGroups:
      - autoscaling
      resources:
      - horizontalpodautoscalers
      verbs:
      - get
      - watch
      - list
    - apiGroups:
      - apps
      resources:
      - deployments
      - replicasets
      - statefulsets
      verbs:
      - get
      - watch
      - list
    - apiGroups:
      - "" # "" indicates the core API group
      resources:
      - services
      - configmaps
      - pods
      - secrets
      - endpoints
      - deployments
      - persistentvolumeclaims
      verbs:
      - get
      - watch
      - list
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: <helm-release>-ocnrf-rolebinding
      namespace: <namespace>
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: <helm-release>-ocnrf-role
    subjects:
    - kind: ServiceAccount
      name: <helm-release>-ocnrf-serviceaccount
      namespace: <namespace>
    
    Where,
    • <helm-release> is a name provided by the user to identify the Helm deployment.
    • <namespace> is a name provided by the user to identify the Kubernetes namespace of NRF. All the NRF microservices are deployed in this Kubernetes namespace.

    Note:

    • autoscaling and apps apiGroups are required for Overload Control feature.
    • PodSecurityPolicy kind is required for Pod Security Policy service account. For more information, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  3. Run the following command to create service account, role, and role binding:
    kubectl -n <ocnrf-namespace> create -f <ocnrf-resource-file>.yaml
    Where,
    • <ocnrf-namespace> is the name of the namespace.
    • <ocnrf-resource-file> is the file name for service account resource.

    For example:

    kubectl -n ocnrf create -f ocnrf-resource-template.yaml

  4. Update the serviceAccountName parameter in the ocnrf_custom_values_23.4.6.yaml file with the value of the name field under kind: ServiceAccount. For more information about the serviceAccountName parameter, see the Global Parameters section.
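
To confirm that the resources were created, you can optionally list them; this sketch uses the example release name ocnrf and namespace ocnrf from this procedure:

    kubectl -n ocnrf get serviceaccount,role,rolebinding | grep ocnrf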
2.2.1.5 Configuring Database, Creating Users, and Granting Permissions

This section explains how database administrators can create users and database in a single and multisite deployment.

Note:

  • Before running the procedure for georedundant sites, ensure that the cnDBTier for georedundant sites is already up and replication channels are enabled.
  • While performing a fresh installation, if NRF is already deployed, purge the deployment and remove the database and users that were used for the previous deployment. For uninstallation procedure, see Uninstalling NRF.

NRF Databases

For NRF application, four types of databases are required:
  1. NRF application database: This database consists of tables used by the application to perform the functionality of the NRF network function.
  2. NRF network database: This database consists of tables used by NRF to store the network details such as system details and database backups.
  3. Common configuration database: This database consists of tables used for common configuration. In georedundant deployments, each site must have a unique common configuration database.
  4. leaderElectionDB database: This database is used by microservices such as perf-info, appinfo, and auditor to detect the leader pod of the respective microservice in a multipod deployment. A unique table is created for each microservice to monitor its leader pod. For georedundant deployments, each site must have a unique leaderElectionDB database.

    For example:

    • For Site 1: ocnrf_leaderElectionDB_site1
    • For Site 2: ocnrf_leaderElectionDB_site2
    • For Site 3: ocnrf_leaderElectionDB_site3
    • For Site 4: ocnrf_leaderElectionDB_site4

NRF Users

There are two types of NRF database users with different set of permissions:
  1. NRF privileged user: This user has the complete set of permissions. This user can perform create, alter, and drop operations on the database and tables during install, upgrade, rollback, and delete operations.
  2. NRF application user: This user has a limited set of permissions and is used by the NRF application while handling service operations. This user can insert, update, get, and remove records, but cannot create, alter, or drop the database or tables.
2.2.1.5.1 Single Site

This section explains how database administrators can create the database and users, and grant permissions to the users, for a single NRF site.

  1. Log in to the machine where ssh keys are stored. The machine must have permission to access the SQL nodes of NDB cluster.
  2. Copy ocnrf-db-resource-standalone.sql file to the current directory. This file is available in NRF CSAR package, see NRF Customization for more information.

    Note:

    This MySQL script needs to be run only on one of the MySQL nodes of only one site.
  3. Update the user name and password in the ocnrf-db-resource-standalone.sql file.
  4. Update the names of NRF application database, network database, and common configuration database in the ocnrf-db-resource-standalone.sql file.
  5. Log in to the MySQL prompt using root permission or user.
  6. Check if NRF privileged user already exists by running the following query in the MySQL prompt:
    mysql> select user from mysql.user where user='<OCNRF-Privileged-User-Name>';

    Note:

    If the result is not an empty set, comment out the line which is creating the NRF privileged user in the script.
  7. Check if NRF application user already exists by running the following query in the MySQL prompt:
    mysql> select user from mysql.user where user='<OCNRF-Application-User-Name>';

    Note:

    If the result is not an empty set, comment out the line which is creating the NRF application user in the script.
  8. Copy the updated MySQL script to only one of the MySQL nodes of the site where you want to run it:
    $ kubectl cp <ocnrf-db-resource-file>.sql ndbappmysqld-0:/home/mysql -n <cnDBTier namespace> -c mysqlndbcluster

    For example:

    $ kubectl cp ocnrf-db-resource-2-site.sql ndbappmysqld-0:/home/mysql -n chicago -c mysqlndbcluster
  9. Connect to the MySQL node to which the script was copied.
  10. Assuming that the MySQL script is in the present working directory, run it (as the root MySQL user) as shown below:
    $ ls -lrt
    
    total 4
    
    -rw-------. 1 mysql mysql 1695 Jun 10 04:12 ocnrf-db-resource-2-site.sql
    
    $
    
    $ mysql -h 127.0.0.1 -uroot -p < ocnrf-db-resource-2-site.sql
    
    Enter password:
    
    $
  11. After the script runs successfully, it returns to the shell prompt.
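
Optionally, you can confirm that the users, databases, and grants were created as expected. This is a sketch using the example names from this guide (nrfPrivilegedUsr, nrfApplicationUsr, nrfApplicationDB) and assumes the users were created with the '%' host; substitute the values configured in your script:

    mysql -h 127.0.0.1 -uroot -p -e "SELECT user, host FROM mysql.user WHERE user IN ('nrfPrivilegedUsr','nrfApplicationUsr');"
    mysql -h 127.0.0.1 -uroot -p -e "SHOW GRANTS FOR 'nrfApplicationUsr'@'%';"
    mysql -h 127.0.0.1 -uroot -p -e "SHOW DATABASES LIKE 'nrfApplicationDB';"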
2.2.1.5.2 Multisite
This section explains how database administrators can create the database and users for a multisite deployment.

Note:

For georedundant scenarios, change the parameter values of the unique databases in ocnrf_custom_values_23.4.6.yaml file.
  1. Log in to the machine where ssh keys are stored. The machine must have permission to access the SQL nodes of NDB cluster.
  2. Copy ocnrf-db-resource-<site_number>-site.sql file to the current directory. This file is available in NRF CSAR package, see NRF Customization for more information.

    Where,

    <site_number> is the number of sites deployed. Copy the corresponding file based on the number of sites deployed:
    • In case of two sites, use ocnrf-db-resource-2-site.sql file.
    • In case of three sites, use ocnrf-db-resource-3-site.sql file.
    • In case of four sites, use ocnrf-db-resource-4-site.sql file.

    Note:

    Run this MySQL script before the deployment of a georedundant NRF. The database replication must be up between the sites. This MySQL script must be run only on one of the MySQL nodes of only one site.
  3. Update the user name and password in the ocnrf-db-resource-<site_number>-site.sql file.
  4. Update the names of NRF application database, network database, leaderElectionDB database, and common configuration database in the ocnrf-db-resource-<site_number>-site.sql file.

    Caution:

    For each georedundant site, the common configuration database and leaderElectionDB name must be different.
  5. Log in to the MySQL prompt using root permission or user.
  6. Check if NRF privileged user already exists by running the following query in the MySQL prompt:
    mysql> select user from mysql.user where user='<OCNRF-Privileged-User-Name>';

    Note:

    If output of the command displays the privileged user, comment the line in the ocnrf-db-resource-<site_number>-site.sql script which is creating the NRF privileged user.
  7. Check if NRF application user already exists by running the following query in the MySQL prompt:
    mysql> select user from mysql.user where user='<NRF-Application-User-Name>';

    Note:

    If output of the command displays the application user, comment the line in the ocnrf-db-resource-<site_number>-site.sql script which is creating the NRF application user.
  8. Copy the updated MySQL script to only one of the MySQL nodes of the site where you want to run:

    For example:

    $ kubectl cp ocnrf-db-resource-<site_number>-site.sql ndbmysqld-0:/home/mysql -n chicago -c mysqlndbcluster
  9. Connect to the MySQL node to which the script was copied.
  10. Assuming that the MySQL script is in the present working directory, run the script (as the root MySQL user) as follows:
    $ ls -lrt
    
    total 4
    
    -rw-------. 1 mysql mysql 1695 Jun 10 04:12 ocnrf-db-resource-<site_number>-site.sql
    
    $
    
    $ mysql -h 127.0.0.1 -uroot -p < ocnrf-db-resource-<site_number>-site.sql
    
    Enter password:
    
    $
  11. After the script runs successfully, it returns to the shell prompt.
2.2.1.6 Configuring Kubernetes Secret for Accessing NRF Database

This section explains how to configure Kubernetes secrets for accessing NRF database.

2.2.1.6.1 Creating and Updating Secret for Privileged Database User

This section explains how to create and update Kubernetes secret for privileged user to access the database.

  1. Run the following command to create Kubernetes secret:
    $ kubectl create secret generic <privileged user secret name> --from-literal=dbUsername=<NRF Privileged Mysql database username> --from-literal=dbPassword=<NRF Privileged Mysql User database password> --from-literal=appDbName=<NRF Mysql database name> --from-literal=networkScopedDbName=<NRF Mysql Network database name> --from-literal=commonConfigDbName=<NRF Mysql Common Configuration DB> --from-literal=leaderElectionDbName=<leaderElectionDB for multipod service> -n <Namespace of NRF deployment>

    Where,

    <privileged user secret name> is the secret name of the Privileged User.

    <NRF Privileged Mysql database username> is the username of the Privileged User.

    <NRF Privileged Mysql User database password> is the password of the Privileged User.

    <NRF Mysql database name> is the database name.

    <NRF Mysql Network database name> is the MySQL network database name.

    <NRF Mysql Common Configuration DB> is the MySQL common configuration database name.

    <leaderElectionDB for multipod service> is the MySQL database name for multipod service.

    <Namespace of NRF deployment> is the namespace of NRF deployment.

    Note:

    Note down the command used during the creation of the Kubernetes secret; this command is used for updating the secret in the future.
    For example:
    $ kubectl create secret generic privilegeduser-secret --from-literal=dbUsername=nrfPrivilegedUsr --from-literal=dbPassword=nrfPrivilegedPasswd --from-literal=appDbName=nrfApplicationDB --from-literal=networkScopedDbName=nrfNetworkDB --from-literal=commonConfigDbName=commonConfigurationDB --from-literal=leaderElectionDbName=leaderElectionDB -n ocnrf

    Note:

    • The value of commonConfigDbName and leaderElectionDbName must have the same value as configured in database.commonConfigDbName and database.leaderElectionDbName under Global Parameters section respectively.
    • It is recommended to use the same secret name as mentioned in the example. In case you change <privileged user secret name>, then update the privilegedUserSecretName parameter in the ocnrf-custom-values-23.4.6.yaml file. For more information about privilegedUserSecretName parameter, see the Global Parameters section.
  2. Run the following command to verify the secret created:
    $ kubectl describe secret <database secret name> -n <Namespace of NRF deployment>

    Where,

    <database secret name> is the secret name of the database.

    <Namespace of NRF deployment> is the namespace of NRF deployment.

    For example:
    $ kubectl describe secret privilegeduser-secret -n ocnrf
    Sample output:
    Name:         privilegeduser-secret
    Namespace:    ocnrf
    Labels:       <none>
    Annotations:  <none>
    
    Type:  Opaque
    
    Data
    ====
    mysql-password:  10 bytes
    mysql-username:  17 bytes
    
  3. To update the Kubernetes secret, extend the command used in step 1 with the string "--dry-run -o yaml" and pipe the output to "kubectl replace -f - -n <Namespace of NRF deployment>". After the update is performed, use the following command:
    $ kubectl create secret generic <privileged user secret name> --from-literal=dbUsername=<NRF Privileged Mysql database username> --from-literal=dbPassword=<NRF Privileged Mysql database password> --from-literal=appDbName=<NRF Mysql database name> --from-literal=networkScopedDbName=<NRF Mysql Network database name> --from-literal=commonConfigDbName=<NRF Mysql Common Configuration DB> --from-literal=leaderElectionDbName=<leaderElectionDB for multipod service> --dry-run -o yaml -n <Namespace of NRF deployment> | kubectl replace -f - -n <Namespace of NRF deployment>

    Where,

    <privileged user secret name> is the secret name of the Privileged User.

    <NRF Privileged Mysql database username> is the username of the Privileged User.

    <NRF Privileged Mysql database password> is the password of the Privileged User.

    <NRF Mysql database name> is the database name.

    <NRF Mysql Network database name> is the MySQL network database name.

    <NRF Mysql Common Configuration DB> is the MySQL common configuration database name.

    <leaderElectionDB for multipod service> is the MySQL database name for multipod service.

    <Namespace of NRF deployment> is the namespace of NRF deployment.

  4. Run the updated command.

    After the secret update is complete, the following message appears:

    secret/<database secret name> replaced

    Where,

    <database secret name> is the updated secret name of the Privileged User.
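
For example, using the sample values from step 1, the update command in step 3 might look like the following (example values only):

    $ kubectl create secret generic privilegeduser-secret --from-literal=dbUsername=nrfPrivilegedUsr --from-literal=dbPassword=nrfPrivilegedPasswd --from-literal=appDbName=nrfApplicationDB --from-literal=networkScopedDbName=nrfNetworkDB --from-literal=commonConfigDbName=commonConfigurationDB --from-literal=leaderElectionDbName=leaderElectionDB --dry-run -o yaml -n ocnrf | kubectl replace -f - -n ocnrf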

2.2.1.6.2 Creating and Updating Secret for Application Database User

This section explains how to create and update Kubernetes secret for application user to access the database.

  1. Run the following command to create Kubernetes secret:
    $ kubectl create secret generic <appuser-secret name> --from-literal=dbUsername=<NRF Application User Name> --from-literal=dbPassword=<Password for NRF Application User> --from-literal=appDbName=<NRF Application Database> -n <Namespace of NRF deployment>

    Where,

    <appuser-secret name> is the secret name of the Application User.

    <NRF Application User Name> is the username of the Application User.

    <Password for NRF Application User> is the password of the Application User.

    <NRF Application Database> is the database name.

    <Namespace of NRF deployment> is the namespace of NRF deployment.

    Note:

    Note down the command used during the creation of the Kubernetes secret; this command is used for updating the secret in the future.
    For example:
    $ kubectl create secret generic appuser-secret --from-literal=dbUsername=nrfApplicationUsr --from-literal=dbPassword=nrfApplicationPasswd --from-literal=appDbName=nrfApplicationDB -n ocnrf 

    Note:

    It is recommended to use the same secret name as mentioned in the example. In case you change <appuser-secret name>, then update the appUserSecretName parameter in the ocnrf-custom-values-23.4.6.yaml file. For more information about appUserSecretName parameter, see the Global Parameters section.
  2. Run the following command to verify the secret created:
    $ kubectl describe secret <appuser-secret name> -n <Namespace of NRF deployment>

    Where,

    <appuser-secret name> is the secret name of the Application User.

    <Namespace of NRF deployment> is the namespace of NRF deployment.

    For example:
    $ kubectl describe secret appuser-secret -n ocnrf
    Sample output:
    Name:         appuser-secret
    Namespace:    ocnrf
    Labels:       <none>
    Annotations:  <none>
    
    Type:  Opaque
    
    Data
    ====
    mysql-password:  10 bytes
    mysql-username:  7 bytes
  3. To update the Kubernetes secret, extend the command used in step 1 with the string "--dry-run -o yaml" and pipe the output to "kubectl replace -f - -n <Namespace of NRF deployment>". After the update is performed, use the following command:
    $ kubectl create secret generic <appuser-secret name> --from-literal=dbUsername=<NRF Application User Name> --from-literal=dbPassword=<Password for NRF Application User> --from-literal=appDbName=<NRF Application Database> --dry-run -o yaml -n <Namespace of NRF deployment> | kubectl replace -f - -n <Namespace of NRF deployment>

    Where,

    <appuser-secret name> is the secret name of the Application User.

    <NRF Application User Name> is the username of the Application User.

    <Password for NRF Application User> is the password of the Application User.

    <NRF Application Database> is the database name.

    <Namespace of NRF deployment> is the namespace of NRF deployment.

  4. Run the updated command.

    After the secret update is complete, the following message appears:

    secret/<database secret name> replaced

    Where,

    <database secret name> is the updated secret name of the Application User.

2.2.1.7 Configuring Secrets for Enabling HTTPS

This section explains the steps to configure HTTPS at Ingress and Egress Gateways.

2.2.1.7.1 Managing HTTPS at Ingress Gateway

This section explains the steps to create and update the Kubernetes secret, and enable HTTPS at Ingress Gateway.

Note:

The process to create private keys, certificates, and passwords is at the discretion of the user or operator.

Creating and Updating Secrets at Ingress Gateway

To create Kubernetes secret for HTTPS, the following files are required:
  • ECDSA private key and CA-signed certificate of NRF (if ingressgateway.service.ssl.initialAlgorithm is ES256), or RSA private key and CA-signed certificate of NRF (if ingressgateway.service.ssl.initialAlgorithm is RS256)
  • TrustStore password file
  • KeyStore password file
  • CA Root File

Note:

  • The passwords for TrustStore and KeyStore are stored in respective password files.
  • The process to create private keys, certificates, and passwords is at the discretion of the user or operator.
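
The following is an illustration only (not a mandated procedure) of generating an ECDSA private key, a self-signed certificate, and the password files; in production, obtain CA-signed certificates through your PKI process or through OCCM, and note that all file names, the subject name, and the passwords shown here are assumptions:

openssl ecparam -name prime256v1 -genkey -noout -out ssl_ecdsa_private_key.pem
openssl req -new -x509 -key ssl_ecdsa_private_key.pem -out ssl_ecdsa_certificate.crt -days 365 -subj "/CN=ocnrf.example.com"
echo -n 'exampleKeyStorePassword' > ssl_keystore.txt
echo -n 'exampleTrustStorePassword' > ssl_truststore.txt
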
You can manage Kubernetes secrets for enabling HTTPS in NRF using one of the following methods:
  • Managing secrets through OCCM
  • Managing secrets manually

Managing Secrets Through OCCM

To create the secrets using OCCM, see "Managing Certificates" in Oracle Communications Cloud Native Core, Certificate Management User Guide.

The secrets created by OCCM are then patched to add the KeyStore and TrustStore password files by running the following commands:
  1. To patch the secrets created with the keyStore password file:
    TLS_CRT=$(base64 < "key.txt" | tr -d '\n')
    kubectl patch secret server-primary-ocnrf-secret-occm -n nrfsvc -p "{\"data\":{\"key.txt\":\"${TLS_CRT}\"}}"
    Where,
    • key.txt is the password file that contains KeyStore password.
    • server-primary-ocnrf-secret-occm is the secret created by OCCM.
  2. To patch the secrets created with the trustStore password file:
    TLS_CRT=$(base64 < "trust.txt" | tr -d '\n')
    kubectl patch secret server-primary-ocnrf-secret-occm -n nrfsvc -p "{\"data\":{\"trust.txt\":\"${TLS_CRT}\"}}"
    Where,
    • trust.txt is the password file that contains TrustStore password.
    • server-primary-ocnrf-secret-occm is the secret created by OCCM.

Note:

To monitor the lifecycle management of the certificates through OCCM, do not patch the Kubernetes secrets manually to update the TLS certificate or keys. It must be done through the OCCM GUI.

Managing Secrets Manually

  1. Run the following command to create secret:
    $ kubectl create secret generic <ocingress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace of NRF deployment>

    Where,

    <ocingress-secret-name> is the secret name for Ingress Gateway.

    <ssl_ecdsa_private_key.pem> is the ECDSA private key.

    <rsa_private_key_pkcs1.pem> is the RSA private key.

    <ssl_truststore.txt> is the SSL truststore file.

    <ssl_keystore.txt> is the SSL keystore file.

    <ssl_cabundle.crt> is the CA bundle certificate.

    <caroot.cer> is the CA Root file.

    <ssl_rsa_certificate.crt> is the SSL RSA certificate.

    <ssl_ecdsa_certificate.crt> is the SSL ECDSA certificate.

    <Namespace of NRF deployment> is the namespace of NRF deployment.

    Note:

    Note down the command used during the creation of the secret. Use the command for updating the secrets in future.
    For example:
    $ kubectl create secret generic ocingress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n ocnrf

    Note:

    It is recommended to use the same secret name as mentioned in the example. If you change <ocingress-secret-name>, update the k8SecretName parameter under the ingress-gateway attributes section in the ocnrf-custom-values-23.4.6.yaml file. For more information about ingress-gateway attributes, see the Ingress Gateway Microservice section.
  2. Run the following command to verify the secret created:
    $ kubectl describe secret <ocingress-secret-name> -n <Namespace of NRF deployment>

    Where,

    <ocingress-secret-name> is the secret name for Ingress Gateway.

    <Namespace of NRF deployment> is the namespace of NRF deployment.

    For example:
    $ kubectl describe secret ocingress-secret -n ocnrf
    Sample output:
    Name:         ocingress-secret
    Namespace:    ocnrf
    Labels:       <none>
    Annotations:  <none>
    
    Type:  Opaque
    
  3. (Optional) Perform the following tasks to add, delete, or modify TLS or SSL certificates in the secret:
    • To add a certificate, run the following command:
      TLS_CRT=$(base64 < "<certificate-name>" | tr -d '\n')
      kubectl patch secret <secret-name> -p "{\"data\":{\"<certificate-name>\":\"${TLS_CRT}\"}}"
      Where,
      • <certificate-name> is the certificate file name.
      • <secret-name> is the name of the secret, for example, ocnrf-secret.

      Example:

      If you want to add a Certificate Authority (CA) Root from the caroot.cer file to the ocnrf-secret, run the following command:

      TLS_CRT=$(base64 < "caroot.cer" | tr -d '\n')
      kubectl patch secret ocnrf-secret -p "{\"data\":{\"caroot.cer\":\"${TLS_CRT}\"}}" -n ocnrf

      Similarly, you can also add other certificates and keys to the ocnrf-secret.

    • To update an existing certificate, run the following command:
      TLS_CRT=$(base64 < "<updated-certificate-name>" | tr -d '\n')
      kubectl patch secret <secret-name> -p "{\"data\":{\"<certificate-name>\":\"${TLS_CRT}\"}}"

      Where, <updated-certificate-name> is the certificate file that contains the updated content.

      Example:

      If you want to update the privatekey present in the rsa_private_key_pkcs1.pem file to the ocnrf-secret, run the following command:

      TLS_CRT=$(base64 < "rsa_private_key_pkcs1.pem" | tr -d '\n') 
      kubectl patch secret ocnrf-secret -p "{\"data\":{\"rsa_private_key_pkcs1.pem\":\"${TLS_CRT}\"}}" -n ocnrf

      Similarly, you can also update other certificates and keys to the ocnrf-secret.

    • To remove an existing certificate, run the following command:
      kubectl patch secret <secret-name> -p "{\"data\":{\"<certificate-name>\":null}}"

      Where, <certificate-name> is the name of the certificate to be removed.

      The certificate must be removed when it expires or needs to be revoked.

      Example:

      To remove the CA Root from the ocnrf-secret, run the following command:
      kubectl patch secret ocnrf-secret -p "{\"data\":{\"caroot.cer\":null}}" -n ocnrf
      

      Similarly, you can also remove other certificates and keys from the ocnrf-secret.

  4. To update the secret, modify the command used in step 1 by appending "--dry-run -o yaml" and piping the output to "kubectl replace -f - -n <Namespace of NRF deployment>".
    The updated command is as follows:
    $ kubectl create secret generic <ocingress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> --dry-run -o yaml -n <Namespace of NRF deployment> | kubectl replace -f - -n <Namespace of NRF deployment>

    For example:

    $ kubectl create secret generic ocingress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --dry-run -o yaml -n ocnrf | kubectl replace -f - -n ocnrf

    Note:

    The names used in the preceding command must be the same as the names provided in the custom values yaml file of the NRF deployment.
  5. Run the updated command.
    After the secret update is complete, the following message appears:
    secret/<ocingress-secret> replaced

Enabling HTTPS at Ingress Gateway

This step is required only when SSL settings need to be enabled on the Ingress Gateway microservice of NRF.

  1. Enable the enableIncomingHttps parameter under the Ingress Gateway Global Parameters section in the ocnrf-custom-values-23.4.6.yaml file. For more information about the enableIncomingHttps parameter, see the Ingress Gateway Global Parameters section.
  2. If you changed any of the following attributes while creating the secret, configure them in the ssl section under the ingress-gateway attributes:
    • Kubernetes namespace
    • Kubernetes secret name holding the certificate details
    • Certificate information
    service:
        # configuration under ssl section is mandatory if enableIncomingHttps is configured as "true"
        ssl:
    
          # OCNRF private key details for HTTPS
          # Secret Name, Namespace, Keydetails
          privateKey:
            k8SecretName: ocingress-secret
            k8NameSpace: ocnrf
            rsa:
              fileName: rsa_private_key_pkcs1.pem
            ecdsa:
              fileName: ssl_ecdsa_private_key.pem
    
          # OCNRF certificate details for HTTPS
          # Secret Name, Namespace, Keydetails
          certificate:
            k8SecretName: ocingress-secret
            k8NameSpace: ocnrf
            rsa:
              fileName: ssl_rsa_certificate.crt
            ecdsa:
              fileName: ssl_ecdsa_certificate.crt
    
          # OCNRF CA details for HTTPS
          caBundle:
            k8SecretName: ocingress-secret
            k8NameSpace: ocnrf
            fileName: caroot.cer
    
          # OCNRF KeyStore password for HTTPS
          # Secret Name, Namespace, Keydetails
          keyStorePassword:
            k8SecretName: ocingress-secret
            k8NameSpace: ocnrf
            fileName: ssl_keystore.txt
    
          # OCNRF TrustStore password for HTTPS
          # Secret Name, Namespace, Keydetails
          trustStorePassword:
            k8SecretName: ocingress-secret
            k8NameSpace: ocnrf
            fileName: ssl_truststore.txt
    
          # Initial Algorithm for HTTPS
          # Supported Values: ES256, RS256
          initialAlgorithm: ES256

    Note:

    If the certificates are not available, create them by following the instructions given in the Creating Private Keys and Certificate section.
  3. Save the ocnrf-custom-values-23.4.6.yaml file.
2.2.1.7.2 Managing HTTPS at Egress Gateway

This section explains the steps to create and update the Kubernetes secret and enable HTTPS at Egress Gateway.

Creating and Updating Secrets at Egress Gateway

To create Kubernetes secret for HTTPS, the following files are required:
  • ECDSA private key and CA-signed certificate of NRF, if egressgateway.service.ssl.initialAlgorithm is ES256, or RSA private key and CA-signed certificate of NRF, if egressgateway.service.ssl.initialAlgorithm is RS256
  • TrustStore password file
  • KeyStore password file
  • CA Root File

Note:

  • The passwords for TrustStore and KeyStore are stored in respective password files.
  • The process to create private keys, certificates, and passwords is at the discretion of the user or operator.
You can manage Kubernetes secrets for enabling HTTPS in NRF using one of the following methods:
  • Managing secrets through OCCM
  • Managing secrets manually

Managing Secrets Through OCCM

To create the secrets using OCCM, see "Managing Certificates" in Oracle Communications Cloud Native Core, Certificate Management User Guide.

The secrets created by OCCM are then patched to add keyStore password and trustStore password files by running the following commands:
  1. To patch the secrets created with the keyStore password file:
    TLS_CRT=$(base64 < "key.txt" | tr -d '\n')
    kubectl patch secret server-primary-ocnrf-secret-occm -n nrfsvc -p "{\"data\":{\"key.txt\":\"${TLS_CRT}\"}}"
    Where,
    • key.txt is the password file that contains KeyStore password.
    • server-primary-ocnrf-secret-occm is the secret created by OCCM.
  2. To patch the secrets created with the trustStore password file:
    TLS_CRT=$(base64 < "trust.txt" | tr -d '\n')
    kubectl patch secret server-primary-ocnrf-secret-occm -n nrfsvc -p "{\"data\":{\"trust.txt\":\"${TLS_CRT}\"}}"
    Where,
    • trust.txt is the password file that contains TrustStore password.
    • server-primary-ocnrf-secret-occm is the secret created by OCCM.

Note:

To monitor the lifecycle management of the certificates through OCCM, do not patch the Kubernetes secrets manually to update the TLS certificate or keys. It must be done through the OCCM GUI.

Managing Secrets Manually

  1. Run the following command to create the secret:
    $ kubectl create secret generic <ocegress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<ssl_rsa_private_key.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<ssl_cabundle.crt> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace of NRF deployment>

    Where,

    <ocegress-secret-name> is the secret name for Egress Gateway.

    <ssl_ecdsa_private_key.pem> is the ECDSA private key.

    <ssl_rsa_private_key.pem> is the RSA private key.

    <ssl_truststore.txt> is the SSL truststore file.

    <ssl_keystore.txt> is the SSL keystore file.

    <ssl_cabundle.crt> is the CA bundle certificate.

    <ssl_rsa_certificate.crt> is the SSL RSA certificate.

    <ssl_ecdsa_certificate.crt> is the SSL ECDSA certificate.

    <Namespace of NRF deployment> is the namespace of NRF deployment.

    Note:

    Note down the command used during the creation of the secret. Use the same command to update the secret in the future.

    For example:

    $ kubectl create secret generic ocegress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=ssl_rsa_private_key.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=ssl_cabundle.crt --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n ocnrf

    Note:

    It is recommended to use the same secret name as mentioned in the example. If you change <ocegress-secret-name>, update the k8SecretName parameter under the egressgateway attributes section in the ocnrf-custom-values-23.4.6.yaml file. For more information about egressgateway attributes, see the Egress Gateway Microservice section.
  2. Run the following command to verify the details of the secret created:
    $ kubectl describe secret <ocegress-secret-name> -n <Namespace of NRF deployment>

    Where,

    <ocegress-secret-name> is the secret name for Egress Gateway.

    <Namespace of NRF deployment> is the namespace of NRF deployment.

    For example:

    $ kubectl describe secret ocegress-secret -n ocnrf
  3. To update the secret, modify the command used in step 1 by appending "--dry-run -o yaml" and piping the output to "kubectl replace -f - -n <Namespace of NRF deployment>".
    The updated command is as follows:
    kubectl create secret generic <ocegress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<ssl_rsa_private_key.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<ssl_cabundle.crt> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> --dry-run -o yaml -n <Namespace of NRF deployment> | kubectl replace -f - -n <Namespace of NRF deployment>

    For example:

    $ kubectl create secret generic ocegress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=ssl_rsa_private_key.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=ssl_cabundle.crt --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --dry-run -o yaml -n ocnrf | kubectl replace -f - -n ocnrf

    Note:

    The names used in the preceding command must be the same as the names provided in the custom values yaml file of the NRF deployment.
  4. Run the updated command.
    After the secret update is complete, the following message appears:
    secret/<ocegress-secret> replaced

Enabling HTTPS at Egress Gateway

This step is required only when SSL settings need to be enabled on the Egress Gateway microservice of NRF.

  1. Enable the enableOutgoingHttps parameter under the egressgateway attributes section in the ocnrf-custom-values-23.4.6.yaml file. For more information about the enableOutgoingHttps parameter, see the Egress Gateway Microservice section.
  2. If you changed any of the following attributes while creating the secret, configure them in the ssl section under the egressgateway attributes:
    • Kubernetes namespace
    • Kubernetes secret name holding the certificate details
    • Certificate information
      service:
        # configuration under ssl section is mandatory if enableOutgoingHttps is configured as "true"
        ssl:
    
          # OCNRF private key details for HTTPS
          # Secret Name, Namespace, Keydetails
          privateKey:
            k8SecretName: ocegress-secret
            k8NameSpace: ocnrf
            rsa:
              fileName: ssl_rsa_private_key.pem
            ecdsa:
              fileName: ssl_ecdsa_private_key.pem
    
          # OCNRF certificate details for HTTPS
          # Secret Name, Namespace, Keydetails
          certificate:
            k8SecretName: ocegress-secret
            k8NameSpace: ocnrf
            rsa:
              fileName: ssl_rsa_certificate.crt
            ecdsa:
              fileName: ssl_ecdsa_certificate.crt
    
          # OCNRF CA details for HTTPS
          caBundle:
            k8SecretName: ocegress-secret
            k8NameSpace: ocnrf
            fileName: ssl_cabundle.crt
    
          # OCNRF KeyStore password for HTTPS
          # Secret Name, Namespace, Keydetails
          keyStorePassword:
            k8SecretName: ocegress-secret
            k8NameSpace: ocnrf
            fileName: ssl_keystore.txt
    
          # OCNRF TrustStore password for HTTPS
          # Secret Name, Namespace, Keydetails
          trustStorePassword:
            k8SecretName: ocegress-secret
            k8NameSpace: ocnrf
            fileName: ssl_truststore.txt
    
          # Initial algorithm for HTTPS
          # Supported Values: ES256, RS256
          initialAlgorithm: ES256

    Note:

    If the certificates are not available, create them by following the instructions given in the Creating Private Keys and Certificate section.
  3. Save the ocnrf-custom-values-23.4.6.yaml file.
2.2.1.8 Configuring Secret for Enabling CCA Header

This section explains the steps to create and update the Kubernetes secret, and enable CCA at Ingress Gateway.

Creating a secret to enable CCA

Run the following command to create the secret for enabling the CCA header in the NRF Ingress Gateway microservice:
$ kubectl create secret generic <ocingress-secret-name> --from-file=<caroot.cer> -n <Namespace of NRF deployment>

Where,

  • <ocingress-secret-name> is the secret name for Ingress Gateway.
  • <caroot.cer> is the CA Root file.
  • <Namespace of NRF deployment> is the namespace of NRF deployment.

For example:

$ kubectl create secret generic ocingress-secret --from-file=caroot.cer -n ocnrf

Updating a secret

To update the secret, modify the command used to create the secret by appending --dry-run -o yaml and piping the output to kubectl replace -f - -n <Namespace of NRF deployment>. The updated command is as follows:

$ kubectl create secret generic <ocingress-secret-name> --from-file=<caroot.cer> --dry-run -o yaml -n <Namespace of NRF deployment> | kubectl replace -f - -n <Namespace of NRF deployment>

For example:

$ kubectl create secret generic ocingress-secret --from-file=caroot.cer --dry-run -o yaml -n ocnrf | kubectl replace -f - -n ocnrf

Note:

  • In case you want to combine the certificates, see Combining Multiple Certificates.
  • From 23.3.0, configure the secret using ingressgateway.ccaHeaderValidation.k8SecretName, ingressgateway.ccaHeaderValidation.k8NameSpace, ingressgateway.ccaHeaderValidation.fileName in the REST API.
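If the CA certificates must be combined before creating the secret, concatenating the PEM-encoded files is typically sufficient; the input file names below are assumptions, and the Combining Multiple Certificates section remains the documented procedure:

cat ca_root_1.cer ca_root_2.cer > caroot.cer
kubectl create secret generic ocingress-secret --from-file=caroot.cer -n ocnrf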
2.2.1.9 Configuring Secret to Enable Access Token Service

This section explains how to configure a secret for enabling access token service (Nnrf_AccessToken Service).

Creating Secret for Enabling Access Token Service

This section explains the steps to create and update a secret for the access token service of NRF.

To create a Kubernetes secret for an access token, the following files are required:

  • ECDSA private keys for algorithm ES256 and corresponding valid public certificates for NRF
  • RSA private keys for algorithm RS256 and corresponding valid public certificates for NRF

Note:

  • The process to create private keys and signed certificates is at the discretion of the user or operator.
  • Only unencrypted keys and certificates are supported.
  • For RSA, the supported versions are PKCS1 and PKCS8.
  • For ECDSA, the supported version is PKCS8.
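For illustration only, the following OpenSSL sketch generates an unencrypted RSA private key and an unencrypted ECDSA private key converted to PKCS8 format; the key size, curve, and file names are assumptions, and any equivalent tooling can be used:

# RSA private key (PKCS1 or PKCS8 depending on the OpenSSL version; both are supported)
openssl genrsa -out rsa_private_key.pem 2048

# ECDSA private key converted to unencrypted PKCS8 format
openssl ecparam -name prime256v1 -genkey -noout -out ecdsa_key_tmp.pem
openssl pkcs8 -topk8 -nocrypt -in ecdsa_key_tmp.pem -out ecdsa_private_key.pem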
  1. Run the following command to create a secret. This example uses two keys and two certificates; multiple files can be loaded into the secret depending on the key usage for access tokens:
    $ kubectl create secret generic <ocnrfaccesstoken-secret> --from-file=<ecdsa_private_key.pem> --from-file=<rsa_private_key.pem> --from-file=<rsa_certificate.crt> --from-file=<ecdsa_certificate.crt> -n <Namespace of NRF deployment>
    Where,

    <ocnrfaccesstoken-secret> is the secret name for access token service.

    <ecdsa_private_key.pem> is the ECDSA private key.

    <rsa_private_key.pem> is the RSA private key.

    <rsa_certificate.crt> is the RSA certificate.

    <ecdsa_certificate.crt> is the ECDSA certificate.

    <Namespace of NRF deployment> is the namespace of NRF deployment.

    Note:

    Note down the command used during the creation of the secret. Use the same command to update the secret in the future.
    For example:
    $ kubectl create secret generic ocnrfaccesstoken-secret --from-file=ecdsa_private_key.pem --from-file=rsa_private_key.pem --from-file=rsa_certificate.crt --from-file=ecdsa_certificate.crt -n ocnrf
  2. Run the following command to verify the created secret:
    $ kubectl describe secret <ocnrfaccesstoken-secret> -n <Namespace of NRF deployment>
    Where,

    <ocnrfaccesstoken-secret> is the secret name for access token service.

    <Namespace of NRF deployment> is the namespace of NRF deployment.

    For example:

    $ kubectl describe secret ocnrfaccesstoken-secret -n ocnrf
  3. To update the secret, modify the command used in step 1 by appending "--dry-run -o yaml" and piping the output to "kubectl replace -f - -n <Namespace of NRF deployment>".

    The updated command is as follows:

    $ kubectl create secret generic <ocnrfaccesstoken-secret> --from-file=<ecdsa_private_key.pem> --from-file=<rsa_private_key.pem> --from-file=<rsa_certificate.crt> --from-file=<ecdsa_certificate.crt> --dry-run -o yaml -n <Namespace of NRF deployment> | kubectl replace -f - -n <Namespace of NRF deployment>

    For example:

    $ kubectl create secret generic ocnrfaccesstoken-secret --from-file=ecdsa_private_key.pem --from-file=rsa_private_key.pem --from-file=rsa_certificate.crt --from-file=ecdsa_certificate.crt --dry-run -o yaml -n ocnrf | kubectl replace -f - -n ocnrf

    Note:

    The names used in the preceding command must be the same as the names provided in the custom values yaml file of the NRF deployment.
  4. Run the updated command.
    After the secret update is complete, the following message appears:
    secret/<ocnrfaccesstoken-secret> replaced
2.2.1.10 Configuring NRF to Support ASM

NRF leverages the Platform Service Mesh (for example, Aspen Service Mesh) for all internal and external TLS communication. The service mesh integration enables inter-NF communication and allows the API gateway to work along with the service mesh. The service mesh integration supports the services by deploying a special sidecar proxy in each pod to intercept all network communication between microservices.

Supported ASM version: 1.14.6

For ASM installation and configuration details, see the official Aspen Service Mesh website.

Aspen Service Mesh (ASM) configurations are categorized as follows:

  • Control Plane: It involves adding labels or annotations to inject sidecar. The control plane configurations are part of the NF Helm chart.
  • Data Plane: It helps in traffic management, such as handling NF call flows by adding Service Entries (SE), Destination Rules (DR), Envoy Filters (EF), and other resource changes such as apiVersion change between different versions. This configuration is done using ocnrf-servicemesh-config-custom-values-23.4.0.yaml file.

Configuring ASM Data Plane

Data Plane configuration consists of the following Custom Resource Definitions (CRDs):

  • Service Entry (SE)
  • Destination Rule (DR)
  • Envoy Filter (EF)
  • Peer Authentication (PA)
  • Virtual Service (VS)
  • Request Authentication (RA)
  • Policy Authorization (PA)

Note:

Use the ocnrf-servicemesh-config-custom-values-23.4.0.yaml file to add or remove the CRDs that you may require to configure features across different releases, for example, due to ASM upgrades.

The data plane configuration is applicable in the following scenarios:

  • Service Entry: Enables adding additional entries into Sidecar's internal service registry, so that auto-discovered services in the mesh can access or route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints).
  • Destination Rule: Defines policies that apply to traffic intended for service after routing has occurred. These rules specify configuration for load balancing, connection pool size from the sidecar, and outlier detection settings to detect and evict unhealthy hosts from the load balancing pool.
  • Envoy Filters: Provides a mechanism to customize the Envoy configuration generated by Istio Pilot. Use Envoy Filter to modify values for certain fields, add specific filters, or even add entirely new listeners, clusters, and so on.
  • Peer Authentication: Used for service-to-service authentication to verify the client making the connection.
  • Virtual Service: Defines a set of traffic routing rules to apply when a host is addressed. Each routing rule defines matching criteria for the traffic of a specific protocol. If the traffic is matched, then it is sent to a named destination service (or subset or version of it) defined in the registry.
  • Request Authentication: Used for end-user authentication to verify the credential attached to the request.
  • Policy Authorization: Enables access control on workloads in the mesh. Policy Authorization supports CUSTOM, DENY, and ALLOW actions for access control. When CUSTOM, DENY, and ALLOW actions are used for a workload at the same time, the CUSTOM action is evaluated first, then the DENY action, and finally the ALLOW action.
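For illustration only, the following sketch shows the general shape of a ServiceEntry and a DestinationRule of the kind rendered from the ocnrf-servicemesh-config-custom-values-23.4.0.yaml file; the resource names, host, port, and TLS mode are assumptions, and the actual CRs must be generated from the configuration chart:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: example-udr-se
spec:
  hosts:
  - udr.example.com
  exportTo:
  - "."
  ports:
  - number: 8443
    name: https-sbi
    protocol: TLS
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: example-udr-dr
spec:
  host: udr.example.com
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL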

Service Mesh Configuration File

A sample ocnrf-servicemesh-config-custom-values-23.4.0.yaml is available in Custom_Templates file. For downloading the file, see Customizing NRF.

Table 2-20 Supported Fields in CRD

CRD Supported Fields
Service Entry: hosts, exportTo, addresses, ports.name, ports.number, ports.protocol, resolution
Destination Rule: host, mode, sbitimers, tcpConnectTimeout, tcpKeepAliveProbes, tcpKeepAliveTime, tcpKeepAliveInterval
Envoy Filters: labelselector, applyTo, filtername, operation, typeconfig, configkey, configvalue, stream_idle_timeout, max_stream_duration, patchContext, networkFilter_listener_port, transport_socket_connect_timeout, filterChain_listener_port, route_idle_timeout, route_max_stream_duration, httpRoute_routeConfiguration_port, vhostname
Peer Authentication: labelselector, tlsmode
Virtual Service: host, destinationhost, port, exportTo, retryon, attempts, timeout
Request Authentication: labelselector, issuer, jwks/jwksUri
Policy Authorization: labelselector, action, hosts, paths, xfccvalues

For more information about the CRDs and the parameters, see Aspen Service Mesh.

2.2.1.10.1 Predeployment Configuration

This section explains the predeployment configuration procedure to install NRF with Service Mesh support.

Creating NRF namespace:

  1. Verify whether the required namespace already exists in the system:
    $ kubectl get namespaces
  2. In the output of the preceding command, check whether the required namespace is available. If it is not available, create the namespace using the following command:
    $ kubectl create namespace <ocnrf_namespace>
    Where,

    <ocnrf_namespace> is the namespace of NRF.

    For example:
    $ kubectl create namespace ocnrf
2.2.1.10.2 Installing Service Mesh Configuration Charts
Perform the following steps to configure the Service Mesh CRs using the Service Mesh Configuration chart:
  1. Download the service mesh chart ocnrf-servicemesh-config-23.4.0.tgz available in the Scripts folder of ocnrf_csar_<release_number>.zip. For downloading the file, see Customizing NRF.
  2. Unzip ocnrf_csar_<release_number>.zip:
    unzip ocnrf_csar_<release_number>.zip
    For example:
    unzip ocnrf_csar_23.4.6.zip
  3. Configure the ocnrf-servicemesh-config-custom-values-23.4.0.yaml file as follows:

    Modify only the "SERVICE-MESH Custom Resource Configuration" section to configure the CRs as needed. For example, to add or modify a ServiceEntry CR, the required attributes and their values must be configured under the "serviceEntries:" section of "SERVICE-MESH Custom Resource Configuration". You can also comment out the CRs that you do not need.

  4. Install the Service Mesh Configuration chart as follows:
    • Run the following helm install command on the namespace where you want to apply the changes:
      helm install <helm-release-name> <charts> --namespace <namespace-name> -f <custom-values.yaml-filename>

      For example,

      helm install ocnrf ocnrf-servicemesh-config-23.4.0.tgz --namespace ocnrf -f ocnrf-servicemesh-config-custom-values-23.4.0.yaml
    • Run the following command to verify whether all the CRs are installed:
      kubectl get <CRD-Name> -n <Namespace>

      For example,

      kubectl get se,dr,peerauthentication,envoyfilter,vs -n ocnrf

      Note:

      To modify the existing CRs or add new CRs, update the ocnrf-servicemesh-config-custom-values-23.4.0.yaml file and run the helm upgrade command.
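      For example, assuming the release name and file names from the installation step above (illustrative only), the upgrade can be run as:

      helm upgrade ocnrf ocnrf-servicemesh-config-23.4.0.tgz --namespace ocnrf -f ocnrf-servicemesh-config-custom-values-23.4.0.yaml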
2.2.1.10.3 Deploying NRF with Service Mesh
  1. Run the following command to create namespace label for auto sidecar injection and to automatically add the sidecars in all pods spawned in NRF namespace:
    $ kubectl label ns <ocnrf_namespace> istio-injection=enabled

    Where,

    <ocnrf_namespace> is the namespace of NRF.

    For example:
    $ kubectl label ns ocnrf istio-injection=enabled
  2. Update the ocnrf-custom-values-23.4.6.yaml file with the following annotations:
    1. Update the global section for adding annotation for the following use cases:
      1. To scrape metrics from NRF pods, add oracle.com/cnc: "true" annotation.

        Note:

        This step is required only if OSO is deployed.
      2. Enable Prometheus to scrape metrics from NRF pods by adding "9090" to traffic.sidecar.istio.io/excludeInboundPorts and traffic.sidecar.istio.io/excludeOutboundPorts annotations.
      3. Enable Coherence to form cluster in ASM based deployment by adding "8095,8096,7,53" to traffic.sidecar.istio.io/excludeInboundPorts and traffic.sidecar.istio.io/excludeOutboundPorts annotations.

        For example:

        global:
          customExtension:
            allResources:
               labels: {}
               annotations: {}
            lbDeployments:
              annotations:
                oracle.com/cnc: "true"
                traffic.sidecar.istio.io/excludeInboundPorts: "9090,8095,8096,7,53"
                traffic.sidecar.istio.io/excludeOutboundPorts: "9090,8095,8096,7,53" 
            nonlbDeployments:
              annotations:
                oracle.com/cnc: "true"
                traffic.sidecar.istio.io/excludeInboundPorts: "9090,8095,8096,7,53"
                traffic.sidecar.istio.io/excludeOutboundPorts: "9090,8095,8096,7,53"
    2. If NF authentication using the TLS certificate feature is enabled, update the following attribute under the global ingressgateway section to true.
      xfccHeaderValidation:
          extract:
            enabled: true
    3. Enable the Service Mesh Flag and check if the serviceMeshCheck flag is set to true in the Global Parameters section.

      Note:

      The serviceMeshCheck parameter is mandatory and the other two parameters are read-only.
      
      # Mandatory: This parameter must be set to "true" when NRF is deployed with the Service Mesh
      serviceMeshCheck: true
      # Mandatory: must be set with the correct URL format "http://127.0.0.1:<istio management port>/quitquitquit" if NRF is deployed with the Service Mesh
      istioSidecarQuitUrl: "http://127.0.0.1:15000/quitquitquit"
      # Mandatory: must be set with the correct URL format "http://127.0.0.1:<istio management port>/ready" if NRF is deployed with the Service Mesh
      istioSidecarReadyUrl: "http://127.0.0.1:15000/ready"
    4. Change Ingress-Gateway Service Type to ClusterIP under the Ingress Gateway Microservice section:
      global:
          # Service Type
          type: ClusterIP
    5. Update the Service Type to ClusterIP under the NRF Configuration Microservice (nrfconfiguration) section:
      nrfconfiguration:
        service:
          # Service Type
          type: ClusterIP
    6. Update the following attribute in the egress-gateway section to enforce the Egress Gateway container to send non-TLS egress requests irrespective of the HTTP scheme value of the message. In a Service Mesh-based deployment, the sidecar container takes care of establishing the TLS connection with the peer.
      egress-gateway:
        # Mandatory: Set this flag to "true" if Service Mesh is present where OCNRF is deployed
        # This enables the Egress Gateway to send http2 (and not https) even if the target scheme is https
        httpRuriOnly: "true"
    7. Update the following sidecar configuration in the perf-info section:
      deployment:
        customExtension:
          labels: {}
          annotations: {
            # Enable this section for service-mesh based installation
            sidecar.istio.io/proxyCPU: "2",
            sidecar.istio.io/proxyCPULimit: "2",
            sidecar.istio.io/proxyMemory: "2Gi",
            sidecar.istio.io/proxyMemoryLimit: "2Gi"
          }
  3. Install NRF using the updated ocnrf-custom-values-23.4.6.yaml file.
2.2.1.10.4 Post-deployment Configuration

This section explains the post-deployment configurations after installing NRF with support for service mesh.

Enable Inter-NF communication

For every new NF participating in call flows where NRF acts as a client, a DestinationRule and a ServiceEntry must be created in the NRF namespace to enable communication.

The following are the inter-NF communications involving NRF:

  • NRF to SLF or UDR communication
  • NRF to other NRF communication (forwarding)
  • NRF to SEPP communication (roaming)

Create the CRs using the ocnrf-servicemesh-config-custom-values-23.4.0.yaml file in the Custom_Templates folder.

2.2.1.10.5 Deploying NRF without Service Mesh

This section describes the steps to redeploy NRF without Service Mesh resources.

  1. To disable Service Mesh, run the following command:
    $ kubectl label ns <ocnrf_namespace> istio-injection=disabled

    Where,

    <ocnrf_namespace> is the namespace of NRF.

    For example:
    $ kubectl label ns ocnrf istio-injection=disabled
  2. Update the annotations in the ocnrf-custom-values-23.4.6.yaml file.
    1. Retain the oracle.com/cnc: "true" annotation to scrape metrics from NRF pods, and remove the traffic.sidecar.istio.io/excludeInboundPorts and traffic.sidecar.istio.io/excludeOutboundPorts annotations.

      Note:

      This step is required only if OSO is deployed.
      For example:
      global:
        customExtension:
          allResources:
             labels: {}
             annotations: {}
          lbDeployments:
            annotations:
              oracle.com/cnc: "true"
            
          nonlbDeployments:
            annotations:
              oracle.com/cnc: "true" 
    2. If NF authentication using the TLS certificate feature was enabled, update the following attributes under the global ingress-gateway section. Set the 'enabled' attribute to false as shown below:
        xfccHeaderValidation:
          extract:
            enabled: false
    3. Disable the Service Mesh flag and check whether the serviceMeshCheck flag is set to false in the Global Parameters section.

      Note:

      The serviceMeshCheck parameter is mandatory and the other two parameters are read-only.
      
      # Mandatory: This parameter must be set to "true" when NRF is deployed with the Service Mesh
      serviceMeshCheck: false
      # Mandatory: must be set with the correct URL format "http://127.0.0.1:<istio management port>/quitquitquit" if NRF is deployed with the Service Mesh
      istioSidecarQuitUrl: "http://127.0.0.1:15000/quitquitquit"
      # Mandatory: must be set with the correct URL format "http://127.0.0.1:<istio management port>/ready" if NRF is deployed with the Service Mesh
      istioSidecarReadyUrl: "http://127.0.0.1:15000/ready"
    4. Change Ingress-Gateway Service Type to LoadBalancer under ingress-gateway's global section:
      global:
          # Service Type
          type: LoadBalancer
    5. Update the Service Type to LoadBalancer under the NRF Configuration Microservice (nrfconfiguration) section:
      nrfconfiguration:
        service:
          # Service Type
          type: LoadBalancer
    6. Update the following attribute in the egress-gateway section so that the Egress Gateway container does not send non-TLS egress requests irrespective of the HTTP scheme value of the message. Without a service mesh, there is no sidecar container to establish the TLS connection with the peer.
      egress-gateway:
        # Mandatory: Set this flag to "true" only if Service Mesh is present where OCNRF is deployed
        # When "true", the Egress Gateway sends http2 (and not https) even if the target scheme is https
        httpRuriOnly: "false"
    7. Remove the sidecar configuration in the perf-info section:
      deployment:
        customExtension:
          labels: {}
          annotations: {}
  3. Upgrade or install NRF using the updated ocnrf-custom-values-23.4.6.yaml file.
2.2.1.10.6 Deleting Service Mesh Resources

This section describes the steps to delete Service Mesh resources.

To delete Service Mesh resources, run the following command:

helm delete <helm-release-name> -n <namespace-name>

Where,

  • <helm-release-name> is the release name used by the helm command. This release name must be the same as the release name used for ServiceMesh.
  • <namespace-name> is the deployment namespace used by Helm command.

To verify if Service Mesh resources are deleted, run the following command:

kubectl get se,dr,peerauthentication,envoyfilter,vs -n ocnrf
2.2.1.11 Creating Secrets for DNS NAPTR - Alternate route service
This section provides information about how to create the secret for the DNS NAPTR alternate route service.

Note:

Perform this procedure only if the DNS NAPTR feature is implemented.
  1. Run the following command to create the secret:
    $ kubectl create secret generic <DNS NAPTR Secret> --from-literal=tsigKey=<tsig key generated of DNS Server> --from-literal=algorithm=<Algorithm used to generate key> --from-literal=keyName=<key-name used while generating key> -n <Namespace of NRF deployment>

    Where,

    <DNS NAPTR Secret> is the secret name for DNS NAPTR.

    <tsig key generated of DNS Server> is the TSIG key generated for the DNS server.

    <Algorithm used to generate key> is the algorithm used to generate the key.

    <key-name used while generating key> is the key name used while generating the key.

    <Namespace of NRF deployment> is the namespace of NRF deployment.

    Note:

    Note down the command used during the creation of the secret. Use the same command to update the secret in the future.
    For example:
    $ kubectl create secret generic tsig-secret --from-literal=tsigKey=kUVdLp2SYshV/mkE985LEePLt3/K4vhM63suWJXA9T6DAl3hJFQQpKAcK5imcIKjI5IVyYk2AJBkq3qtQvRTGw== --from-literal=algorithm=hmac-sha256 --from-literal=keyName=ocnrf-tsig -n ocnrf
  2. Run the following command to verify the secret created:
    $ kubectl describe secret <DNS NAPTR Secret> -n <Namespace of NRF deployment>
    For example:
    $ kubectl describe secret tsig-secret -n ocnrf

Note:

The process to create the DNS server key is at the discretion of the operator.
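For illustration only, one common way to generate a TSIG key is with BIND's tsig-keygen utility; the availability of the utility, the algorithm, and the key name below are assumptions, and the key value printed by the utility is then used as the tsigKey literal when creating the secret:

tsig-keygen -a hmac-sha256 ocnrf-tsig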
2.2.1.12 Configuring Network Policies

Kubernetes network policies allow you to define ingress or egress rules based on Kubernetes resources such as Pod, Namespace, IP, and Port. These rules are selected based on Kubernetes labels in the application.

These network policies enforce access restrictions for all the applicable data flows except the communication from Kubernetes node to pod for invoking container probe.

Note:

Configuring network policy is optional. Based on the security requirements, network policy can be configured.

For more information about the network policy, see https://kubernetes.io/docs/concepts/services-networking/network-policies/.

Note:

  • If the traffic is blocked or unblocked between the pods even after applying network policies, check if any existing policy is impacting the same pod or set of pods that might alter the overall cumulative behavior.
  • If you change the default ports of services such as Prometheus, Database, or Jaeger, or if the Ingress or Egress Gateway names are overridden, update them in the corresponding network policies.
2.2.1.12.1 Installing Network Policies

Prerequisite

Network Policies are implemented by using the network plug-in. To use network policies, you must be using a networking solution that supports Network Policy.

Note:

For a fresh installation, it is recommended to install Network Policies before installing NRF. However, if NRF is already installed, you can still install the Network Policies.
Following is the procedure for installing network policy:
  1. Open the ocnrf-network-policy-custom-values-23.4.6.yaml file provided in the release package. For downloading the file, see Downloading the NRF package.
  2. The file is provided with the default network policies. If required, update the ocnrf-network-policy-custom-values-23.4.6.yaml file. For more information about the parameters, see Table 2-21.
  3. Run the following command to install the network policies:
    helm install <helm-release-name> <charts> -n <namespace> -f <custom-value-file>

    Where,

    <helm-release-name> is the Helm release name of the ocnrf-network-policy.

    <charts> is the chart to deploy the network policy.

    <custom-value-file> is the custom values file of the ocnrf-network-policy.

    <namespace> is the NRF namespace.

    Sample command:
    helm install ocnrf-network-policy ocnrf-network-policy -n ocnrf -f ocnrf-network-policy-custom-values-23.4.6.yaml

Note:

  • The connections created before installing network policy are not impacted by the new network policy. Only the new connections are impacted.
  • If you are using ATS suite along with network policies, it is required to install the NRF and ATS in the same namespace.
  • While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.
2.2.1.12.2 Upgrading Network Policies
To add, delete, or update network policy:
  1. Modify the ocnrf-network-policy-custom-values-23.4.6.yaml file to add, update, or delete network policies.
  2. Run the following command to upgrade the network policies:
    helm upgrade <helm-release-name> <charts> -n <namespace> -f <values.yaml>
    Sample command:
    helm upgrade ocnrf-network-policy ocnrf-network-policy -n ocnrf -f ocnrf-network-policy-custom-values-23.4.6.yaml

Note:

While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.
2.2.1.12.3 Verifying Network Policies

Run the following command to verify that the network policies have been applied successfully:

kubectl get networkpolicy -n <namespace>

Where,

<namespace> is the NRF namespace.

Sample command:
kubectl get networkpolicy -n ocnrf
The following is a sample output:
NAME                          POD-SELECTOR                              AGE
allow-egress-database         app.kubernetes.io/part-of=ocnrf           21h
allow-egress-dns              app.kubernetes.io/part-of=ocnrf           21h
allow-egress-jaeger           app.kubernetes.io/part-of=ocnrf           21h
allow-egress-k8-api           app.kubernetes.io/part-of=ocnrf           21h
allow-egress-sbi              app.kubernetes.io/name=egressgateway      21h
allow-egress-to-nrf-pods      app.kubernetes.io/part-of=ocnrf           21h
allow-from-node-port          app=ocats-nrf                             21h
allow-ingress-from-console    app.kubernetes.io/name=nrfconfiguration   21h
allow-ingress-from-nrf-pods   app.kubernetes.io/part-of=ocnrf           21h
allow-ingress-prometheus      app.kubernetes.io/part-of=ocnrf           21h
allow-ingress-sbi             app.kubernetes.io/name=ingressgateway     21h
deny-egress-all               app.kubernetes.io/part-of=ocnrf           21h
deny-ingress-all              app.kubernetes.io/part-of=ocnrf           21h
2.2.1.12.4 Uninstalling Network Policies
Run the following command to uninstall the network policies:
helm uninstall <helm-release-name> -n <namespace>
Sample command:
helm uninstall ocnrf-network-policy -n ocnrf

Note:

While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.
2.2.1.12.5 Configuration Parameters for Network Policies

Table 2-21 Supported Kubernetes Resource for Configuring Network Policy

Parameter: apiVersion
Description: This is a mandatory parameter. It indicates the Kubernetes API version for access control. Note: This is the supported API version for network policy and is a read-only parameter.
Default Value: networking.k8s.io/v1

Parameter: kind
Description: This is a mandatory parameter. It indicates the REST resource that this object represents. Note: This is a read-only parameter.
Default Value: NetworkPolicy

Table 2-22 Configuration Parameters for Network Policy

Parameter: metadata.name
Description: This is a mandatory parameter. It indicates a unique name for the network policy.
Default Value: {{ .metadata.name }}

Parameter: spec.{}
Description: This is a mandatory parameter. It consists of all the information needed to define a particular network policy in the given namespace. Note: NRF supports the spec parameters defined in Kubernetes Resource Category.

For more information about this functionality, see Network Policies in Oracle Communications Cloud Native Core, Network Repository Function User Guide.
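For illustration only, a network policy entry defined in the custom values file ultimately renders a standard Kubernetes NetworkPolicy similar to the following sketch; the policy name matches the sample output shown earlier, while the pod selector and port are assumptions:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-prometheus
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/part-of: ocnrf
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 9090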

2.2.2 Installation Tasks

This section provides installation procedures to install Oracle Communications Cloud Native Core, Network Repository Function (NRF) using Command Line Interface (CLI). To install NRF using CDCS, see Oracle Communications CD Control Server User Guide.

This section explains how to install NRF.

Note:

  • Before installing NRF, you must complete Prerequisites and Preinstallation Tasks.
  • In a georedundant deployment, perform the steps explained in this section on all the georedundant sites.
2.2.2.1 Installing NRF Package

To install the NRF package, perform the following steps:

  1. Run the following command to access the extracted package:
    cd <ReleaseName>_csar_<Releasenumber>
    For example:
    cd ocnrf_csar_23.4.6
  2. Customize the ocnrf-custom-values-23.4.6.yaml file with the required deployment parameters. See Customizing NRF chapter to customize the file. For more information about predeployment parameter configurations, see Preinstallation Tasks.

    Note:

    • In case of georedundant deployments, configure nfInstanceId uniquely for each NRF site.
  3. (Optional) Customize the ocnrf-servicemesh-config-custom-values-23.4.0.yaml with the required deployment parameters in case you are creating DestinationRule and service entry using the yaml file. See Configuring NRF to Support ASM chapter for the sample template.
  4. (Optional) Run the following command to create DestinationRule and service entry using the yaml file:
    helm install <helm-release-name> <charts> --namespace <namespace-name> -f <custom-values.yaml-filename>
    Example:
    helm install ocnrf ocnrf --namespace ocnrf -f ocnrf-servicemesh-config-custom-values-23.4.0.yaml
  5. Run the following command to install NRF:
    1. Using local helm chart:
      helm install <helm-release-name> <charts> --namespace <namespace-name> -f <custom-values.yaml-filename>
      Example:
      helm install ocnrf ocnrf --namespace ocnrf -f ocnrf-custom-values-23.4.6.yaml
    2. Using chart from helm repo:
      helm install <helm-release-name> <helm_repo/helm_chart> --version <chart_version> --namespace <namespace-name> -f ocnrf-custom-values-<release_number>.yaml
      Example:
      helm install ocnrf ocnrf-helm-repo/ocnrf --version 23.4.6 --namespace ocnrf -f ocnrf-custom-values-23.4.6.yaml

      Where,

      helm_repo is the location where helm charts are stored.

      helm_chart is the chart to deploy the NRF.

      helm-release-name is the release name used by helm command.

      Note:

      <helm-release-name> must not exceed 20 characters.

      namespace-name is the deployment namespace used by helm command.

      ocnrf-custom-values-<release_number>.yaml is the name of the custom values yaml file (including location).

    Note:

    timeout duration: The timeout duration is an optional parameter. It specifies the time to wait for any individual Kubernetes operation (such as Jobs for hooks). The default value is 5m0s in Helm 3. If the helm install command fails to create a Kubernetes object at any point, it internally calls a purge to delete the deployment after the timeout value (default: 300s). The timeout value is not applicable to the overall installation procedure but to the automatic purge on installation failure.
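    For example, to allow more time for the hooks, the --timeout flag can be appended to the helm install command; the duration shown here is only an illustration:
    helm install ocnrf ocnrf --namespace ocnrf -f ocnrf-custom-values-23.4.6.yaml --timeout 10m0s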

Caution:

Do not exit from the helm install command manually. After running the helm install command, it takes some time to install all the services. Do not press "Ctrl+C" to exit from the helm install command because it may lead to anomalous behavior.

Note:

If you want to add a site in georedundant deployment, see Adding a Site in Georedundant Deployment.

2.2.3 Postinstallation Tasks

This section explains the postinstallation tasks for NRF.

2.2.3.1 Verifying Installation

To verify the installation:

  1. Run the following command to check the installation status:
    helm status <helm-release> -n <namespace>

    Where,

    <helm-release> is the Helm release name of NRF.

    <namespace> is the namespace of NRF deployment.

    For example:
    helm status ocnrf -n ocnrf

    If the deployment is successful, then the STATUS is displayed as deployed.

    Sample output:
    NAME: ocnrf
    LAST DEPLOYED: Fri Aug 15 10:08:03 2023
    NAMESPACE: ocnrf
    STATUS: deployed
    REVISION: 1
    
  2. Run the following command to verify if the pods are up and active:
    $ kubectl get pods -n <namespace>

    Where,

    <namespace> is the namespace of NRF deployment.

    The STATUS column of all the pods must be 'Running'.

    The READY column of all the pods must be n/n, where n is the number of containers in the pod.

    For example:

    $ kubectl get pods -n ocnrf
    NAME                                     READY   STATUS    RESTARTS   AGE
    ocnrf-alternate-route-7dcf9b9c5d-d8q75   1/1     Running   0          2m56s
    ocnrf-alternate-route-7dcf9b9c5d-x89gx   1/1     Running   0          2m1s
    ocnrf-appinfo-79b6c79746-dvvmp           1/1     Running   0          2m54s
    ocnrf-appinfo-79b6c79746-v698l           1/1     Running   0          2m54s
    ocnrf-egressgateway-84fbcd8748-klm8z     1/1     Running   0          2m1s
    ocnrf-egressgateway-84fbcd8748-zp4qk     1/1     Running   0          2m52s
    ocnrf-ingressgateway-bb6dfc8f9-6t6h8     1/1     Running   0          2m49s
    ocnrf-ingressgateway-bb6dfc8f9-zxgtq     1/1     Running   0          117s
    ocnrf-nfaccesstoken-55dc8f6745-flh4w     1/1     Running   0          2m1s
    ocnrf-nfaccesstoken-55dc8f6745-gq6gn     1/1     Running   0          2m45s
    ocnrf-nfdiscovery-68777b4556-gd6wf       1/1     Running   0          2m43s
    ocnrf-nfdiscovery-68777b4556-nqp5t       1/1     Running   0          2m1s
    ocnrf-nfregistration-5b8c8b7dd5-6qq8w    1/1     Running   0          2m41s
    ocnrf-nfregistration-5b8c8b7dd5-pvqtr    1/1     Running   0          2m
    ocnrf-nfsubscription-84c7d48b95-z6jlk    1/1     Running   0          2m39s
    ocnrf-nfsubscription-84c7d48b95-zq4bl    1/1     Running   0          2m1s
    ocnrf-nrfartisan-567c6dc8-bpz7t          1/1     Running   0          2m39s
    ocnrf-nrfauditor-6fdf4846c5-wjpfl        1/1     Running   0          2m37s
    ocnrf-nrfauditor-6fdf4846c5-zxyz         1/1     Running   0          2m37s
    ocnrf-nrfconfiguration-5f5c476d-rj6w6    1/1     Running   0          2m35s
    ocnrf-performance-65587f5d4f-b5cdf       1/1     Running   0          2m33s
    ocnrf-performance-65587f5d4f-fw8fc       1/1     Running   0          2m31s
  3. Run the following command to verify if the services are deployed and active:
    kubectl -n <namespace> get services

    Where,

    <namespace> is the namespace of NRF deployment.

    For example:

    kubectl -n ocnrf get services

Note:

If an external load balancer is used, EXTERNAL-IP address is assigned to ocnrf-ingressgateway.

If the installation is unsuccessful or the status of all the pods is not in RUNNING state, perform the troubleshooting steps provided in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
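As an optional convenience, the readiness of all NRF pods can also be checked with a single kubectl wait command; the label selector below is an assumption based on the labels shown in the network policy examples and may differ in your deployment:

kubectl wait --for=condition=ready pod -l app.kubernetes.io/part-of=ocnrf -n ocnrf --timeout=300s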

2.2.3.2 Performing Helm Test

This section describes how to perform a sanity check of the NRF installation through the Helm test. The pods checked are based on the namespace and the label selector configured in the Helm test configuration.

Helm Test is a feature that validates successful installation of NRF and determines if the NF is ready to take traffic.

Note:

  • Helm test can be performed only on Helm3.
  • Helm test expects all the pods of a given microservice to be in the READY state for a successful result.
To perform the Helm test:
  1. Configure the Helm test configurations under the Helm Test Global Parameters section of the ocnrf-custom-values-23.4.6.yaml file.
  2. Run the following command to perform the Helm test:
    helm test <helm-release-name> -n <namespace>

    Where,

    <helm-release-name> is the release name.

    <namespace> is the deployment namespace where NRF is installed.

    For example:

    helm test ocnrf -n ocnrf

    Sample output:
    NAME: ocnrf
    LAST DEPLOYED: Fri Aug 15 10:08:03 2023
    NAMESPACE: ocnrf
    STATUS: deployed
    REVISION: 1
    TEST SUITE:     ocnrf-test
    Last Started:   Fri Aug 15 10:41:25 2023
    Last Completed: Fri Aug 15 10:41:34 2023
    Phase:          Succeeded
    

If the Helm test fails, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.

2.2.3.3 Taking a Backup

Take a backup of the following files, which are required during fault recovery:

  • Updated ocnrf-custom-values-23.4.6.yaml file.
  • Updated ocnrf-servicemesh-config-custom-values-23.4.0.yaml.
  • Updated helm charts.
  • Updated ocnrf-network-policy-custom-values-23.4.6.yaml.
  • Secrets, certificates, CA root, and keys that are used during installation.