4 Upgrading NRF

This section provides information on how to upgrade an existing Oracle Communications Cloud Native Core, Network Repository Function (NRF) deployment to the latest release using CDCS or CLI procedures as outlined in the following table:

Note:

  • In a georedundant deployment, perform the steps explained in this section on all the georedundant sites.
  • Before upgrading NRF, ensure that the required cnDBTier resources are available. For more information about the cnDBTier resource changes, see cnDBTier Resource Requirement.
  • For NRF georedundant deployments, the difference between the NRF release versions for all the georedundant sites cannot be more than 1 release.

    For example, in a three-site NRF deployment, all three sites start at release version N. During the upgrade, site 1 and site 2 are upgraded to version N+1 while site 3 is not upgraded yet. In this state, before site 3 is upgraded to N+1, upgrading site 1 or site 2 from version N+1 to N+2 is not supported, because the difference between the NRF release versions across the georedundant sites cannot be more than one release.

    For more information about the cnDBTier georedundant deployments, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

    For more information about the CNC Console georedundant deployments, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.

  • While upgrading cnDBTier to 23.4.x, the recommended value of the global.additionalndbconfigurations.ndb.HeartbeatIntervalDbDb parameter is 1250. This value must be increased in two steps.
    If the current value of the global.additionalndbconfigurations.ndb.HeartbeatIntervalDbDb parameter is 500, perform the following steps to increase it to 1250:
    • Modify the value of the global.additionalndbconfigurations.ndb.HeartbeatIntervalDbDb parameter to 900 in the ocnrf_dbtier_23.4.6_custom_values_23.4.6.yaml file and perform the cnDBTier upgrade.
    • Once the upgrade is completed successfully, modify the value of the global.additionalndbconfigurations.ndb.HeartbeatIntervalDbDb parameter to 1250 in the ocnrf_dbtier_23.4.6_custom_values_23.4.6.yaml file and perform the cnDBTier upgrade again.
  • If NRF must accept 3GPP-compliant nfSetId values, configure the global.deprecatedList parameter appropriately. For more information about the global.deprecatedList parameter, see Global Parameters.
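The two-step HeartbeatIntervalDbDb change described in the note above is a single-value edit between two cnDBTier upgrades. The fragment below is an illustrative sketch of where the parameter sits in the cnDBTier custom values file; confirm the exact structure against your cnDBTier guide.

```yaml
global:
  additionalndbconfigurations:
    ndb:
      # Step 1: set 900 and perform the first cnDBTier upgrade.
      # Step 2: set 1250 and perform the second cnDBTier upgrade.
      HeartbeatIntervalDbDb: 900
```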

Table 4-1 NRF Upgrade Sequence

Upgrade Sequence      Applicable for CDCS                                       Applicable for CLI
Preupgrade Tasks      See Oracle Communications CD Control Server User Guide    Yes
Upgrade Tasks         See Oracle Communications CD Control Server User Guide    Yes

4.1 Supported Upgrade Paths

The following table lists the supported upgrade paths for NRF.

Table 4-2 Supported Upgrade Paths

Source Release Target Release
23.4.x 23.4.6
23.3.x 23.4.6

Note:

NRF must be upgraded before upgrading cnDBTier.

4.2 Upgrade Strategy

NRF supports in-service upgrades. The supported upgrade strategy is RollingUpdate. The rolling update strategy is a gradual process that allows you to update your Kubernetes system with only a minor effect on performance and no downtime. The advantage of the rolling update strategy is that the update is applied pod by pod, so the rest of the system can remain active.

Note:

It is recommended to perform the in-service upgrade during a maintenance window, with the traffic rate at 25% of the configured traffic or below. The traffic failure rate is expected to stay below 5% during the upgrade and to recover fully after the upgrade.

The following engineering configuration parameters are used to define the upgrade strategy:
  • upgradeStrategy parameter indicates the update strategy used in NRF.
  • maxUnavailable parameter determines the maximum number of pods that can be unavailable during upgrade.
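These parameters correspond to the standard Kubernetes Deployment rolling update settings. The fragment below is an illustrative sketch (not the product chart) of how a 25% maxUnavailable value maps onto a Deployment spec:

```yaml
spec:
  strategy:
    type: RollingUpdate          # upgradeStrategy
    rollingUpdate:
      maxUnavailable: 25%        # at most 25% of this microservice's pods are down at once
```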

Table 4-3 Predefined Upgrade Strategy Value

Microservice Upgrade Value (maxUnavailable)
<helm-release-name>-nfregistration 25%
<helm-release-name>-nfsubscription 25%
<helm-release-name>-nfdiscovery 25%
<helm-release-name>-nrfauditor 25%
<helm-release-name>-nrfconfiguration Not Applicable

Note: maxSurge attribute is used for this microservice.

<helm-release-name>-appinfo 50%
<helm-release-name>-nfaccesstoken 25%
<helm-release-name>-nrfartisan Not Applicable

Note: maxSurge attribute is used for this microservice.

<helm-release-name>-alternate_route 25%
<helm-release-name>-egressgateway 25%
<helm-release-name>-ingressgateway 25%
<helm-release-name>-performance 50%

Note:

<helm-release-name> is the Helm release name. For example, if the Helm release name is "ocnrf", then the nrfartisan microservice name is "ocnrf-nrfartisan".

Note:

When NRF is deployed with OCCM, follow the specific upgrade sequence as mentioned in the Oracle Communications, Cloud Native Core Solution Upgrade Guide.

4.3 Preupgrade Tasks

This section provides information about preupgrade tasks to be performed before upgrading NRF:

  1. Keep the current custom_values.yaml file as a backup, that is, ocnrf-custom-values-23.3.x.yaml when upgrading to 23.4.6.
  2. Update the new custom_values.yaml file defined for the target NRF release.
  3. If you are upgrading from version 23.2.x to target NRF release, follow the steps below:
    • Before starting the procedure to upgrade, perform the following tasks:
      • Perform one of the following scenarios to enable or disable tracing, as required, when upgrading from 23.2.x to the target NRF release:
        • When tracing was enabled in 23.2.x:
          • To enable tracing in the target NRF release, perform the following:
              1. Ensure that the Jaeger collector service is up and running inside OCCNE-Infra, with the port specified in ocnrf-custom-values-23.3.x.yaml.
              2. Delete the old configuration.
              3. Ensure that the following configuration is added for 23.4.x in ocnrf-custom-values-23.3.x.yaml.
                # enable Jaeger telemetry tracing
                jaegerTelemetryTracingEnabled: true
                openTelemetry:
                  jaeger:
                    httpExporter:
                      # Update this configuration when jaeger tracing is enabled.
                      # httpExporter host
                      host: "jaeger-collector.cne-infra"
                      # httpExporter port
                      port: 4318
                    # Jaeger message sampler. Value range: 0 to 1
                    # e.g. Value 0: No Trace will be sent to Jaeger collector
                    # e.g. Value 0.3: 30% of message will be sampled and will be sent to Jaeger collector
                    # e.g. Value 1: 100% of message (i.e. all the messages) will be sampled and will be sent to Jaeger collector
                    probabilisticSampler: 0.5
          • To disable tracing in the target NRF release, update the following parameter in the ocnrf-custom-values-23.3.x.yaml file:
            tracing:
              tracingEnabled: 'false'
        • When tracing was disabled in 23.2.x:
          • To enable tracing in the target NRF release, perform the following:
            1. Ensure that the Jaeger collector service is up and running inside OCCNE-Infra, with the port specified in ocnrf-custom-values-23.3.x.yaml.
            2. Ensure that the following configuration is added in the target ocnrf-custom-values-23.3.x.yaml.
              
              envJaegerCollectorHost: 'occne-tracer-jaeger-collector.occne-infra'
              envJaegerCollectorPort: 4318 # Ensure that this matches the OCCNE-Infra Jaeger collector service port.
              tracing:
                tracingEnabled: 'true'
                tracingSamplerRatio: 0.001
                tracingJdbcEnabled: 'true'
                tracingLogsEnabled: 'false'
          • To disable tracing in 23.4.x, update the following parameter in the ocnrf-custom-values-23.3.x.yaml:
            tracing:
              tracingEnabled: 'false'
  4. Enable enableNrfArtisanService in the Global Parameters if DNS NAPTR must be implemented and it was not enabled in the previous release.
  5. Enable performanceServiceEnable in the Global Parameters if Perf-Info must be implemented and it was not enabled in the previous release.
  6. Enable leaderElectionDbName in the Global Parameters if the leaderElectionDB database must be implemented and it was not enabled in the previous release.
  7. Enable alternateRouteServiceEnable in the Global Parameters if the alternate route service must be implemented and it was not enabled in the previous release.
  8. Before starting the upgrade, take a manual backup of the NRF REST-based configuration. This helps if the preupgrade data has to be restored.
  9. Before the NRF upgrade starts, validate that no database backup operation is in progress and that no scheduled backups are planned during the upgrade window.
    1. Ensure that no database backup operation is running for each node-id.
      kubectl -n <namespace> exec -it ndbmgmd-0 -c mysqlndbcluster -- ndb_mgm -e "<ndbmtd-node-id> REPORT BACKUPSTATUS"
      If the output is Node <node-id>: Backup not started for all the data nodes, then no backups are currently in progress.

      Sample output:

      Node 1: Backup not started
    2. Validate that no scheduled backups start during the NRF upgrade. Get the cronjobExpression that is set for the backup manager service in the cnDBTier custom_values.yaml file.
      /db-backup-manager-svc/scheduler/cronjobExpression: "0 0 */7 * *"

      Run the following command using the cronjobExpression:

      kubectl exec -it <db-backup-manager-svc-pod> -n <namespace> -c db-backup-manager-svc -- python -c "import os;from datetime import datetime; from croniter import croniter; next_time = datetime.fromtimestamp(croniter('0 0 */7 * *', datetime.now()).get_next()); print(f'Next cron time: {next_time}, Time remaining: {next_time - datetime.now()}')"

      Sample output:

      Next cron time: 2025-01-22 00:00:00, Time remaining: 4 days, 14:37:56.783569
  10. Before upgrading, perform sanity check using Helm test. See the Performing Helm Test section for the Helm test procedure.
  11. Install or upgrade the network policies, if applicable. For more information, see Configuring Network Policies.

    For Rest API configuration details, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
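The per-node backup-status check in step 9 can be scripted against the REPORT BACKUPSTATUS output. The following sketch is illustrative (the is_backup_idle helper is not part of the product tooling); pipe the collected ndb_mgm output into it:

```shell
# Exits 0 (idle) when every "Node <id>:" line reports "Backup not started",
# and 1 when any node reports a backup in progress.
is_backup_idle() {
  ! grep -E '^Node [0-9]+:' | grep -v 'Backup not started' | grep -q .
}

# Example with captured output for two data nodes:
printf 'Node 1: Backup not started\nNode 2: Backup not started\n' | is_backup_idle \
  && echo "No backup in progress" \
  || echo "Backup running - postpone the upgrade"
```

In a live check, replace the printf with the concatenated output of the kubectl/ndb_mgm commands from step 9.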

4.4 Upgrade Tasks

This section provides information about the sequence of tasks to be performed for upgrading an existing NRF deployment.

Helm Upgrade

Upgrading an existing deployment replaces the running containers and pods with new containers and pods. A pod is replaced only if its configuration has changed. Unless there is a change in the service configuration of a microservice, the service endpoints (for example, the ClusterIP) remain unchanged.

Upgrade Procedure

Caution:

  • Ensure that no database backup is in progress and that no scheduled backups are planned during the NRF upgrade. For more information, see step 9 in the Preupgrade Tasks section.
  • Do not perform any configuration changes during the upgrade.
  • Do not exit from the helm upgrade command manually. After running the helm upgrade command, it takes some time (depending on the number of pods to upgrade) to upgrade all of the services. Do not press "Ctrl+C" to come out of the helm upgrade command, as it may lead to anomalous behavior.
  1. Untar the latest NRF package and, if required, re-tag and push the images to the registry. For more information, see Downloading the NRF package and Pushing the Images to Customer Docker Registry.
  2. Modify the ocnrf-custom-values-23.4.6.yaml file parameters as per site requirement.
  3. Run the following commands to create the leaderElectionDb database and grant privileges on it, if it does not exist:
    1. Create the leaderElectionDb database:
      CREATE DATABASE IF NOT EXISTS leaderElectionDB CHARACTER SET utf8;
    2. Grant the NRF privileged user permission to leaderElectionDb database:
      GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON leaderElectionDB.* TO 'nrfPrivilegedUsr'@'%';
      
      FLUSH PRIVILEGES;
  4. Run the following commands to update the secret with leaderElectionDb database details before upgrade:
    1. Run the following command:
      echo -n <leaderElectionDB_name> | base64
      For example:
      echo -n "leaderElectionDB" | base64
      bGVhZGVyRWxlY3Rpb25EQg==

      Note:

      Note down the output of this command.
    2. Run the following command to update the secret for NRF privileged user:
      kubectl patch secret -n <namespace> <secret-name> -p="{\"data\":{\"<leaderElectionDB_literal_key_name>\": \"<leaderElectionDB_literal_value>\"}}" -v=1
      For example:
      kubectl patch secret -n ocnrf privilegeduser-secret -p="{\"data\":{\"leaderElectionDbName\": \"bGVhZGVyRWxlY3Rpb25EQg==\"}}" -v=1

      Note:

      The leaderElectionDbName parameter must be as defined in the Global Parameters section.
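Before patching the secret in step 4, you can sanity-check the Base64 value by decoding it back; the value below is the example from step 4.

```shell
# Encode the database name exactly as in step 4 (printf avoids a trailing newline).
encoded=$(printf '%s' 'leaderElectionDB' | base64)
echo "$encoded"
# bGVhZGVyRWxlY3Rpb25EQg==

# Decode it back to confirm the secret will carry the intended value.
printf '%s' "$encoded" | base64 -d
# leaderElectionDB
```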
  5. Run the following commands to grant INDEX privilege to NRF privileged user:
    GRANT INDEX ON <NRF Application Database>.* TO '<NRF Privileged User Name>'@'%';
    FLUSH PRIVILEGES;
    For example:
    GRANT INDEX ON nrfApplicationDB.* TO 'nrfPrivilegedUsr'@'%';
    FLUSH PRIVILEGES; 
  6. Run the following command to verify the INDEX grant:
    SHOW GRANTS FOR '<NRF Privileged User Name>'@'%';
    For example:
    SHOW GRANTS FOR 'nrfPrivilegedUsr'@'%';
    Sample output:
    SHOW GRANTS FOR 'nrfPrivilegedUsr'@'%';
    +-------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | Grants for nrfPrivilegedUsr@%                                                                                                                                     |
    +-------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | GRANT USAGE ON *.* TO `nrfPrivilegedUsr`@`%`                                                                                                                      |
    | GRANT NDB_STORED_USER ON *.* TO `nrfPrivilegedUsr`@`%` WITH GRANT OPTION                                                                                          |
    | GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON `commonConfigurationDB`.* TO `nrfPrivilegedUsr`@`%`   |
    | GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON `nrfApplicationDB`.* TO `nrfPrivilegedUsr`@`%` |
    | GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON `nrfNetworkDB`.* TO `nrfPrivilegedUsr`@`%`            |
    +-------------------------------------------------------------------------------------------------------------------------------------------------------------------+
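If you want to script the verification in step 6, a simple filter over the SHOW GRANTS output is enough. The has_index_grant helper below is illustrative, not part of the product:

```shell
# Exits 0 when an INDEX privilege on nrfApplicationDB appears on stdin.
has_index_grant() {
  grep -q 'INDEX.*ON `nrfApplicationDB`'
}

# Example against the relevant grant line from the sample output above:
printf 'GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON `nrfApplicationDB`.* TO `nrfPrivilegedUsr`@`%%`\n' \
  | has_index_grant && echo "INDEX grant present"
```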
  7. Create secrets for DNS NAPTR alternate route service as described in Creating Secrets for DNS NAPTR - Alternate route service section.

    Note:

    Skip this step, if DNS NAPTR feature is not required.
  8. Run the following command to upgrade an existing NRF deployment:

    Note:

    If you are upgrading an existing NRF deployment with georedundancy feature enabled, ensure that you configure dbMonitorSvcHost, dbMonitorSvcPort, and siteNameToNrfInstanceIdMapping parameters before running helm upgrade to NRF 23.4.6. For more information about the parameters, see Global Parameters.
    1. Using local Helm chart:
      $ helm upgrade <release_name> <helm_chart> -f <ocnrf_customized_values.yaml> --namespace <namespace-name> 
      For example:
      $ helm upgrade ocnrf ocnrf-23.4.6.tgz -f ocnrf-custom-values-23.4.6.yaml --namespace ocnrf
    2. Using chart from Helm repo:
      $ helm upgrade <release_name> <helm_repo/helm_chart> --version <chart_version> -f <ocnrf_customized_values.yaml> --namespace <namespace-name>
      For example:
      $ helm upgrade ocnrf ocnrf-helm-repo/ocnrf --version 23.4.6 -f ocnrf-custom-values-23.4.6.yaml --namespace ocnrf 

      Caution:

      Do not exit from the Helm upgrade command manually. After running the Helm upgrade command, wait until all of the services are upgraded. Do not press "Ctrl+C" to come out of the Helm upgrade command, as it may lead to anomalous behavior.
  9. Run the following command to check the status of the upgrade:
    $ helm status <release_name> -n <namespace-name>

    For example: $ helm status ocnrf -n ocnrf

    Sample output of Helm status:

    
    $ helm status ocnrf -n ocnrf
    NAME: ocnrf
    LAST DEPLOYED: Mon Dec 16 11:49:46 2024
    NAMESPACE: ocnrf
    STATUS: deployed
    REVISION: 2
    NOTES:
    # Copyright 2020 (C), Oracle and/or its affiliates. All rights reserved.
    Thank you for installing ocnrf.
    Your release is named ocnrf, Release Revision: 2.
  10. Run the following command to check the history of the upgrade:
    $ helm history <release_name> -n <namespace-name>

    For example: $ helm history ocnrf -n ocnrf

    Sample output of a successful upgrade:
    
    REVISION        UPDATED                         STATUS          CHART            APP VERSION       DESCRIPTION     
    2               Thu Jan  9 06:13:27 2025        superseded      ocnrf-23.4.6      23.4.6       Upgrade complete
    3               Thu Jan  9 06:36:30 2025        superseded      ocnrf-23.3.1      23.3.1          Rollback to 1   
    4               Thu Jan  9 07:21:59 2025        superseded      ocnrf-23.4.6      23.4.6       Upgrade complete
    5               Thu Jan  9 07:43:16 2025        superseded      ocnrf-23.3.1      23.3.1          Rollback to 3   
    6               Thu Jan  9 08:17:48 2025        superseded      ocnrf-23.4.6      23.4.6       Upgrade complete
    7               Thu Jan  9 09:19:31 2025        superseded      ocnrf-23.3.1      23.3.1          Rollback to 3   
    8               Thu Jan  9 10:22:10 2025        superseded      ocnrf-23.4.6      23.4.6       Upgrade complete
    9               Thu Jan  9 10:39:08 2025        superseded      ocnrf-23.3.1      23.3.1          Rollback to 3   
    10              Thu Jan  9 10:57:08 2025        superseded      ocnrf-23.4.6      23.4.6       Upgrade complete
    11              Thu Jan  9 11:06:23 2025        deployed        ocnrf-23.3.1      23.3.1          Rollback to 3   
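To extract the currently live revision from the helm history output programmatically, you can filter on the deployed status. The current_revision helper below is a convenience sketch, not an official Helm feature:

```shell
# Prints the REVISION whose STATUS column reads "deployed".
current_revision() {
  awk '/[[:space:]]deployed[[:space:]]/ {print $1}'
}

# Example against two lines from the sample history above:
printf '2               Thu Jan  9 06:13:27 2025        superseded      ocnrf-23.4.6      23.4.6       Upgrade complete\n11              Thu Jan  9 11:06:23 2025        deployed        ocnrf-23.3.1      23.3.1          Rollback to 3\n' \
  | current_revision
# 11
```

In practice: helm history <release_name> -n <namespace-name> | current_revision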
  11. Perform sanity check using Helm test. See the Performing Helm Test section for the Helm test procedure.
  12. If the upgrade fails, see Upgrade or Rollback Failure in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.

4.5 Postupgrade Tasks

Note:

To automate the lifecycle management of the certificates through OCCM, you can migrate certificates and keys from NRF to OCCM. For more information, see "Introducing OCCM in an Existing NF Deployment" in Oracle Communications Cloud Native Core, Certificate Management User Guide.

You can remove Kubernetes secrets that the current version of NRF does not use by checking the ocnrf_custom_values_23.4.6.yaml file. Before deleting a secret, ensure that there is no plan to roll back to the NRF version that uses it; otherwise, the rollback will fail.

Installation and Upgrade Considerations

  • Upon 23.4.0 installation or upgrade, the CDS microservice pod is deployed by default. The NRF core microservices query the CDS for state data information. If CDS is not available, the NRF core microservices fall back to the cnDBTier for service operations.
  • Release 23.3.x microservice pods retrieve the state data from the cnDBTier for processing the service operations.
  • Release 23.4.x microservice pods query the CDS to retrieve the state data for processing the service operations.
  • The NfInstances table is updated to include a new nfProfileUpdateTimestamp column. The column is used by NRF 23.4.x and above. This column is ignored for the previous releases.
  • In case of in-service upgrade:
    • CDS updates its in-memory cache with state data. The readiness probes of the CDS are configured to succeed only after at least one cache update attempt is performed. The cache is updated with the local NRF set data from the cnDBTier and, if the NRF Growth feature is enabled, with the remote NRF set data.
    • During the above-mentioned upgrade scenario, until the CDS pod is available, the previous and new release pods of other microservices will query the old release pod of the CDS for the state data. This ensures that there are no in-service traffic failures during the upgrade.