5 Upgrading Policy

This chapter provides information about upgrading an Oracle Communications Cloud Native Core, Converged Policy (Policy) deployment to the latest release. It is recommended to perform the Policy upgrade in a specific order. For more information about the upgrade order, see Oracle Communications Cloud Native Core, Solution Upgrade Guide.

Note:

In a georedundant deployment, perform the steps explained in this section on all the georedundant sites.

5.1 Supported Upgrade Paths

The following table lists the supported upgrade paths for Policy:

Table 5-1 Supported Upgrade Paths

Source Release          Target Release
23.4.x, 23.2.x          23.4.9

Note:

cnDBTier must be upgraded before upgrading Policy.

5.2 Upgrade Strategy

Policy supports in-service upgrade. The supported upgrade strategy is RollingUpdate. The rolling update strategy is a gradual process that allows you to update your Kubernetes system with only a minor effect on performance and no downtime. The advantage of the rolling update strategy is that the update is applied pod by pod, so the rest of the system remains active.

The following engineering configuration parameters define the upgrade strategy:

  • upgradeStrategy indicates the update strategy used in Policy.
  • maxUnavailable determines the maximum number of pods that can be unavailable during the upgrade (see the sketch after this list).

    For more information about maxUnavailable for each microservice, see PodDisruptionBudget Configuration.
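
These parameters map onto the standard Kubernetes Deployment rolling-update strategy. The following fragment is illustrative only; the exact keys and default values in the Policy values file can differ per release and should be taken from the delivered custom values template.

  strategy:
    type: RollingUpdate        # upgradeStrategy
    rollingUpdate:
      maxUnavailable: 25%      # maximum pods that may be down during the upgrade
      maxSurge: 25%            # extra pods that may be created above the desired count (assumption)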

5.3 Preupgrade Tasks

This section provides information about preupgrade tasks to be performed before upgrading Policy.

  1. Keep a backup of the current custom_values.yaml file.
  2. Update the new custom_values.yaml file for target Policy release. For details on customizing this file, see Customizing Policy.
  3. If you are upgrading from version 23.2.x to 23.4.x, perform the following tasks before starting the procedure to upgrade Policy:
    • Enable or disable tracing as per your requirement. The following scenarios apply (a quick check of the collector service is shown after these scenarios):
      • When tracing was enabled in 23.2.x:
        • To enable tracing in 23.4.x:
          1. Ensure that envJaegerCollectorHost is up and running inside OCCNE-Infra, with the port specified in values.yaml.
          2. Delete the old configuration.
          3. Ensure that the following configuration has been added to the 23.4.x custom_values.yaml file:

            envJaegerCollectorHost: 'occne-tracer-jaeger-collector.occne-infra'
            envJaegerCollectorPort: 4318   # Ensure this matches the OCCNE-Infra Jaeger collector service port.
            tracing:
              tracingEnabled: 'true'
              tracingSamplerRatio: 0.001
              tracingJdbcEnabled: 'true'
              tracingLogsEnabled: 'false'
        • To disable tracing in 23.4.x, update the following parameter in the custom_values.yaml:
          tracing:
            tracingEnabled: 'false'
      • When tracing was disabled in 23.2.x:
        • To enable tracing in 23.4.x:
          1. Ensure that envJaegerCollectorHost is up and running inside OCCNE-Infra, with the port specified in values.yaml.
          2. Ensure that the following configuration has been added to the 23.4.x custom_values.yaml file:

            envJaegerCollectorHost: 'occne-tracer-jaeger-collector.occne-infra'
            envJaegerCollectorPort: 4318   # Ensure this matches the OCCNE-Infra Jaeger collector service port.
            tracing:
              tracingEnabled: 'true'
              tracingSamplerRatio: 0.001
              tracingJdbcEnabled: 'true'
              tracingLogsEnabled: 'false'
        • To disable tracing in 23.4.x, update the following parameter in the custom_values.yaml:
          tracing:
            tracingEnabled: 'false'
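      As a quick check for the "up and running" steps above, you can query the collector Service directly. A minimal sketch, assuming the service name and namespace used in the sample configuration:

        kubectl get svc occne-tracer-jaeger-collector -n occne-infra
        kubectl get svc occne-tracer-jaeger-collector -n occne-infra -o jsonpath='{.spec.ports[*].port}'

      The second command prints the exposed port numbers so that you can confirm envJaegerCollectorPort matches one of them.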
    • The NRF Client Database (occnp_nrf_client) is introduced in release 23.4.0. If you are upgrading from version 23.2.x to 23.4.x, follow the steps below to manually create the NRF Client Database before upgrading:
      1. Run the following command to create a new NRF Client Database.

        CREATE DATABASE IF NOT EXISTS <DB Name> CHARACTER SET utf8;
        For example:
        CREATE DATABASE IF NOT EXISTS occnp_nrf_client CHARACTER SET utf8;

        Note:

        • Ensure that you use the same database name while creating the database as the one configured in the global parameters of the custom_values.yaml file.

          nrfClientDbName: 'occnp_nrf_client'
        • For multisite georedundant setups, set the NRF Client Database name (occnp_nrf_client) to a unique value for each site in the custom_values.yaml file (see the combined sketch after these steps). For example, change the values as follows for a two-site and three-site setup, respectively:
          For Two-Site:
          • Set the value of global.nrfClientDbName to occnp_nrf_client_site1 and occnp_nrf_client_site2 for Site 1 and Site 2, respectively.
          For Three-Site:
          • Set the value of global.nrfClientDbName to occnp_nrf_client_site1, occnp_nrf_client_site2, and occnp_nrf_client_site3 for Site 1, Site 2, and Site 3, respectively.
      2. Run the following command to grant Privileged User permission on NRF Client Database:
        
        GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON <DB Name>.* TO '<Policy Privileged Username>'@'%';

        For example:

        
        GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_nrf_client.* TO 'occnpadminusr'@'%';

        Note:

        Run this step on all the SQL nodes for each Policy standalone site in a multisite georedundant setup.

      3. Run the following command to verify that the privileged or application users have all the required permissions:

        SHOW GRANTS FOR '<username>'@'%';

        where <username> is the name of the privileged or application user.

        Example:

        SHOW GRANTS FOR 'occnpadminusr'@'%';
        SHOW GRANTS FOR 'occnpusr'@'%';
      4. Run the following command to flush privileges.

        FLUSH PRIVILEGES;
      5. For more information about creating database and granting privileges on single and multisite georedundant setups, see Configuring Database, Creating Users, and Granting Permissions.
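      For a two-site georedundant setup, steps 1 to 4 combine per site as follows. This is a minimal sketch, assuming the privileged user occnpadminusr; run the equivalent statements on the SQL nodes of each site with that site's database name from its custom_values.yaml file:

        CREATE DATABASE IF NOT EXISTS occnp_nrf_client_site1 CHARACTER SET utf8;
        GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_nrf_client_site1.* TO 'occnpadminusr'@'%';
        SHOW GRANTS FOR 'occnpadminusr'@'%';
        FLUSH PRIVILEGES;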
  4. Before starting the upgrade, take a manual backup of the Policy REST based configuration. This backup helps if the preupgrade data has to be restored.

    Note:

    For Rest API configuration details, see Oracle Communications Cloud Native Core, Converged Policy REST Specification Guide.
  5. Before upgrading, perform a sanity check using Helm test. See the Performing Helm Test section for the Helm test procedure. A combined sketch of the backup and sanity-check steps follows this list.
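
The following is a minimal sketch of the backup and sanity-check steps above, assuming the release name occnp, the namespace occnp, and the file names shown. The REST export URL is a placeholder only; take the actual endpoints from the Oracle Communications Cloud Native Core, Converged Policy REST Specification Guide.

  # Step 1: back up the current custom values file
  cp occnp_custom_values_<current_release>.yaml occnp_custom_values_<current_release>.yaml.bak
  # Step 4: export the REST based configuration (placeholder URL)
  curl -X GET http://<cm-service-host>:<port>/<configuration-export-endpoint> -o policy_config_backup.json
  # Step 5: run the Helm test sanity check
  helm test occnp --namespace occnp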

5.4 Upgrade Tasks

This section provides information about the sequence of tasks to be performed for upgrading an existing Policy deployment.

Helm Upgrade

Upgrading an existing deployment replaces the running containers and pods with new ones. If there is no change in a pod's configuration, the pod is not replaced. Unless there is a change in the service configuration of a microservice, the service endpoints remain unchanged.
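
Before running the actual upgrade, you can optionally preview which manifests would change. The following is a sketch that assumes the community helm-diff plugin is installed; the plugin is not part of the Policy deliverables.

  $ helm diff upgrade occnp occnp-pkg-23.4.9.0.0.tgz -f occnp_custom_values_23.4.9.yaml --namespace occnp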

Upgrade Procedure

Caution:

  • Stop the provisioning traffic before you start the upgrade procedure.
  • Do not perform any configuration changes during the upgrade.
  • Do not exit the helm upgrade command manually. After you run the helm upgrade command, it takes some time (depending on the number of pods to upgrade) to upgrade all the services. During this time, do not press Ctrl+C to exit the helm upgrade command, as doing so may lead to anomalous behavior.
  1. Untar the latest Policy package and, if required, retag and push the images to the registry. For more information, see Downloading Policy package and Pushing the Images to Customer Docker Registry.
  2. Modify the occnp_custom_values_23.4.9.yaml file parameters as per site requirement.
  3. Do not change the nfInstanceId configuration for the site. In case of multisite deployments, configure nfInstanceId uniquely for each site.
  4. Assign appropriate values to core_services in the appInfo configuration based on the Policy mode. An illustrative fragment for steps 3 and 4 follows.
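    The following fragment is illustrative only; the exact key names and paths must be taken from the delivered custom values template for the target release.

      global:
        nfInstanceId: '<unique_uuid_per_site>'   # must be unique for each site in multisite deployments
      appinfo:
        core_services:                           # set according to the Policy mode
          - <core_service_name>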
  5. Run the following command to upgrade an existing Policy deployment:

    Note:

    If you are upgrading an existing Policy deployment with the georedundancy feature enabled, ensure that you configure the dbMonitorSvcHost and dbMonitorSvcPort parameters before running helm upgrade. For more information on these parameters, see Customizing Policy.
    • Using local Helm chart:
      $ helm upgrade <release_name> <helm_chart> -f <policy_customized_values.yaml> --namespace <namespace>

      Where,

      <release_name> is the Policy release name.

      <helm_chart> is the Helm chart.

      <policy_customized_values.yaml> is the latest custom-values.yaml file. For example, occnp_custom_values_23.4.9.yaml

      <namespace> is the namespace of the Policy deployment.

      For example:

      $ helm upgrade occnp occnp-pkg-23.4.9.0.0.tgz -f occnp_custom_values_23.4.9.yaml --namespace occnp
    • Using chart from Helm repo:
      $ helm upgrade <release_name> <helm_repo/helm_chart> --version <chart_version> -f <policy_customized_values.yaml> --namespace <namespace>

      Where,

      <release_name> is the Policy release name.

      <helm_repo/helm_chart> is the Helm repository for Policy.

      <policy_customized_values.yaml> is the latest custom-values.yaml file. For example, occnp_custom_values_23.4.9.yaml

      <namespace> is the namespace of the Policy deployment.

      For example:

      $ helm upgrade occnp occnp-helm-repo/occnp --version 23.4.9 -f occnp_custom_values_23.4.9.yaml --namespace occnp
      Optional parameters that can be used with the helm upgrade command:
      • atomic: If this parameter is set, the upgrade process purges the chart on failure. The --wait flag is set automatically.
      • wait: If this parameter is set, the upgrade process waits until all pods, PVCs, Services, and the minimum number of pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It waits for as long as --timeout.
      • timeout duration: If not specified, the default value is 300 (300 seconds) in Helm. It specifies the time to wait for any individual Kubernetes operation (such as Jobs for hooks). If the command fails at any point to create a Kubernetes object, it internally calls the purge to delete it after the timeout value. Here, the timeout value is not for the overall upgrade, but for the automatic purge on failure. An example follows the note below.

    Note:

    It is recommended not to use --wait and --atomic parameters along with helm upgrade as this might result in upgrade failure.
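    If individual Kubernetes operations need a longer window, the timeout can be raised explicitly. An illustrative example using the Helm 3 duration syntax:

    $ helm upgrade occnp occnp-helm-repo/occnp --version 23.4.9 -f occnp_custom_values_23.4.9.yaml --namespace occnp --timeout 600s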
  6. Run the following command to check the status of the upgrade:
    $ helm status <release_name> --namespace <namespace>

    Where,

    <release_name> is the Policy release name.

    <namespace> is the namespace of the Policy deployment.

    For example:

    $ helm status occnp --namespace occnp
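    You can also watch the rolling update at the pod level while it progresses. A minimal sketch, assuming the namespace occnp; the deployment names vary per Policy mode, so list the actual deployments first:

    $ kubectl get deployments --namespace occnp
    $ kubectl rollout status deployment/<deployment_name> --namespace occnp
    $ kubectl get pods --namespace occnp -w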
  7. Perform sanity check using Helm test. See the Performing Helm Test section for the Helm test procedure.
  8. If the upgrade fails, see Upgrade or Rollback Failure in Oracle Communications Cloud Native Core, Converged Policy Troubleshooting Guide.

5.5 MIB Management

toplevel.mib and policy-alarm-mib.mib are the two MIB files that are used to generate the traps. You must update these files along with the Alert file to fetch the traps in your environment. The MIB files are managed by the SNMP manager.

The following two scenarios are considered:
  • Single-site SNMP manager: In a single-site SNMP manager deployment, each NF is managed by a dedicated SNMP manager. In this case, continue with the normal upgrade procedure.
  • Multisite SNMP manager: In a multisite SNMP manager deployment, multiple NFs are managed by a centralized SNMP manager. In this case, toplevel.mib contains conflicting OIDs for OraclePCF and OracleNRF, both with ID 36. To resolve this, the ID for OracleNRF is retained as 36, while OraclePCF is renamed to OraclePolicy with a new ID of 52. Perform the following during the first site and last site upgrades:
    • First Site Upgrade: Replace the existing tklc_toplevel.mib with the new tklc_toplevel.mib file. You can download the new tklc_toplevel.mib from MOS.

      Note:

      MIB files are packaged along with CNC Policy Custom Templates.
    • Last Site Upgrade: After the upgrade, use the toplevel.mib file to generate the traps. You can download the new toplevel.mib from MOS. In this case, all the existing alerts are cleared and new alerts with new OIDs are triggered. Customer-defined alerts (if any) that use the existing Policy OID must be adjusted to use the new OID.

      Note:

      It is mandatory to remove the tklc_toplevel.mib file after the upgrade.
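
Before loading the replaced MIB files into the SNMP manager, you can verify that they compile and that the Oracle subtree carries the expected IDs (36 for OracleNRF, 52 for OraclePolicy). A minimal sketch using the net-snmp tools; the MIB directory path is an assumption, and module resolution depends on the DEFINITIONS names inside the delivered files:

  # Load all modules from the delivered MIB directory, print the OID tree,
  # and search for the Oracle nodes to confirm their subidentifiers.
  snmptranslate -M +/path/to/mib/dir -m ALL -Tp | grep -i oracle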