5 Upgrading Policy

This chapter provides information about upgrading an Oracle Communications Cloud Native Core, Converged Policy (Policy) deployment to the latest release. It is recommended to perform the Policy upgrade in a specific order. For more information about the upgrade order, see Oracle Communications Cloud Native Core, Solution Upgrade Guide.

Note:

  • In a georedundant deployment, perform the steps explained in this section on all the georedundant sites.
  • For Policy georedundant deployments, all the georedundant sites must be upgraded to a common version before any individual site of the GR deployment is planned for a further upgrade.

    For example, in a three-site Policy deployment, all three sites are at the same release version, N. During a site upgrade, site 1 and site 2 are upgraded to version N+1, while site 3 is not yet upgraded. In this state, before site 3 is upgraded to N+1/N+2, upgrading site 1 or site 2 from N+1/N+2 to a higher version is not supported, because site 3 in the georedundant environment has not yet been upgraded to N+1/N+2.

    For more information about the cnDBTier georedundant deployments, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

    For more information about the CNC Console georedundant deployments, see Oracle Communications Cloud Native Core, CNC Console Installation, Upgrade, and Fault Recovery Guide.

5.1 Supported Upgrade Paths

The following table lists the supported upgrade paths for Policy:

Table 5-1 Supported Upgrade Paths

Source Release    Target Release
24.2.x            24.3.0
24.1.x            24.3.0

Note:

Policy must be upgraded before upgrading cnDBTier.

5.2 Upgrade Strategy

Policy supports in-service upgrade. The supported upgrade strategy is RollingUpdate. The rolling update strategy is a gradual process that allows you to update your Kubernetes system with only a minor effect on performance and no downtime. The advantage of the rolling update strategy is that the update is applied pod by pod, so the rest of the system can remain active.

Note:

It is recommended to perform the in-service upgrade during a maintenance window, when the traffic rate is at or below 25% of the configured traffic. Traffic failure is expected to remain below 5% during the upgrade and to recover fully after the upgrade.

The following engineering configuration parameters are used to define upgrade strategy:

  • upgradeStrategy parameter indicates the update strategy used in Policy.
  • maxUnavailable parameter determines the maximum number of pods that can be unavailable during the upgrade.

    For more information on maxUnavailable for each microservice, see the PodDisruptionBudget Configuration section.
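
The following is an illustration only and not part of the standard procedure; the deployment name and namespace are placeholders. It shows how to confirm the rolling update settings and PodDisruptionBudgets that are in effect on a running deployment using kubectl:

    # Display the update strategy (type and maxUnavailable) of a Policy microservice deployment.
    kubectl get deployment <deployment_name> --namespace <namespace> -o jsonpath='{.spec.strategy}{"\n"}'

    # List the PodDisruptionBudgets and the allowed disruptions in the Policy namespace.
    kubectl get pdb --namespace <namespace>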

Note:

When Policy is deployed with OCCM, follow the specific upgrade sequence as mentioned in Oracle Communications Cloud Native Core, Solution Upgrade Guide.

Note:

During an in-service Helm upgrade, transient errors may occur. These are typically resolved by the Network Element's retry mechanism, either by using a different available pod on the same site or by retrying at another site.

It is recommended to execute the in-service Helm upgrade during a maintenance window or a low-traffic period to minimize any service impact.

5.3 Preupgrade Tasks

This section provides information about preupgrade tasks to be performed before upgrading Policy.

  1. Keep the current custom_values.yaml file as a backup (a sample backup command is shown in the sketches after this list).
  2. Update the new custom_values.yaml file for the target Policy release. For details on customizing this file, see Customizing Policy.
  3. While upgrading Policy from a base version at or above the release where database slicing was introduced for SM service (Policy 22.4.0), Usage Monitoring service (Policy 23.4.0), or PCRF Core (Policy 24.2.0), manually create the sliced tables for a service if table slicing was not previously enabled for that service and table slicing is being enabled during the upgrade.

    For example, consider that the base version is Policy 23.3.0 and Policy is installed with no slicing enabled for any of the services. To enable slicing for the Usage Monitoring service while upgrading to a later version (for example, 24.1.0), manually create the sliced tables of the UmContext database.

    The names of the tables must be in the format <tableName>_1, <tableName>_2, ... <tableName>_n. The number of sliced tables to be created must be the slicing count minus 1.

    For example, if umContextTableSlicingCount is 8, then the following sliced tables must be created manually before the upgrade: UmContext_1, UmContext_2, UmContext_3 ... UmContext_7. An illustrative command sequence is shown in the sketches after this list.

  4. To enable database slicing after an upgrade from one version to the same version, manually create the database slices directly on the database, because the database hooks are not executed in this case.
    1. Run the following command to create the database slices:
      CREATE TABLE `gxsession_<slice_number>` (
        `id` varchar(255) NOT NULL,
        `value` varchar(20000) DEFAULT NULL,
        `nai` varchar(255) DEFAULT NULL,
        `ipv4` varchar(20) DEFAULT NULL,
        `ipv6` varchar(50) DEFAULT NULL,
        `e164` varchar(20) DEFAULT NULL,
        `imsi` varchar(20) DEFAULT NULL,
        `imei` varchar(20) DEFAULT NULL,
        `ipd` varchar(255) DEFAULT NULL,
        `updated_timestamp` bigint unsigned DEFAULT '0',
        `lastaccesstime` datetime DEFAULT CURRENT_TIMESTAMP,
        `siteid` varchar(128) DEFAULT NULL,
        `compression_scheme` tinyint unsigned DEFAULT NULL,
        PRIMARY KEY (`id`),
        KEY `idx_gxsession_ipv4` (`ipv4`),
        KEY `idx_gxsession_e164` (`e164`),
        KEY `idx_gxsession_nai` (`nai`),
        KEY `idx_gxsession_ipv6` (`ipv6`),
        KEY `idx_gxsession_imsi` (`imsi`),
        KEY `idx_gxsession_imei` (`imei`),
        KEY `idx_gxsession_ipd` (`ipd`),
        KEY `idx_audit_datetime` (`lastaccesstime`)
      ) ENGINE=ndbcluster DEFAULT CHARSET=latin1 COMMENT='NDB_TABLE=NOLOGGING=1'

      Here, <slice_number> refers to the number of the slice being created.

      For example, if the number of slices to be created is 3, run the above command two times, first by replacing gxsession_<slice_number> with gxsession_1, and then with gxsession_2.

    2. Configure the database slicing feature with the advanced setting (DISTRIBUTE_GX_TRAFFIC_USING_TABLE_SLICING) and the deployment variable GX_SESSION_TABLE_SLICING_COUNT according to the number of slices that you manually created. For more information, see the Configurable Parameters for Database Slicing table in the Database Load Balancing Configuration section.

    Note:

    In case of rollback to a previous version of Policy software, all the sessions that are saved in the slices will remain in those tables and will not be moved to the main table.

    For the upgrade process, it is recommended to have the new Policy installation in the cluster.

  5. Before starting the upgrade, take a manual backup of the Policy REST-based configuration. This helps if the preupgrade data has to be restored.

    Note:

    For REST API configuration details, see Oracle Communications Cloud Native Core, Converged Policy REST Specification Guide.
  6. Before upgrading, perform a sanity check using Helm test. See the Performing Helm Test section for the Helm test procedure; a sample command is also shown in the sketches after this list.
  7. Before upgrading to release 24.2.0 from any previous release, make sure that there are no entries for UDR Connector and CHF Connector in the ReleaseConfig table of the <occnp_release> database (where "<occnp_release>" is the database name). Following are the commands for deleting the entries:
    
    DELETE FROM `<occnp_release>`.`ReleaseConfig` WHERE CfgKey = 'public.hook.chf-connector';
    DELETE FROM `<occnp_release>`.`ReleaseConfig` WHERE CfgKey = 'public.hook.udr-connector';
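
The following is a minimal sketch of the backup referred to in step 1. The release name, namespace, and file names are placeholders; substitute the values used in your deployment:

    # Save the user-supplied values of the currently deployed release before the upgrade.
    helm get values <release_name> --namespace <namespace> > occnp_custom_values_backup.yaml

    # Keep a copy of the current custom values file as well.
    cp <current_custom_values.yaml> <current_custom_values.yaml>.bak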
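
The following is an illustrative sketch of the manual slice creation described in step 3, using the UmContext example with umContextTableSlicingCount set to 8. It assumes that the base UmContext table already exists and that the MySQL client can reach the cnDBTier database; the connection details and database name are placeholders. CREATE TABLE ... LIKE copies the column and index definitions of the base table; verify the result with SHOW CREATE TABLE before starting the upgrade:

    # Create UmContext_1 through UmContext_7 (a slicing count of 8 requires 7 sliced tables).
    for i in 1 2 3 4 5 6 7; do
      mysql -h <db_host> -u <db_user> -p <database_name> \
        -e "CREATE TABLE UmContext_${i} LIKE UmContext;"
    done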
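
A minimal example of the sanity check referred to in step 6, with placeholder release name and namespace:

    # Run the Helm test hooks for the deployed release and report the result.
    helm test <release_name> --namespace <namespace>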

5.4 Upgrade Tasks

This section provides information about the sequence of tasks to be performed for upgrading an existing Policy deployment.

Helm Upgrade

Upgrading an existing deployment replaces the running containers and pods with new containers and pods. If there is no change in the pod configuration, the pod is not replaced. Unless there is a change in the service configuration of a microservice, the service endpoints remain unchanged.

Upgrade Procedure

Caution:

  • Stop the provisioning traffic before you start the upgrade procedure.
  • Do not perform any configuration changes during the upgrade.
  • Do not exit from the helm upgrade command manually. After running the helm upgrade command, it takes some time (depending on the number of pods to upgrade) to upgrade all the services. During this time, do not press "Ctrl+C" to exit from the helm upgrade command, as it may lead to anomalous behavior.
  1. Untar the latest Policy package and, if required, re-tag and push the images to the registry. For more information, see Downloading Policy package and Pushing the Images to Customer Docker Registry.
  2. Modify the occnp_custom_values_24.3.0.yaml file parameters as per site requirement.
  3. Do not change the nfInstanceId configuration for the site. In case of multisite deployments, configure nfInstanceId uniquely for each site.
  4. Assign appropriate values to core_services in the appInfo configuration based on the Policy mode.
  5. Run the following command to upgrade an existing Policy deployment:

    Note:

    If you are upgrading an existing Policy deployment with georedundancy feature enabled, ensure that you configure dbMonitorSvcHost and dbMonitorSvcPort parameters before running helm upgrade. For more information on the parameters, see
    • Using local Helm chart:
      helm upgrade <release_name> <helm_chart> -f <policy_customized_values.yaml> --namespace <namespace>

      Where,

      <release_name> is the Policy release name.

      <helm_chart> is the Helm chart.

      <policy_customized_values.yaml> is the latest custom-values.yaml file. For example, occnp_custom_values_24.3.0.yaml

      <namespace> is the namespace of the Policy deployment.

      For example:

      helm upgrade occnp occnp-pkg-24.3.0.0.0.tgz -f occnp_custom_values_24.3.0.yaml --namespace occnp
    • Using chart from Helm repo:
      helm upgrade <release_name> <helm_repo/helm_chart> --version <chart_version> -f <policy_customized_values.yaml> --namespace <namespace>

      Where,

      <release_name> is the Policy release name.

      <helm_repo/helm_chart> is the Helm repository and chart name for Policy.

      <policy_customized_values.yaml> is the latest custom-values.yaml file. For example, occnp-24.3.0-custom-values-occnp.yaml

      <namespace> is the namespace of the Policy deployment.

      For example:

      helm upgrade occnp occnp-helm-repo/occnp --version 24.3.0 -f occnp_custom_values_24.3.0.yaml --namespace occnp
      Optional parameters that can be used with the helm upgrade command:
      • atomic: If this parameter is set, the upgrade process rolls back the changes in case of a failed upgrade. The --wait flag is set automatically.
      • wait: If this parameter is set, the upgrade process waits until all pods, PVCs, Services, and the minimum number of pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It waits for as long as --timeout.
      • timeout duration: If not specified, the default value is 300 (300 seconds) in Helm. It specifies the time to wait for any individual Kubernetes operation (such as Jobs for hooks). If the command fails to create a Kubernetes object at any point, the cleanup or rollback is triggered after the timeout value. Here, the timeout value is not for the overall upgrade, but for individual operations and the automatic recovery on failure.

    Note:

    It is recommended not to use --wait and --atomic parameters along with helm upgrade as this might result in upgrade failure.

    Note:

    The following warnings must be ignored during the Policy upgrade on CNE 24.1.0, 24.2.0, and 24.3.0:
    helm upgrade <release-name> -f <custom.yaml> <tgz-file> -n <namespace>
    W0301 15:46:11.144230 2082757 warnings.go:70] spec.template.spec.containers[0].env[21]: hides previous definition of "PRRO_JDBC_SERVERS"
    W0301 15:46:48.202424 2082757 warnings.go:70] spec.template.spec.containers[0].ports[3]: duplicate port definition with spec.template.spec.containers[0].ports[1]
    W0301 15:47:25.069699 2082757 warnings.go:70] spec.template.spec.containers[0].ports[3]: duplicate port definition with spec.template.spec.containers[0].ports[2]
    W0301 15:47:43.260912 2082757 warnings.go:70] spec.template.spec.containers[0].ports[4]: duplicate port definition with spec.template.spec.containers[0].ports[2]
    W0301 15:47:51.457088 2082757 warnings.go:70] spec.template.spec.containers[0].ports[4]: duplicate port definition with spec.template.spec.containers[0].ports[2]
    Release "<release-name>" has been upgraded. Happy Helming!
    NAME: <release-name>
    LAST DEPLOYED: <Date-Time>
    NAMESPACE: <namespace>
    STATUS: deployed
    REVISION: <N>
  6. Run the following command to check the status of the upgrade (additional verification commands are shown in the sketch after this procedure):
    helm status <release_name> --namespace <namespace>

    Where,

    <release_name> is the Policy release name.

    <namespace> is the namespace of the Policy deployment.

    For example:

    helm status occnp --namespace occnp
  7. Perform a sanity check using Helm test. See the Performing Helm Test section for the Helm test procedure.
  8. If the upgrade fails, see "Upgrade or Rollback Failure" in Oracle Communications Cloud Native Core, Converged Policy Troubleshooting Guide.
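
The following is an illustrative set of verification commands that can be used in addition to helm status in step 6; the release name and namespace are placeholders. During the rolling update, the old pods terminate as the new pods become ready:

    # Watch the pods being replaced pod by pod during the upgrade.
    kubectl get pods --namespace <namespace> -w

    # After the upgrade, confirm that the latest revision is deployed with the target chart version.
    helm history <release_name> --namespace <namespace>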

Note:

If Usage Monitoring Service is enabled during upgrade to 24.2.1, then the log level must be set to WARN in the CNC Console for the Usage Management Service.

Note:

To automate the lifecycle management of the certificates through OCCM, you can migrate certificates and keys from Policy to OCCM. For more information, see "Introducing OCCM in an Existing NF Deployment" in Oracle Communications Cloud Native Core, Certificate Management User Guide.

You can remove a Kubernetes secret if the current version of Policy does not use that secret; check the occnp_custom_values.yaml file to confirm. Before deleting, make sure that there is no plan to roll back to a Policy version that uses these secrets; otherwise, the rollback will fail. A sample command is shown below.
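
The following is a minimal sketch of removing an unused secret; the secret name and namespace are placeholders, and the secret must first be confirmed as unused in the occnp_custom_values.yaml file:

    # List the secrets in the Policy namespace and compare them with the secrets
    # referenced in the current occnp_custom_values.yaml file.
    kubectl get secrets --namespace <namespace>

    # Delete a secret that is no longer used by the current Policy release.
    kubectl delete secret <secret_name> --namespace <namespace>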

After the upgrade, the http_server_requests_seconds metric with dimension {pod=~".*ueservice.*"} for the UE service is replaced with occnp_ueservice_overall_processing_time_seconds, and the http_server_requests_seconds metric with dimension {pod=~".*amservice.*"} for the AM service is replaced with occnp_amservice_overall_processing_time_seconds. Make sure to use the new metrics:
  • For UE service:
    • occnp_ueservice_overall_processing_time_seconds_max instead of http_server_requests_seconds_max
    • occnp_ueservice_overall_processing_time_seconds_sum instead of http_server_requests_seconds_sum
    • occnp_ueservice_overall_processing_time_seconds_count instead of http_server_requests_seconds_count
  • For AM service:
    • occnp_amservice_overall_processing_time_seconds_max instead of http_server_requests_seconds_max
    • occnp_amservice_overall_processing_time_seconds_sum instead of http_server_requests_seconds_sum
    • occnp_amservice_overall_processing_time_seconds_count instead of http_server_requests_seconds_count

For more details, see UE Service Metrics and AM Service Metrics sections in Oracle Communications Cloud Native Core, Converged Policy User Guide.

5.5 MIB Management

toplevel.mib and POLICY-ALARM-MIB.mib are the two MIB files that are used to generate the traps. You must update these files along with the Alert file to fetch the traps in your environment. The MIB files are managed by the SNMP manager.

Note:

The policy-alarm-mib.mib file has been replaced by the POLICY-ALARM-MIB.mib file.