5 Upgrading Policy

This chapter provides information about upgrading Oracle Communications Cloud Native Core, Converged Policy (Policy) deployment to the latest release.

It is recommended to perform the Policy upgrade in a specific order.

For more information about the upgrade order, see Oracle Communications Cloud Native Core, Solution Upgrade Guide.

Note:

Unless otherwise stated, do not enable features using Helm during a Policy upgrade.

It is recommended to enable new features only after confirming that the Policy upgrade is successful and that a rollback to an older release is no longer being considered.

In a multi-site environment, Policy upgrade must be successful in all the sites before enabling the required features.

This can involve two Helm upgrades: one to upgrade the Policy software version, and another to enable the feature after the upgrade is accepted.

If any issues are observed after enabling a feature, disable the feature using Helm and CNC Console, perform a Helm upgrade, and verify that the issue is resolved.

Note:

  • In a georedundant deployment, perform the steps explained in this section on all the georedundant sites.
  • For Policy georedundant deployments, all the georedundant sites are expected to be upgraded to a common version before any individual site of the GR deployment is planned for an additional upgrade.

    For example, consider a three-site Policy deployment where all three sites are at release version N. During the site upgrade, site 1 and site 2 are upgraded to version N+1, while site 3 is not yet upgraded. In this state, before site 3 is upgraded to N+1 or N+2, upgrading site 1 or site 2 from N+1/N+2 to a higher version is not supported, because site 3 in the georedundant environment is still not at N+1/N+2.

    For more information about the cnDBTier georedundant deployments, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

    For more information about the CNC Console georedundant deployments, see Oracle Communications Cloud Native Core, CNC Console Installation, Upgrade, and Fault Recovery Guide.

5.1 Supported Upgrade Paths

The following table lists the supported upgrade paths for Policy:

Table 5-1 Supported Upgrade Paths

Source Release | Target Release
---------------|---------------
25.1.2xx       | 25.2.100

Note:

Policy must be upgraded before upgrading cnDBTier. For Policy 25.2.1xx, the upgrade is supported only from cnDBTier 24.2.6.

5.2 Upgrade Strategy

Policy supports in-service upgrade. The supported upgrade strategy is RollingUpdate. The rolling update strategy is a gradual process that allows you to update your Kubernetes system with only a minor effect on performance and no downtime. The advantage of the rolling update strategy is that the update is applied Pod-by-Pod, so the overall system remains active.

Note:

It is recommended to perform the in-service upgrade during a maintenance window, with the traffic rate at or below 25% of the configured traffic. Traffic failures are expected to stay below 5% during the upgrade and to recover fully after the upgrade.

The following engineering configuration parameters are used to define upgrade strategy:

  • upgradeStrategy parameter indicates the update strategy used in Policy.
  • maxUnavailable parameter determines the maximum number of pods that can be unavailable during the upgrade.

    For more information about maxUnavailable for each microservice, see the PodDisruptionBudget Configuration section.
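
The following is a minimal, illustrative excerpt showing how these parameters might appear in the engineering configuration. The key names upgradeStrategy and maxUnavailable come from this section; the exact nesting and per-service structure depend on the occnp_custom_values.yaml delivered with your release:

  # Illustrative excerpt only. Verify the nesting against your release's
  # occnp_custom_values.yaml before use.
  upgradeStrategy: RollingUpdate   # update strategy used by Policy
  maxUnavailable: 25%              # maximum pods unavailable during upgrade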

Note:

When Policy is deployed with OCCM, follow the specific upgrade sequence described in Oracle Communications Cloud Native Core, Solution Upgrade Guide.

Note:

During an in-service Helm upgrade, transient errors may occur. These are typically resolved by the Network Element's retry mechanism, either by using a different available pod on the same site or by retrying at another site.

It is recommended to execute the in-service Helm upgrade during a maintenance window or a low-traffic period to minimize any service impact.

5.3 Preupgrade Tasks

This section provides information about preupgrade tasks to be performed before upgrading Policy.

  1. Keep the current custom_values.yaml file as a backup.
  2. Update the new custom_values.yaml file for the target Policy release. For details on customizing this file, see Customizing Policy.
  3. Before starting the upgrade, take a manual backup of the Policy REST based configuration. This helps if the preupgrade data has to be restored.

    Note:

    For Rest API configuration details, see Oracle Communications Cloud Native Core, Converged Policy REST Specification Guide.
  4. Before upgrading, perform a sanity check using Helm test, as shown in the sketch below. See the Performing Helm Test section for the Helm test procedure.
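
    A minimal sketch of the Helm test invocation (see Performing Helm Test for the full procedure and any required test configuration):

    helm test <release_name> --namespace <namespace>

    For example:

    helm test occnp --namespace occnp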
  5. When upgrading Policy from a base version at or above the release where database slicing was introduced for the SM service (Policy 22.4.0), Usage Monitoring service (Policy 23.4.0), or PCRF Core (Policy 24.2.0), manually create the sliced tables for a service if table slicing was not previously enabled for that service and table slicing is being enabled during the upgrade.

    For example, suppose the base version is Policy 23.3.0 and Policy is installed with slicing disabled for all services. To enable slicing for the Usage Monitoring service while upgrading to a later version (say, 24.1.0), manually create the sliced tables of the UmContext database.

    The table names must follow the format <tableName>_1, <tableName>_2, ..., <tableName>_n. The number of sliced tables to create is the slicing count minus 1.

    For example, if umContextTableSlicingCount is 8, the following sliced tables must be created manually before the upgrade: UmContext_1, UmContext_2, UmContext_3, ..., UmContext_7.
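
    As a minimal sketch of creating these tables, assuming the base UmContext table already exists and that each sliced table must match its definition (<usagemon-database> is a placeholder for the Usage Monitoring database name), CREATE TABLE ... LIKE copies the column and index definitions. Verify the result with SHOW CREATE TABLE so that the NDB-specific table options match the base table:

      mysql> USE <usagemon-database>;
      mysql> CREATE TABLE UmContext_1 LIKE UmContext;
      mysql> CREATE TABLE UmContext_2 LIKE UmContext;
      -- ...repeat up to UmContext_7 when umContextTableSlicingCount is 8
      mysql> SHOW CREATE TABLE UmContext_1;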

  6. To enable database slicing after an upgrade from one version to the same version, manually create the database slices directly on the database, because the database hooks are not executed in this case.
    1. Run the following command to create the database slices:
      CREATE TABLE `gxsession_<slice_number>` (
        `id` varchar(255) NOT NULL,
        `value` varchar(20000) DEFAULT NULL,
        `nai` varchar(255) DEFAULT NULL,
        `ipv4` varchar(20) DEFAULT NULL,
        `ipv6` varchar(50) DEFAULT NULL,
        `e164` varchar(20) DEFAULT NULL,
        `imsi` varchar(20) DEFAULT NULL,
        `imei` varchar(20) DEFAULT NULL,
        `ipd` varchar(255) DEFAULT NULL,
        `updated_timestamp` bigint unsigned DEFAULT '0',
        `lastaccesstime` datetime DEFAULT CURRENT_TIMESTAMP,
        `siteid` varchar(128) DEFAULT NULL,
        `compression_scheme` tinyint unsigned DEFAULT NULL,
        PRIMARY KEY (`id`),
        KEY `idx_gxsession_ipv4` (`ipv4`),
        KEY `idx_gxsession_e164` (`e164`),
        KEY `idx_gxsession_nai` (`nai`),
        KEY `idx_gxsession_ipv6` (`ipv6`),
        KEY `idx_gxsession_imsi` (`imsi`),
        KEY `idx_gxsession_imei` (`imei`),
        KEY `idx_gxsession_ipd` (`ipd`),
        KEY `idx_audit_datetime` (`lastaccesstime`)
      ) ENGINE=ndbcluster DEFAULT CHARSET=latin1 COMMENT='NDB_TABLE=NOLOGGING=1';

      Here, <slice_number> is the index of the slice table being created.

      For example, if the number of slices to be created is 3, run the above command two times: first replacing gxsession_<slice_number> with gxsession_1, and then with gxsession_2.

    2. Configure the database slicing feature with the advanced setting (DISTRIBUTE_GX_TRAFFIC_USING_TABLE_SLICING) and the deployment variable GX_SESSION_TABLE_SLICING_COUNT according to the number of slices that you manually created. For more information, see the Configurable Parameters for Database Slicing table in Database Load Balancing Configuration.

    Note:

    In case of a rollback to a previous version of the Policy software, all the sessions that are saved in the slices remain in those tables and are not moved to the main table.

    For the upgrade process, it is recommended to have the new Policy installation in the cluster.

  7. Before starting the Helm upgrade to the latest NF version, check the set of required databases in the Configuring Database, Creating Users, and Granting Permissions section to determine whether they must be common or site specific. If any of the required databases are not available, add the missing databases before proceeding with the upgrade.

  8. Before upgrading to release 24.2.0 from any previous release, make sure there are no entries for UDR Connector and CHF Connector in the ReleaseConfig table of the <occnp_release> database (where <occnp_release> is the database name). The following commands delete the entries:
    
    DELETE FROM `<occnp_release>`.`ReleaseConfig` WHERE CfgKey = 'public.hook.chf-connector';
    DELETE FROM `<occnp_release>`.`ReleaseConfig` WHERE CfgKey = 'public.hook.udr-connector';
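
    As an optional verification sketch against the same table, the following query should return 0 after the deletions:

    SELECT COUNT(*) FROM `<occnp_release>`.`ReleaseConfig` WHERE CfgKey IN ('public.hook.chf-connector', 'public.hook.udr-connector');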
  9. Before upgrading to 25.1.200, perform the following steps:
    1. Move the site undergoing the upgrade to a Complete Shutdown State.
    2. Log in to one of the ndbappmysqld pods on this site.
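
      As a minimal sketch, assuming default cnDBTier pod naming (the pod name, namespace, and user shown here are placeholders):

      kubectl exec -it <ndbappmysqld-pod> -n <cndbtier-namespace> -- mysql -u <user> -p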
    3. Run the following commands to verify if the required indexes are present on the local site:
      mysql> USE <policyds-database>;
      1. Check the CREATE TABLE query by using the following command:
        mysql> SHOW CREATE TABLE pdssubscriber;
      Here is the sample output:
      +---------------+------------------------------------------------------------------+
      | Table         | Create Table                                                     |
      +---------------+------------------------------------------------------------------+
      | pdssubscriber | CREATE TABLE `pdssubscriber` (
      `pdsuser_id` varchar(128) NOT NULL,
      `supi` varchar(45) DEFAULT NULL,
      `gpsi` varchar(45) DEFAULT NULL,
      `nai` varchar(45) DEFAULT NULL,
      `sy_session_id` varchar(256) DEFAULT NULL,
      `subscriber_value` varchar(12000) NOT NULL,
      `source_type` tinyint(1) DEFAULT NULL,
      `subscription_level` varchar(64) DEFAULT NULL,
      `subscription_info` varchar(4000) DEFAULT NULL,
      `context_info` varchar(4000) DEFAULT NULL,
      `checksum` varchar(64) DEFAULT NULL,
      `pds_profile_id` varchar(128) DEFAULT '',
      `site_id` varchar(128) DEFAULT NULL,
      `is_migrated` tinyint(1) DEFAULT NULL,
      `created_timestamp` bigint unsigned DEFAULT NULL,
      `last_modified_time` bigint unsigned DEFAULT '0',
      `last_audited_time` datetime(6) DEFAULT NULL,
      `version` int unsigned DEFAULT '0',
      `COMPRESSION_SCHEME` tinyint unsigned DEFAULT NULL,
      `mode` smallint unsigned DEFAULT NULL,
      PRIMARY KEY (`pdsuser_id`),
      KEY `idx_pdsprofile_id` (`pds_profile_id`),
      KEY `idx_version` (`version`),
      KEY `idx_audit_datetime` (`last_audited_time`),
      KEY `idx_sysessionid` (`sy_session_id`)
      ) ENGINE=ndbcluster DEFAULT CHARSET=latin1 COMMENT='NDB_TABLE=NOLOGGING=1' |
      +---------------+------------------------------------------------------------------+
      1 row in set (0.00 sec)
    4. Check for the presence of the following indexes using the corresponding commands:
      1. idx_version:
        mysql> SELECT IF(COUNT(*) = 1, 'Exist', 'Not Exist') AS result FROM information_schema.statistics WHERE TABLE_SCHEMA = '<policyds-database>' AND TABLE_NAME = 'pdssubscriber' AND INDEX_NAME = 'idx_version';
      2. idx_supi:
        mysql> SELECT IF(COUNT(*) = 1, 'Exist', 'Not Exist') AS result FROM information_schema.statistics WHERE TABLE_SCHEMA = '<policyds-database>' AND TABLE_NAME = 'pdssubscriber' AND INDEX_NAME = 'idx_supi';
      3. idx_gpsi:
        mysql> SELECT IF(COUNT(*) = 1, 'Exist', 'Not Exist') AS result FROM information_schema.statistics WHERE TABLE_SCHEMA = '<policyds-database>' AND TABLE_NAME = 'pdssubscriber' AND INDEX_NAME = 'idx_gpsi';

      Note:

      Compare the idx_version, idx_supi, and idx_gpsi results with the SHOW CREATE TABLE output obtained in the earlier sub-step.
    5. Disable binary logging for the current session to avoid performance impact during index modifications:
      
      mysql> SHOW VARIABLES LIKE 'sql_log_bin';
      mysql> SET sql_log_bin = OFF;  -- Disables binary logging for this session only
      mysql> SHOW VARIABLES LIKE 'sql_log_bin';

      Note:

      Binary logging is disabled only for the current session. Do not exit this session until all the operations are completed. If you exit, disable binary logging again in the new session.
    6. Switch to the target database:
      USE <policyds-database>;
    7. Based on the earlier verification results, run the following ALTER TABLE commands:

      Note:

      If the table already matches the target state (idx_version absent; idx_supi and idx_gpsi present), the corresponding command can be skipped.
      1. If idx_version exists (per the earlier verification), drop the index using the following command:
        mysql> ALTER TABLE pdssubscriber DROP INDEX idx_version, ALGORITHM=INPLACE;
      2. If idx_supi does not exist (per the earlier verification), add the index using the following command:
        mysql> ALTER TABLE pdssubscriber ADD INDEX idx_supi (supi ASC) VISIBLE, ALGORITHM=INPLACE;
      3. If idx_gpsi does not exist (per the earlier verification), add the index using the following command:
        mysql> ALTER TABLE pdssubscriber ADD INDEX idx_gpsi (gpsi ASC) VISIBLE, ALGORITHM=INPLACE;

      Note:

      These operations might take some time depending on traffic and database size. Since the site is traffic-isolated and binary logging is disabled, performance and latency on other sites are not affected.
    8. Re-enable binary logging after operations are complete:
      
      mysql> SHOW VARIABLES LIKE 'sql_log_bin';
      mysql> SET sql_log_bin = ON;
      mysql> SHOW VARIABLES LIKE 'sql_log_bin';

      Note:

      While executing the above DDL (ALTER) statements, there might be call failures on the respective site for a couple of minutes (due to timeouts or deadlocks). This should not impact the traffic on other sites.

      Note:

      To avoid call failures, it is advised to move the site to a Complete Shutdown state.
    9. Verify the changes using the following commands:
      -- Verify idx_version index - Not Exist
      mysql> SELECT IF(COUNT(*) = 1, 'Exist', 'Not Exist') AS result FROM information_schema.statistics WHERE TABLE_SCHEMA = 'occnp_policyds' AND TABLE_NAME = 'pdssubscriber' AND INDEX_NAME = 'idx_version';
      -- Verify idx_supi index - Exist
      mysql> SELECT IF(COUNT(*) = 1, 'Exist', 'Not Exist') AS result FROM information_schema.statistics WHERE TABLE_SCHEMA = 'occnp_policyds' AND TABLE_NAME = 'pdssubscriber' AND INDEX_NAME = 'idx_supi';
      -- Verify idx_gpsi index - Exist
      mysql> SELECT IF(COUNT(*) = 1, 'Exist', 'Not Exist') AS result FROM information_schema.statistics WHERE TABLE_SCHEMA = 'occnp_policyds' AND TABLE_NAME = 'pdssubscriber' AND INDEX_NAME = 'idx_gpsi';

      Note:

      Optionally, you can also check the SHOW CREATE TABLE output of the pdssubscriber table.

      Note:

      On other sites where Policy is not yet upgraded, verify the index state using the verification commands from the earlier sub-steps.
    10. Continue with the upgrade on the current site. Repeat this procedure from the first sub-step on subsequent sites after successful completion on one site.
    11. Before diverting traffic to the site, re-run the steps for Geo Redundancy Recovery (GRR), even if they were previously executed.
    12. Perform the following steps for rollback:

      Note:

      Rollback of the above steps is optional since these changes are backward compatible.
      1. Disable binary logging for the current session to avoid performance impact during index modifications:
        mysql> SHOW VARIABLES LIKE 'sql_log_bin';
        mysql> SET sql_log_bin = OFF;  -- Disables binary logging for this session only
        mysql> SHOW VARIABLES LIKE 'sql_log_bin';

        Note:

        Binary logging is disabled only for the current session. Do not exit this session until all operations are completed. If you exit, disable binary logging again in the new session.
      2. Switch to the target database:
        USE <policyds-database>;
      3. Based on the earlier verification results, perform the following ALTER TABLE commands:
        1. If the idx_version index was dropped earlier, re-add it using the following command:
          mysql> ALTER TABLE pdssubscriber ADD INDEX idx_version (version ASC), ALGORITHM=INPLACE;
        2. If the idx_supi index was added earlier, drop it using the following command:
          mysql> ALTER TABLE pdssubscriber DROP INDEX idx_supi, ALGORITHM=INPLACE;
        3. If the idx_gpsi index was added earlier, drop it using the following command:
          mysql> ALTER TABLE pdssubscriber DROP INDEX idx_gpsi, ALGORITHM=INPLACE;

        Note:

        These operations might take some time depending on traffic and database size. Since the site is traffic-isolated and binary logging is disabled, performance and latency on other sites will not be affected.
    13. Run the following commands to re-enable binary logging after the operations are complete:
      
      mysql> SHOW VARIABLES LIKE 'sql_log_bin';
      mysql> SET sql_log_bin = ON;
      mysql> SHOW VARIABLES LIKE 'sql_log_bin';
  10. Before upgrading to version 25.1.200, ensure you follow this procedure:
    1. Verify whether the pdssettings table exists in the policyds database:
      mysql> SELECT IF(COUNT(*) = 1, 'Exist', 'Not Exist') AS result FROM information_schema.TABLES WHERE TABLE_SCHEMA = '<policyds-database>' AND TABLE_NAME = 'pdssettings';

      Note:

      Proceed with the following steps only if the pdssettings table is present.
    2. Ensure that binary logging is enabled for the current session using the following steps:
      mysql> SHOW VARIABLES LIKE 'sql_log_bin';
      mysql> SET sql_log_bin = ON;
      mysql> SHOW VARIABLES LIKE 'sql_log_bin';
    3. Run the following command to delete the required site details:
      mysql> DELETE FROM <policyds-database>.pdssettings WHERE site_id='<site-id>';
    4. Using the following command, verify that no entry is present in the pdssettings table for the specified site-id:
      mysql> SELECT COUNT(*) FROM <policyds-database>.pdssettings WHERE site_id='<site-id>';

    In the above commands, <policyds-database> is the database name of policyds, and <site-id> is the ID of the site that is being upgraded.

5.4 Upgrade Tasks

This section provides information about the sequence of tasks to be performed for upgrading an existing Policy deployment.

Helm Upgrade

Upgrading an existing deployment replaces the running containers and pods with new containers and pods. If there is no change in the pod configuration, it is not replaced. Unless there is a change in the service configuration of a microservice, the service endpoints remain unchanged.
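
As an illustrative way to observe this rolling replacement (the namespace placeholder is the Policy deployment namespace), you can watch the pods while the upgrade runs:

  # Watch pods being replaced Pod-by-Pod during the Helm upgrade
  kubectl get pods --namespace <namespace> -w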

Upgrade Procedure

Caution:

  • Stop the provisioning traffic before you start the upgrade procedure.
  • Do not perform any configuration changes during the upgrade.
  • Do not exit the helm upgrade command manually. After running the helm upgrade command, it takes some time (depending on the number of pods to upgrade) to upgrade all the services. During this time, do not press Ctrl+C to exit the helm upgrade command, as it may lead to anomalous behavior.
  1. Untar the latest Policy package and, if required, re-tag and push the images to the registry, as sketched below. For more information, see Downloading Policy package and Pushing the Images to Customer Docker Registry.
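
    A minimal sketch of this step; every name below is a placeholder, and the authoritative commands are in the referenced sections:

    # Untar the Policy package
    tar -xzf <policy_package>.tgz
    # Re-tag and push an image when the target registry differs from the packaged one
    docker tag <source-registry>/<image>:<tag> <customer-registry>/<image>:<tag>
    docker push <customer-registry>/<image>:<tag>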
  2. Modify the occnp_custom_values_25.2.100.yaml file parameters as per site requirement.
  3. Do not change the nfInstanceId configuration for the site. In case of multisite deployments, configure nfInstanceId uniquely for each site.
  4. Assign appropriate values to core_services in the appInfo configuration based on the Policy mode.
  5. Run the following command to upgrade an existing Policy deployment:

    Note:

    If you are upgrading an existing Policy deployment with georedundancy feature enabled, ensure that you configure dbMonitorSvcHost and dbMonitorSvcPort parameters before running helm upgrade. For more information on the parameters, see
    • Using local Helm chart:
      helm upgrade <release_name> <helm_chart> -f <policy_customized_values.yaml> --namespace <namespace>

      Where,

      <release_name> is the Policy release name.

      <helm_chart> is the Helm chart.

      <policy_customized_values.yaml> is the latest custom-values.yaml file. For example, occnp_custom_values_25.2.100.yaml

      <namespace> is namespace of Policy deployment.

      For example:

      helm upgrade occnp occnp-25.2.100.0.0.tgz -f occnp_custom_values_25.2.100.yaml --namespace occnp
    • Using chart from Helm repo:
      helm upgrade <release_name> <helm_repo/helm_chart> --version <chart_version> -f <policy_customized_values.yaml> --namespace <namespace>

      Where,

      <release_name> is the Policy release name.

      <helm_repo/helm_chart> is the Helm repository for Policy.

      <policy_customized_values.yaml> is the latest custom-values.yaml file. For example, occnp-25.2.100-custom-values-occnp.yaml

      <namespace> is namespace of Policy deployment.

      For example:

      helm upgrade occnp occnp-helm-repo/occnp --version 25.2.100 -f occnp_custom_values_25.2.100.yaml --namespace occnp
      Optional parameters that can be used in the helm upgrade command:
      • atomic: If this parameter is set, the upgrade process purges the chart on failure. The --wait flag is set automatically.
      • wait: If this parameter is set, the upgrade process waits until all pods, PVCs, Services, and the minimum number of pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It waits for as long as --timeout.
      • timeout duration: If not specified, the default value is 300 (300 seconds) in Helm. It specifies the time to wait for any individual Kubernetes operation (such as Jobs for hooks). If the helm upgrade command fails at any point to create a Kubernetes object, it internally calls the purge to delete after the timeout value. Here, the timeout value is not for the overall upgrade, but for the automatic purge on failure.

    Note:

    It is recommended not to use --wait and --atomic parameters along with helm upgrade as this might result in upgrade failure.

    Note:

    The following warnings must be ignored during Policy upgrade on CNE 25.1.2xx and 25.2.1xx:
    helm upgrade <release-name> -f <custom.yaml> <tgz-file> -n <namespace>
    W0301 15:46:11.144230 2082757 warnings.go:70] spec.template.spec.containers[0].env[21]: hides previous definition of "PRRO_JDBC_SERVERS"
    W0301 15:46:48.202424 2082757 warnings.go:70] spec.template.spec.containers[0].ports[3]: duplicate port definition with spec.template.spec.containers[0].ports[1]
    W0301 15:47:25.069699 2082757 warnings.go:70] spec.template.spec.containers[0].ports[3]: duplicate port definition with spec.template.spec.containers[0].ports[2]
    W0301 15:47:43.260912 2082757 warnings.go:70] spec.template.spec.containers[0].ports[4]: duplicate port definition with spec.template.spec.containers[0].ports[2]
    W0301 15:47:51.457088 2082757 warnings.go:70] spec.template.spec.containers[0].ports[4]: duplicate port definition with spec.template.spec.containers[0].ports[2]
    Release "<release-name>" has been upgraded. Happy Helming!
    NAME: <release-name>
    LAST DEPLOYED: <Date-Time>
    NAMESPACE: <namespace>
    STATUS: deployed
    REVISION: <N>
  6. Run the following command to check the status of the upgrade:
    helm status <release_name> --namespace <namespace>

    Where,

    <release_name> is the Policy release name.

    <namespace> is namespace of Policy deployment.

    For example:

    helm status occnp --namespace occnp
  7. Perform sanity check using Helm test. See the Performing Helm Test section for the Helm test procedure.
  8. If the upgrade fails, see "Upgrade or Rollback Failure" in Oracle Communications Cloud Native Core, Converged Policy Troubleshooting Guide.

Note:

If you are upgrading from any of the previous releases to Policy 25.1.200 or later versions, after the upgrade procedure is complete, clear the browser cache before accessing the CNC Console, to avoid any configuration-related issues.

After Upgrade, Congestion Control Data Migration

The Diameter Gateway Pod Congestion Control and Bulwark Pod Congestion Control features are modified to work with the common Congestion Control mechanism in 25.1.200. As a result, there are changes to the configurations that require a data migration. After upgrading to 25.1.200, you must migrate the data from the older Congestion Control configurations to the current configurations. The data migration is a manual, one-time activity that can be performed using either the CNC Console or the Congestion Control migration APIs.

For more information about the data migration, see the "Diameter Pod Congestion Control" and "Bulwark Pod Congestion Control" feature descriptions and the Congestion Control "settings" sections in Oracle Communications Cloud Native Core, Converged Policy User Guide.

For more information about the data migration using the REST API, see the "Diameter Gateway Congestion Migration" and "Bulwark Congestion Migration" API sections in Oracle Communications Cloud Native Core, Converged Policy REST Specification Guide.

Note:

If the Usage Monitoring Service is enabled during the upgrade to 24.2.1, set the log level to WARN in the CNC Console for the Usage Management Service.

Note:

To automate the life cycle management of the certificates through OCCM, you can migrate certificates and keys from Policy to OCCM. For more information, see "Introducing OCCM in an Existing NF Deployment" in Oracle Communications Cloud Native Core, Certificate Management User Guide.

You can remove a Kubernetes secret if the current version of Policy does not use it; check the occnp_custom_values.yaml file to confirm, then remove it as shown below. Before deleting, make sure that there is no plan to roll back to a Policy version that uses these secrets; otherwise, the rollback will fail.
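
As a minimal sketch (the secret name is a placeholder), an unused secret can be removed as follows:

  kubectl delete secret <secret_name> --namespace <namespace>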

After the upgrade, the http_server_requests_seconds metric with dimension {pod=~".*ueservice.*"} for the UE service is replaced with occnp_ueservice_overall_processing_time_seconds, and the http_server_requests_seconds metric with dimension {pod=~".*amservice.*"} for the AM service is replaced with occnp_amservice_overall_processing_time_seconds. Make sure to use the new metrics:
  • For UE service:
    • occnp_ueservice_overall_processing_time_seconds_max instead of http_server_requests_seconds_max
    • occnp_ueservice_overall_processing_time_seconds_sum instead of http_server_requests_seconds_sum
    • occnp_ueservice_overall_processing_time_seconds_count instead of http_server_requests_seconds_count
  • For AM service:
    • occnp_amservice_overall_processing_time_seconds_max instead of http_server_requests_seconds_max
    • occnp_amservice_overall_processing_time_seconds_sum instead of http_server_requests_seconds_sum
    • occnp_amservice_overall_processing_time_seconds_count instead of http_server_requests_seconds_count

For more details, see UE Service Metrics and AM Service Metrics sections in Oracle Communications Cloud Native Core, Converged Policy User Guide.
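
As an illustrative sketch of updating a dashboard or alert expression accordingly (the 5m rate window is an arbitrary example):

  # Before (UE service):
  sum(rate(http_server_requests_seconds_count{pod=~".*ueservice.*"}[5m]))
  # After:
  sum(rate(occnp_ueservice_overall_processing_time_seconds_count[5m]))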

5.5 Postupgrade Tasks

This section explains the postupgrade tasks for Policy.

5.5.1 Alert Configuration

This section describes how to modify or update Policy alerts as needed after performing the upgrade. For more details, see the Configuring Alerts section.

5.6 MIB Management

toplevel.mib and POLICY-ALARM-MIB.mib are the two MIB files used to generate the traps. You must update these files along with the Alert file to fetch the traps in your environment. The MIB files are managed by the SNMP manager.

Note:

The policy-alarm-mib.mib file has been replaced by the POLICY-ALARM-MIB.mib file.