4 Resolved and Known Bugs

This chapter lists the resolved and known bugs for Oracle Communications Cloud Native Core release 3.25.2.2xx.0.

These lists are distributed to customers with a new software release at the time of General Availability (GA) and are updated for each maintenance release.

4.1 Severity Definitions

Service requests for supported Oracle programs may be submitted by you online through Oracle’s web-based customer support systems or by telephone. The service request severity level is selected by you and Oracle and should be based on the severity definitions specified below.

Severity 1

Your production use of the supported programs is stopped or so severely impacted that you cannot reasonably continue work. You experience a complete loss of service. The operation is mission critical to the business and the situation is an emergency. A Severity 1 service request has one or more of the following characteristics:
  • Data corrupted.
  • A critical documented function is not available.
  • System hangs indefinitely, causing unacceptable or indefinite delays for resources or response.
  • System crashes, and crashes repeatedly after restart attempts.

Reasonable efforts will be made to respond to Severity 1 service requests within one hour. For response efforts associated with Oracle Communications Network Software Premier Support and Oracle Communications Network Software Support & Sustaining Support, please see the Oracle Communications Network Premier & Sustaining Support and Oracle Communications Network Software Support & Sustaining Support sections above.

Except as otherwise specified, Oracle provides 24 hour support for Severity 1 service requests for supported programs (OSS will work 24x7 until the issue is resolved) when you remain actively engaged with OSS working toward resolution of your Severity 1 service request. You must provide OSS with a contact during this 24x7 period, either on site or by phone, to assist with data gathering, testing, and applying fixes. You are requested to propose this severity classification with great care, so that valid Severity 1 situations obtain the necessary resource allocation from Oracle.

Severity 2

You experience a severe loss of service. Important features are unavailable with no acceptable workaround; however, operations can continue in a restricted fashion.

Severity 3

You experience a minor loss of service. The impact is an inconvenience, which may require a workaround to restore functionality.

Severity 4

You request information, an enhancement, or documentation clarification regarding your software but there is no impact on the operation of the software. You experience no loss of service. The result does not impede the operation of a system.

4.2 Resolved Bug List

The following Resolved Bugs tables list the bugs that are resolved in Oracle Communications Cloud Native Core Release 3.25.2.2xx.0.

4.2.1 ATS Resolved Bugs

Release 25.2.202

Table 4-1 ATS 25.2.202 Resolved Bugs

Bug Number Title Description Severity Found in Release
38720772 ATS Jenkins UI fails to log in using the default policy credentials (25.2.101)

ATS login failed with default credentials.

Doc Impact:

There is no doc impact.

2 25.2.100
37735161 ATS Framework Lacks Support for ipFamilies Configuration in Helm Charts for Dual Stack Support

ATS Helm charts did not provide a configurable option to enable dual stack (IPv4 and IPv6) support. Instead, ATS relied on the Kubernetes cluster’s preferred ipFamilies configuration, which defaulted to either IPv4 or IPv6.

Doc Impact:

There is no doc impact.

3 25.1.100

4.2.2 BSF Resolved Bugs

Release 25.2.200

Table 4-2 BSF 25.2.200 Resolved Bugs

Bug Number Title Description Severity Found In Release
38618278 nrfclient_nw_conn_out_request_total metric for NfDeregistration is not pegging with configured priority value

The nrfclient_nw_conn_out_request_total metric for NfDeregistration was not pegged with the configured priority under the following configuration. Instead, it was pegged as UNKNOWN:

"trafficPrioritization": {
    "messageTypes": [
        { "priority": "1", "messageType": "AutonomousOnDemandNFRegistration" },
        { "priority": "1", "messageType": "NfHeartBeat" },
        { "priority": "1", "messageType": "AutonomousNfPatch" },
        { "priority": "1", "messageType": "NfDeRegistration" },
        { "priority": "1", "messageType": "AutonomousHealthCheck" }
    ],
    "featureEnabled": true,
    "incomingPriorityHeader": "3gpp-sbi-message-priority",
    "outgoingPriorityHeader": "3gpp-sbi-message-priority",
    "nfSubscribeMessageTypes": []
}

Doc Impact:

There is no doc impact.

2 25.2.200
38303397 Unable to edit Load Shedding Profiles after BSF Upgrade and Rollback

After upgrading BSF from 25.1.100 to 25.1.200, the Load Shedding Profiles (LSP), LSP-overload and LSP-congestion could not be edited through CNC Console. The Edit icon did not respond. This behavior persisted after the congestion profiles were migrated and BSF was rolled back to version 25.1.100.

Doc Impact:

There is no doc impact.

2 25.1.200
38562169 XFCC_header scenarios failed while changing config-map values for Ingress Gateway when integrating APIGW

XFCC_header scenarios failed because the Ingress Gateway config-map values were changed during the APIGW integration. The gateway property was replaced with gateway.server.webflux in the application.yaml file and in the gateways’ config-map to address an issue in which metadata value retrieval returned null because the key was treated as case-insensitive.

Doc Impact:

There is no doc impact.

2 25.2.200
38840813 PCF is in complete shutdown, when PCF Diameter Gateway pod is scaled down and BSF does not perform alternate routing to PCF

Alternate routing did not function when the PCF was in complete shutdown and the Diameter gateway pods were scaled down.

Diameter alternate routing was configured to route on the following error conditions (when the PCF responded):

  • 3002
  • 3004
  • timeout

BSF did not route to an alternate PCF when the TCP connection was down.

Doc Impact:

There is no doc impact.

2 25.1.200
38954031 log4j2_events_total metric is not seen in Prometheus for BSF Management service after performing in-service upgrade to 25.2.200. However, they are pegging correctly within the pod

During an in-service upgrade to BSF 25.2.200 from 25.2.101, the log4j2_events_total metric was not visible for bsf-management-service in the Prometheus endpoint. However, after logging into the pod and checking the metrics locally, the actuator metrics were observed to be present and updating.

Doc Impact:

There is no doc impact.

2 25.2.200
38656599 High-cardinality metrics were observed after upgrading from 25.1.100

After upgrading BSF from 25.1.100 to 25.2.200, the ocbsf_diam_response_latency_seconds_bucket and ocbsf_diam_service_overall_processing_time_seconds_bucket metrics appeared to drive elevated memory utilization on the Operations Services Overlay (OSO) prom-svr pod and triggered remote_write errors. An out-of-memory condition was observed at 16 GB, and after increasing the limit to 32 GB the pod continued to restart due to an overload condition. As a temporary solution, a drop action was applied on OSO to prevent scraping these metrics. After the change, the pod appeared stable, and other metrics began to display on the Grafana dashboard. The metric dimensions needed to be adjusted so that the metrics could be used.

Doc Impact:

There is no doc impact.

3 25.1.100
38636281 3002 errors were observed after Post Upgrade MOP execution for overload control change

After upgrading from BSF 23.4.x to 25.2.200 and applying the Overload and Congestion Control configurations, 3002 errors were observed for Re-Auth-Requests (RARs) toward the Call Session Control Function (CSCF).

Of the eight Diameter Gateway pods, only one initiated Capabilities-Exchange-Requests (CERs) in the outbound direction, while the remaining seven did not initiate CERs and reported 3002 errors for RARs.

Doc Impact:

There is no doc impact.

3 23.4.6
38786398 SCP alerts were triggered even after disabling SCP monitoring feature

SCP alerts were triggered even though the SCP Peer Health Check feature was disabled. The alerts fired because the ocbsf_oc_egressgateway_peer_health_status != 0 expression checks only for a nonzero value: when the feature was disabled, the metric value was set to -1, which still satisfied the != 0 condition.

As a resolution, the alert condition was set to ocbsf_oc_egressgateway_peer_health_status == 1.

Doc Impact:

Updated the details of SCP_PEER_UNAVAILABLE alert in List of Alerts section in Oracle Communications Cloud Native Core, Binding Support Function User Guide.

3 25.1.200
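The expression change in the entry above can be illustrated with a minimal sketch. The two Python functions below are a hypothetical stand-in for the Prometheus rule evaluation; the metric semantics (value -1 when the feature is disabled) come from the bug description.

```python
# Hypothetical stand-in for evaluating the alert expressions; the real
# rule is evaluated by Prometheus against the scraped metric.
def old_alert_fires(peer_health_status: int) -> bool:
    # Original expression: ocbsf_oc_egressgateway_peer_health_status != 0
    return peer_health_status != 0

def new_alert_fires(peer_health_status: int) -> bool:
    # Corrected expression: ocbsf_oc_egressgateway_peer_health_status == 1
    return peer_health_status == 1

# With the feature disabled, the metric is set to -1: the old rule
# still fires, while the corrected rule does not.
print(old_alert_fires(-1), new_alert_fires(-1))  # True False
```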
38854828 DateTimeParseException errors were observed in BSF Management service pod logs

The DateTimeParseException errors were observed in the pod logs for BSF Management service.

Doc Impact:

There is no doc impact.

3 25.1.100
38855968 Misconfiguration in BSF Management service with datasources.default.schema-generate=create_drop

It was identified that the application.properties file contained datasources.default.schema-generate=create_drop. This setting caused the schema to be dropped and recreated upon each application startup, which could result in loss of all production data and service outages.

The datasources.default.schema-generate=create_drop configuration was reviewed and changed to a safe value, such as “none”, to prevent service outages and data loss.

Doc Impact:

There is no doc impact.

3 25.2.200
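A minimal sketch of the corrected setting in application.properties; only the property name and the two values come from the description above, and the surrounding file contents are not shown.

```properties
# Unsafe: drops and recreates the schema on every application startup
# datasources.default.schema-generate=create_drop

# Safe: the application does not manage the schema lifecycle
datasources.default.schema-generate=none
```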
38475917 REST API to delete all the pcfBinding sessions from database is not working

An attempt was made to delete all available PCF binding sessions from the database by using the /oc-bsf-query/v1/pcfBindings/admin/databasecleanup REST API. The request failed with the following error: "error":"Not Found","path":"/oc-bsf-query/v1/pcfBindings/admin/databasecleanup".

Doc Impact:

There is no doc impact.

3 25.2.100
38380963 BSF_SERVICES_DOWN alert is updating as app-info service was down when scaling bsf-management pod to 0

The BSF_SERVICES_DOWN alert incorrectly reported that the app-info service was down when the bsf-management pod was scaled to 0.

Doc Impact:

The description of BSF_SERVICES_DOWN alert in BSF User Guide is changed from "{{$labels.microservice}} service is not running!" to "{{$labels.service}} service is not running!". For more information, see "BSF Alerts" section in Oracle Communications Cloud Native Core, Binding Support Function User Guide.

3 25.2.100
38648929 NullPointerException observed in CM service logs when changing log level for app-info service

When log level for app-info service was changed through CNC Console, a NullPointerException (NPE) was observed in the CM service logs, and a 500 Internal Server Error was displayed.

Doc Impact:

There is no doc impact.

3 25.2.200
38591830 Subscriber tracing "marker":{"name":"SUBSCRIBER"} is not observed in BSF revalidation message

Subscriber tracing marker {"name":"SUBSCRIBER"} was not observed on the bsf-revalidation message.

Doc Impact:

There is no doc impact.

3 25.2.100
38391474 Reconnection attempt from DSR to BSF does not happen when DPR has "BUSY" cause

When the Controlled Shutdown feature was enabled, BSF did not perform error mapping for Message Type: Disconnect-Peer-Request with Command Code: 282. During Controlled Shutdown execution, BSF sent a Disconnect-Peer-Request to DSR with the cause set to “BUSY” for an ongoing TCP and Diameter connection.

Doc Impact:

There is no doc impact.

3 24.2.2
38705674 Unexpected "Request body has already been claimed: Two conflicting sites are trying to access the request body." error occurred in BSF Management service after upgrading to 25.2.200

During the in-service upgrade test, a few errors were observed related to the following message:

Unexpected error occurred: Request body has already been claimed.

Doc Impact:

There is no doc impact.

3 25.2.200
38693223 BSF Management service is reporting "Error extracting UE ID from request body: No content to map due to end-of-input" error when both Enhanced logging and Enable UE Identifier is enabled

When Enhanced Logging and UE Identifier were enabled, BSF Management logs were flooded for every call.

Doc Impact:

There is no doc impact.

3 25.2.200
38934802 Remove unused Helm parameter isIpv6Enabled

The isIpv6Enabled parameter was deprecated and unused in Ingress Gateway, Egress Gateway, and Alternate-route services.

Accordingly, these attributes were removed from the custom-values.yaml file to avoid confusion.

Doc Impact:

Removed "isIpv6Enabled" parameter from Customizing BSF section in Oracle Communications Cloud Native Core, Binding Support Function Installation, Upgrade, and Fault Recovery Guide.

3 25.2.200
38963348 Reverting millisecond to second level precision for last_access_timestamp

Previously, last_access_timestamp values were updated with millisecond precision (for example, GREATEST(last_access_timestamp + 1, UNIX_TIMESTAMP(CURRENT_TIMESTAMP(3)) * 1000)) to improve conflict resolution and prevent duplicate timestamps during concurrent updates. This change aligned the column’s precision with that of other services.

However, multisite upgrade scenarios showed that upgraded sites stored timestamps in milliseconds, while non-upgraded sites continued to store timestamps in seconds. This discrepancy increased conflicts and introduced inconsistencies across sites, particularly during rolling or mixed-version upgrades. As a result, the millisecond-precision change was reverted to restore consistent behavior and compatibility across all environments.

Doc Impact:

There is no doc impact.

3 25.1.200
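The precision mismatch described above can be sketched as follows; next_timestamp is a hypothetical Python model of the reverted GREATEST(...) update expression quoted in the description, and the timestamp values are illustrative.

```python
# Hypothetical model of the reverted update expression
# GREATEST(last_access_timestamp + 1, UNIX_TIMESTAMP(CURRENT_TIMESTAMP(3)) * 1000).
def next_timestamp(last: int, now: int) -> int:
    # Guarantees a strictly increasing value on every update.
    return max(last + 1, now)

now_seconds = 1_700_000_000        # value written by a non-upgraded site
now_millis = now_seconds * 1000    # value written by an upgraded site

# The same wall-clock instant yields very different stored values, so
# millisecond writes always "win" timestamp-based conflict resolution
# against second-precision writes from non-upgraded sites.
print(next_timestamp(0, now_millis) > next_timestamp(0, now_seconds))  # True
```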
38729878 BSF Alertrule yaml file has extra space on namespace label

The BSF Alertrule yaml file had an extra space on the namespace label.

Doc Impact:

Updated the expression and description of the alerts for which the extra space is removed in the BSF_Alertrule.yaml file. For more details see, "BSF Alerts" section in Oracle Communications Cloud Native Core, Binding Support Function User Guide.

3 25.2.100
38898693 Flooding of org.eclipse.jetty logs were observed in 25.2.200

The org.eclipse.jetty logs were flooded in BSF 25.2.200.

Doc Impact:

There is no doc impact.

3 25.2.200
38390051 Observed data inconsistency after completion of rollback

Data inconsistency was observed in the ocpm_bsf.pcf_binding table across two sites following completion of the rollback.

A replication channel error was reported. The LOST_EVENTS incident occurred on the source.

Doc Impact:

There is no doc impact.

3 25.1.200
38940949 NRF Agent Import REST API with action=Create is successful, but configuration is not updated

The NRF Agent configuration import through REST API call with action=Create completed successfully, but the configuration was not updated.

Doc Impact:

There is no doc impact.

3 25.2.200
38365198 ocbsf-custom-values.yaml file does not expose containerPortNames parameter

The custom-values.yaml file for BSF 25.1.100 did not contain the containerPortName parameter, which is used to provision backendPortName in the Cloud Native Load Balancer (CNLB) annotations. The containerPortName parameter was added to the custom-values.yaml file to reduce the effort required to locate it in the charts.

Doc Impact:

There is no doc impact.

4 25.2.100
36866750 "Failed to update stats" (<class 'requests.exceptions.MissingSchema'>) error observed on Performance pods

The “Failed to update stats” error (<class 'requests.exceptions.MissingSchema'>) was observed on Performance pods.

Doc Impact:

There is no doc impact.

4 24.2.200
38642472 Some of the configuration screens are not exported from CNC Console even when they are present on CNC Console

Certain configuration screens such as Perf-Info Logging Level, App-Info Logging Level, and Error Code Series List present in CNC Console are not exported when using the Bulk Export option. These screens are excluded from the exported output. This issue is observed in both freshly installed BSF deployments and upgraded environments (upgraded from 25.2.100 to 25.2.200).

Doc Impact:

There is no doc impact.

3 25.2.200

Note:

Resolved bugs from 25.1.200 and 25.2.100 have been forward ported to Release 25.2.200.

4.2.3 cnDBTier Resolved Bugs

Release 25.2.201

Table 4-3 cnDBTier 25.2.201 Resolved Bugs

Bug Number Title Description Severity Found In Release
38684514 API for changing preferredIpFamily from IPV4 to IPV6 and vice versa gives partial response on multi-channel setups

The API for changing the preferredIpFamily in dual stack setups (from IPv4 to IPv6 or vice versa) did not return a detailed JSON response for multi-channel configurations. Instead of listing all replication channel groups configured for the local site, the API response returned a randomly selected replication channel group per remote site.

Doc impact:

Updated the "Support for Dual Stack" section in Oracle Communications Cloud Native Core, cnDBTier User Guide.

2 25.1.201
38671447 Replication service resolves to IPv6 IP instead of FQDN, causing TLS SAN mismatch and connection failures

The replication microservice used a raw IPv6 address instead of the configured FQDN to connect to remote sites, causing TLS validation failures due to certificate SANs containing only FQDNs. This led to REST API call failures, repeated communication errors in the logs, and prevented replication from initializing.

Doc impact:

Updated the "Support for Dual Stack" section in Oracle Communications Cloud Native Core, cnDBTier User Guide.

2 25.1.201
38567831 SLF DBTier Recover script is failing during migration to new site

While migrating to a new site, the SLF DBTier Recover script failed to run successfully. The migration process included uninstalling the previous site, cleaning up related database entries, upgrading existing sites, and configuring the new site on an updated cluster. However, due to communication issues between the new and existing sites, the recovery script was unable to complete, resulting in a failed migration scenario.

Doc impact:

Updated the "Removing a Georedundant cnDBTier Cluster" section in Oracle Communications Cloud Native Core, cnDBTier User Guide.

2 25.1.201
38385887 During BT Data/Voice Call Model performance test, mysql-cluster-db-backup-manager-svc pod restart has been observed unexpectedly

During high-load performance testing on a multi-site cluster, repeated restarts of a subset of ndbappmysqld pods on one site resulted in the unexpected restart of the mysql-cluster-db-backup-manager-svc pod. This behavior highlights a potential stability issue where ongoing disruptions to ndbappmysqld pods can impact the reliability of backup operations managed by the mysql-cluster-db-backup-manager-svc pod, especially during periods of system stress or failover scenarios.

Doc impact:

There is no doc impact.

2 25.2.100
38677062 SLF dbtremovesite is restarting the application pods

When the dbtremovesite script was run as part of a site removal or migration process in a multi-site SLF environment, it caused unexpected restarts of system services, specifically the monitor-service and backup-manager-svc pods. These restarts in turn triggered associated application pods to restart, even though the active sites were handling live traffic. This behavior is not aligned with the intended function of the script, as standard removal procedures do not require disruption to application services on active sites.

Doc impact:

There is no doc impact.

2 25.1.100
38284918 Down site local backup during non-fatal GRR status changed from COMPLETED to FAILED in backup-manager-svc pod log

The backup completion status was not verified before the databases were dropped during Georeplication Recovery (GRR). A check was added to confirm that the backup completed successfully before the databases are dropped.

Doc impact:

There is no doc impact.

3 25.1.200
38181539 REPLICATION_DOWN alert falsely triggers due to missing metrics during prometheus scrapes

The REPLICATION_DOWN alert fired when metrics were missing during Prometheus scrapes. The alert logic was updated to fire only on an actual replication failure or switchover, and not when metrics are merely missing.

Doc impact:

There is no doc impact.

3 23.4.3
38582579 Rest API of db-replication-svc will not be accessible outside db-replication-svc during migration of http

During migration from HTTPS to HTTP or vice versa, the REST API of db-replication-svc becomes inaccessible outside the service due to a configuration mismatch. As a result, while db-replication-svc expects HTTPS connections, client services such as monitor-svc and helm test perceive HTTPS as disabled and attempt to connect over HTTP. This mismatch in protocol configuration leads to failed REST API calls during the initial handshake, resulting in observed errors and disrupted communication during the migration process.

Doc impact:

There is no doc impact.

3 25.2.100
38487200 Remove export of DBTIER_RELEASE_NAME in horizontal scaling procedure using dbtscale_ndbmtd_pods

The horizontal scaling procedure for ndbmtd pods previously required users to manually export the DBTIER_RELEASE_NAME environment variable before running the dbtscale_ndbmtd_pods script. With this fix, the manual export is no longer necessary. As a result, the step to manually export DBTIER_RELEASE_NAME had to be removed from the horizontal scaling documentation to reflect the updated process.

Doc impact:

Updated the "Horizontal Scaling of ndbmtd pods using dbtscale_ndbmtd_pods" section to remove the export DBTIER_RELEASE_NAME command.

For more information, see Oracle Communications Cloud Native Core, cnDBTier User Guide.

3 25.2.100
38637939 Documentation error in "Remove cnDBTier Geo-Redundant Site" procedure

The documentation needed to be updated so that the reference log aligned with the intended removal of cluster1.

Doc impact:

Updated Step 4 of "Remove cnDBTier Geo-Redundant Site" procedure in Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

3 25.2.101
38646497 25.1.201-1 : Observing outbound traffic on ipv4

Outbound traffic from the DB replication service was observed using IPv4, even though IPv6 was configured as the preferred protocol in the service settings. This occurred despite internal and most external services being set to use IPv6, with only optional dual-stack fallback.

Doc impact:

There is no doc impact.

3 25.1.201
38648664 GRR API for marking remotesite as failed giving OK response for unconfigured remotesite

The GRR API for marking a remotesite as failed returned a 200 OK response even when the specified remotesite was not configured in the setup. The API indicated a successful operation regardless of the existence or configuration status of the remotesite, which was incorrect.

Doc impact:

There is no doc impact.

3 25.2.101
38652957 Real Time Replication Status shows incorrect status of replication when one replication channel is down between 2 sites

In a 4-site multichannel setup, the system reported the overall replication status as UP for a site even when one or more replication channels between sites were down. The API did not evaluate or display replication status on a per-channel basis, resulting in a misleading aggregate site-level status that indicated healthy replication despite individual channel failures.

Doc impact:

There is no doc impact.

3 25.2.100
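The aggregation flaw in the entry above can be sketched as follows; both helper functions are hypothetical illustrations of the aggregation logic, not the actual API implementation.

```python
def site_status_buggy(channels: list[str]) -> str:
    # Flawed aggregate: reports UP if any channel is up, hiding
    # individual channel failures at the site level.
    return "UP" if any(c == "UP" for c in channels) else "DOWN"

def site_status_per_channel(channels: list[str]) -> str:
    # Per-channel evaluation: the site is UP only when every
    # replication channel between the sites is up.
    return "UP" if all(c == "UP" for c in channels) else "DOWN"

channels = ["UP", "DOWN", "UP"]  # one of three channels is down
print(site_status_buggy(channels), site_status_per_channel(channels))  # UP DOWN
```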
38660810 Real Time Replication Status API responds incorrect Error Code When Monitor Service cannot communicate with all SQL pods on a site

The Real Time Replication Status API returned a 500 Internal Server Error or timed out whenever the monitor service lost communication with all SQL pods on Site 1, rather than returning the expected 503 Service Unavailable status code. This incorrect error code could mislead clients into interpreting the issue as a server malfunction instead of a temporary service unavailability.

Doc impact:

There is no doc impact.

3 25.2.100
38660862 New Real Time Replication Status REST API Returns 200 Instead of 503 when Communication breaks between Monitor service and Replication services(1 or more) on Site 1

The new Real Time Replication Status REST API returned a 200 OK status code even when there was a communication failure between the monitor service and one or more replication services on Site 1. In these cases, the response showed allSqlStatusDetails = null for the affected replication service, instead of returning a 503 Service Unavailable error.

Doc impact:

There is no doc impact.

3 25.2.100
38669677 Real Time Replication Status API Only Reports Local Site Replication Status and Fails to Return Replication Details for Remote Sites

The Real Time Replication Status API returned replication status only for the local (requesting) site, omitting replication details for remote sites in the cluster.

Doc impact:

There is no doc impact.

3 25.2.100
38660967 Real Time Replication Status returns 502 or Timeout Instead of 400/404 when Monitor Service is down

When the Monitor service was down, the Real-Time Replication Status API returned a 502 Bad Gateway error or timed out, rather than providing a clear and appropriate 400 or 404 error response. This incorrect status handling led to misleading client behavior and prevented accurate identification of the monitor’s unavailability.

Doc impact:

There is no doc impact.

3 25.2.100
38651597 cnDBtier Replication Service leader Pod is not coming up and taking some time

During fresh installation in a multi-site environment, the Replication Service leader pod did not come up promptly when other sites in the cluster were not yet installed. The pod only started successfully after installation progressed on additional sites. Investigation found that this delay was related to certain service containers being unhealthy during the initial startup phase, resulting in extended initialization times for the leader pod.

Doc impact:

There is no doc impact.

3 25.1.201
38685798 dbtreplmgr prints http connection on HTTPS TLS enabled setup

When running the dbtreplmgr script on a setup with HTTPS and TLS enabled, the script output incorrectly indicated HTTP connections in the logs, even though the environment was configured for secure HTTPS communication. This misrepresentation in the script output can cause confusion and does not accurately reflect the actual security protocol in use.

Doc impact:

There is no doc impact.

3 25.1.201
38685274 Continuous ERROR logs being printed in db-monitor-svc

After upgrading a 3-site, 2-channel setup with HTTPS and TLS enabled from 25.1.201-2 to 25.1.201-3, continuous ERROR logs were generated in db-monitor-svc indicating "[DbtierRetrieveBackupTransferMetrics] No Backups Transfers Started to provide the Backup Status Metrics." These log messages appeared at a frequency of approximately once per minute, despite the GRR operation completing successfully on the sites.

Doc impact:

There is no doc impact.

3 25.1.201
38710352 Update required in output of dbtscale_ndbmtd_pods in phase zero(0)

When running the dbtscale_ndbmtd_pods script in Phase 0 on a 3-site, 3-channel ASM-enabled setup configured for dual stack with IPv6 preferred, the script output displayed usage of IPv4 for internal operations. This is inconsistent with the deployment's configured IPv6-preferred protocol and may lead to confusion or misinterpretation during scaling activities.

Doc impact:

There is no doc impact.

3 25.2.101
38710401 Update required in output of dbtreplmgr in phase zero(0)

When running the dbtreplmgr script in Phase 0 on a 3-site, 3-channel ASM-enabled setup configured for dual stack with IPv6 preferred, the script output indicated usage of IPv4 for internal operations. This behavior is inconsistent with the deployment’s preferred IPv6 configuration and may cause confusion during initial setup and verification.

Doc impact:

There is no doc impact.

3 25.2.101
38710531 Update required in output of dbtscale_vertical_pvc in phase zero(0)

During vertical scaling operations using the scaling script, it was observed that the script defaulted to using IPv4 for internal processes, even though the environment was configured for dual stack with IPv6 as the preferred protocol.

Doc impact:

There is no doc impact.

3 25.2.101
38631076 CNDB - Pending GRR should fail if it is stuck for a longer duration

Pending Georeplication Recovery (GRR) operations did not automatically fail when they made no progress for an extended period. During upgrade and rollback procedures in a multi-site environment, a GRR process was observed to stay stuck in a pending state for over 12 hours without progressing or timing out.

Doc impact:

There is no doc impact.

3 25.1.103
38724990 DBTIER 25.1.201 : DBtier Installation Guide does not have Post upgrade checks related to schema

cnDBTier documentation did not include procedures or checks to verify that the database schema had been correctly upgraded and that the upgrade was fully successful.

Doc impact:

Added the "Verifying the Schema Upgrade" section to verify if the database schema upgrade has completed successfully on any site.

For more information, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

3 25.1.201
38768457 Inconsistent Replication Status Between Realtime Replication APIs When DB-Monitor to Replication Service Communication Is Broken

An inconsistency was observed between the cluster-level and site-specific replication realtime status APIs when communication between the db-monitor service and a replication service was disrupted. In such cases, the cluster-level API reported the replication status between sites as DOWN, while the site-specific API reported it as UP. Both APIs are expected to provide consistent status reporting.

Doc impact:

There is no doc impact.

3 25.2.101
38818934 Wrong Rest URL mentioned in DBTier Status API section in 25.2.200 User Guide

The DBTier Status API section in the 25.2.200 User Guide listed an incorrect REST URL:

http://base-uri/db-tier/db-tier/replication/status/realtime.

Doc impact:

The cnDBTier Status API section has been updated to reflect the correct URL: http://base-uri/db-tier/replication/status/realtime.

For more information, see Oracle Communications Cloud Native Core, cnDBTier User Guide.

3 25.2.101
38855643 BSF 25.2.101 replication down after upgrade

After upgrading the BSF application and cnDBTier (from 25.1.100/25.1.103 to 25.2.101) across three sites in a GR setup, replication from Site 3 to Site 2 failed. Logs indicated an "Unknown database" error for bsf_ocbsf_overload, which was missing on Site 3 but present on Site 1 and Site 2. The issue was first observed after upgrading Site 2’s cnDBTier. All database privileges were confirmed to be correct, and pre-upgrade health checks reported no missing tables.

Doc impact:

There is no doc impact.

3 25.2.101
38865774 Metrics are getting missed on site-3 on a 3 site GR Setup - 6 replication group

In a 3-site GR setup with 6 replication groups (IPv6), metrics for certain nodes (for example, node_id 56 and 57) are not being reported on the Grafana dashboard, even though traffic on these nodes is running and replication is functioning correctly. Specifically, metrics such as db_tier_api_bytes_sent_count and db_tier_api_wait_exec_complete_count are missing for the affected nodes, indicating a gap in metric collection or reporting despite normal data and replication activity.

Doc impact:

There is no doc impact.

3 25.1.201
38971451 binlog purge errors observed during a long duration PCF performance run

During a long-duration PCF performance run, the replication SQL pods intermittently logged “Bin Log Sizes Empty at the local site” while running scheduled binlog purge checks.

Doc impact:

There is no doc impact.

3 25.1.201
38894669 Site Specific PCF DB GRANTS not being replicated across sites using MultiRep Channels

In multi-site deployments using MultiRep channels, site-specific database users and GRANTs were only intermittently replicated across sites, resulting in inconsistent permissions between sites.

Doc impact:

Added the section "Mandatory Guidelines for User and Grant Operations" to provide mandatory guidelines for schema, user, and grant operations. For more information, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

3 25.1.200
38958827 Request for Documentation and Audit Script PCF NF SKIP ERROR Configuration

Neither documentation for the PCF NF replication SKIP ERROR configuration, covering recommended skip-error threshold values per NF, nor the referenced database audit script was available.

Doc impact:

cnDBTier documentation was updated to include the section "cnDBTier Replication Skip Errors", which provides information on replication skip errors as part of the replication error-handling mechanism when applying epochs between sites.

For more information, see Oracle Communications Cloud Native Core, cnDBTier User Guide.

3 24.2.1
38717425 DBTier: Unsupported characters in backup encryption password

Backup and restore operations were failing due to the use of unsupported characters in the backup encryption password. The system allowed passwords containing characters outside the permitted set, leading to failures during backup and recovery.

Doc impact:

There is no doc impact.

4 24.2.6

Note:

Resolved bugs from 25.1.100, 25.1.201, and 25.2.101 have been forward ported to release 25.2.201.

4.2.4 CNC Console Resolved Bugs

Release 25.2.201

Table 4-4 CNC Console 25.2.201 Resolved Bugs

Bug Number Title Description Severity Found in Release
39091288 SLF 25.2.100 Upgrade Failure

In 25.2.100, a new -ingress-gateway-intra-nf Service was added and inherited the same labels as the main -ingress-gateway Service. This caused mapping conflicts and external FQDN routing issues. As a resolution, -ingress-gateway-intra-nf is now disabled by default so that only the main -ingress-gateway Service is active.

Doc Impact: All references to the -ingress-gateway-intra-nf service were removed from the Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.
2 25.2.100
39189294 Rollback Failure After Failed Upgrade When Intra-NF Service Becomes Optional Across Revisions

When a user upgraded from a CNC Console revision with intra-NF services enabled to a revision where intra-NF is disabled or optional in the Helm configuration, if the upgrade failed and the release entered a Helm failed state, a rollback to the previous healthy revision could fail with the error: no Service with the name “<release>-ingress-gateway-intra-nf” found. This was observed during failed-upgrade recovery and was caused by intermediate or inconsistent Helm release state reconciliation for optional resources.

Doc Impact: The Oracle Communications Cloud Native Configuration Console Troubleshooting Guide was updated with steps to resolve this scenario.
3 25.2.101

Release 25.2.200

Table 4-5 CNC Console 25.2.200 Resolved Bugs

Bug Number Title Description Severity Found in Release
38319858 POLICY_READ Role enabled but user is also able to edit the parameters

Users who only had the POLICY_READ role assigned were able to edit parameters on some Policy screens. The "General Configurations" screen correctly blocked write operations with a "403 FORBIDDEN" error, but the "policy-project" screen allowed users to save changes. This happened because the Policy API prefix was not added on certain screens.

Doc Impact:

There is no documentation impact.

3 24.2.3
38528957 CNC Console Logs concerns/requests

CNC Console log data issues such as missing login error logs, events logged at the DEBUG level instead of WARNING or INFO, lack of audit or security attributes, and incomplete metadata for some resource access logs were reported. Some activities, such as failed logins, were not generating expected Splunk events, and fields like AuthenticationType were set to unknown.

Doc Impact:

The "Logs" section in Oracle Communications Cloud Native Configuration Console Troubleshooting Guide has been updated to include additional NF (BSF, POLICY, SCP, and so on) SECURITY and AUDIT log examples.

4 23.4.1
38524240 Format Issues identified from Security CNCC Log Analysis

Log messages were not in standard JSON format, and several attributes contained placeholders instead of values. Header and payload fields were also formatted as custom key-value pairs, not JSON.

Doc Impact:

All logs in the "Logs" section of Oracle Communications Cloud Native Configuration Console Troubleshooting Guide have been standardized to use JSON format for the message, headers, and payload fields, and placeholders have been removed. The log message label has been corrected to populate as a JSON object by default.

4 23.4.1
38753758 CNCC installation is failing in Openshift 4.14

If a user updated the runAsUser field in the securityContext of a pod or container to use an arbitrary user ID instead of the default hardcoded value, the iam-kc pod entered a CrashLoopBackOff state. To resolve this issue, the IAM Dockerfile was updated to support the use of arbitrary user IDs as specified in the runAsUser field.

Doc Impact:

There is no documentation impact.

3 25.2.100
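The fix for bug 38753758 concerns the pod securityContext. The following fragment is an illustrative sketch only, not the product's shipped manifest; the pod name, image, and UID are hypothetical placeholders:

```yaml
# Illustrative only: overriding the hardcoded default with an arbitrary UID,
# as OpenShift assigns one from the namespace UID range.
apiVersion: v1
kind: Pod
metadata:
  name: iam-kc-example            # hypothetical name
spec:
  securityContext:
    runAsUser: 1000720000         # arbitrary UID from the namespace range
    runAsNonRoot: true
  containers:
    - name: iam-kc
      image: registry.example.com/cncc/iam:example-tag   # placeholder image
```

With the updated IAM Dockerfile, the container starts under whatever UID the runAsUser field supplies instead of entering CrashLoopBackOff.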

4.2.5 CNE Resolved Bugs

Release 25.2.200

There are no resolved bugs in this release.

Note:

Resolved bugs from 25.2.1xx have been forward ported to Release 25.2.200.

4.2.6 NSSF Resolved Bugs

Release 25.2.200

Table 4-6 NSSF 25.2.200 Resolved Bugs

Bug Number Title Description Severity Found in Release
38307293 NSSF - Missing max concurrent streams on ingress gateway

The NSSF ingress gateway did not populate the serverDefaultSettingsMaxConcurrentStream value in the HTTP/2 settings, so HTTP/2 clients treated the maximum concurrent streams as 1, which caused a traffic bottleneck.

Doc impact:

There is no doc impact.

2 25.1.100
38159590 [25.1.200] Multiple instances of restart for ns-selection pods were observed over a run of 18 hr with reset stream ( Running 7K success and 3.5K failure as part of http reset stream on Site1)

The ns-selection pods restarted multiple times during an 18-hour run when HTTP/2 reset stream traffic was sent. This occurred in a three-site GR setup with traffic on Site1, where 10.5K TPS included 7K successful requests and 3.5K reset stream requests.

Doc impact:

There is no doc impact.

2 25.1.200
38716716 Need 25.1.100 - Unexpected behavior for NSSF nrf-client when scaled to zero

When you scaled the NSSF nrf-client pod to zero, the NSSF NF registration status in NRF was inconsistent: it sometimes changed to SUSPENDED but sometimes was DEREGISTERED. This behavior occurred when the nrf-client received the Deregistration status from app-info during termination and sent a DELETE request to NRF. In 25.2.200, nrf-client was removed as a critical service in app-info by updating the custom values YAML, which prevented deregistration when nrf-client was scaled down.

Doc impact:

There is no doc impact.

3 25.1.100
38341986 NSSF ATS 25.1.100: NRF Stub Server Returning Internal Error (500)

An Automated Test Suite scenario failed when two NRF stub servers were configured to return 503 and the third stub server was expected to return a successful 2xx response, but it returned a 500 internal error. As a result, all NRF stub servers were marked as UNHEALTHY and the expected nnrf-disc message was not sent to NRF, and the test failed with “No Healthy NRF Routes available, cannot send Request.”

Doc impact:

There is no doc impact.

3 25.1.100
38125454 NSSF ATS 25.1.100 MultiplePLMN Feature failing because expiry parameter set to statically.

NSSF ATS regression test cases for the MultiplePLMN feature failed because the request files contained a statically set expiry value (for example, 2025-06-25T09:23:45.123456Z). NSSF returned HTTP/2 400 Bad Request with OPTIONAL_IE_INCORRECT and detail: 'Bad Request. Wrong duration'.

Doc impact:

There is no doc impact.

3 25.1.100
38545826 NSSF 25.1.200 Expired subscriptions: The database is not cleaning after subscription

NSSF continued to send notifications to AMFs for subscriptions that had already expired, and the expired subscription records were not purged from the database until 24 hours after creation.

Doc impact:

There is no doc impact.

3 25.1.200
38500020 NSSF 25.1.200 : Documentation for few features missing in NSSF GUI

The NSSF GUI did not include documentation links for several regression feature files, including Delete_NssfEventSubscription.feature.

Doc impact:

Updated the "Configuring NSSF using CNC Console" section in Oracle Communications Cloud Native Core, Network Slice Selection Function User Guide.

3 25.1.200
38753670 NSSF install failing on Openshift 4.14

NSSF installation on OpenShift 4.14 failed because OpenShift blocked pod creation when the pods used a fixed runAsUser value that was outside the namespace UID range, and the preinstall hook failed to start when it tried to create temporary files under /tmp on a read-only file system.

Doc impact:

There is no doc impact.

3 25.1.200
38192130 25.1.200: [ Incorrect response code in case of expired token is sent in request ]

When you sent a request with an expired token, NSSF returned HTTP status 408 with WWW-Authenticate: ... error="invalid_token" instead of returning 401 for invalid_token.

Doc impact:

There is no doc impact.

3 25.1.200
38219417 NSSF 25.1.200 Discrepancies in Alert Names Between User Guide and Rule Files

Alert names in the user guide did not match the alert rule YAML file, and one alert appeared only in the user guide.

Doc impact:

Updated the "NSSF Alerts" section in Oracle Communications Cloud Native Core, Network Slice Selection Function User Guide.

3 25.1.200
37966602 NSSF is compressing the response when response size is less than 1024 Bytes for an availability PUT request, when gzip compression is enabled

When gzip compression was enabled, NSSF compressed responses to availability PUT requests even when the response size was less than 1024 bytes.

Doc impact:

There is no doc impact.

3 25.1.100
37966541 NSSF is not able to handle avail request when Payload is more than 1 MB and gzip feature is enable

When you sent an availability PUT request with a payload larger than 1 MB while gzip compression was enabled, NSSF returned an HTTP request timeout instead of returning 413 Request Entity Too Large.

Doc impact:

There is no doc impact.

3 25.1.100
37639879 oauth failure is not coming in oc_ingressgateway_http_responses_total metrics

When you sent traffic with invalid OAuth access tokens, the OAuth failure responses were not counted in the oc_ingressgateway_http_responses_total metric even though they were counted in oc_oauth_validation_failure_total.

Doc impact:

There is no doc impact.

3 25.1.100
37684124 [10.5K Traffic] while adding the empty frame in all requests, NSSF rejected the ns-selection traffic, dropping 0.045% with a 503 error code

When you enabled empty frames in all ns-selection and ns-availability requests and ran 10.5K traffic, NSSF rejected ns-selection traffic and dropped 0.045% of requests with HTTP 503 errors.

Doc impact:

There is no doc impact.

3 25.1.100
37048499 GR replication is breaking post rollback to of CNDB 24.2.1

CNDB replication failed after you rolled back from 24.3.0-rc.1 to 24.2.1-rc.4 in a two-site GR setup. Replication went down after the second site rollback completed and the first site rollback finished.

Doc impact:

There is no doc impact.

3 24.3.0
37303227 [NSSF 24.3.0] [EGW-Oauth feature] "Oc-Access-Token-Request-Info:" IE should not come in notification.

When you enabled OAuth token requests for subscription notifications, NSSF included the Oc-Access-Token-Request-Info header in notification messages.

Doc impact:

There is no doc impact.

3 24.3.0
37216832 [9K TPS Success] [1K TPS Slice not configured in DB] NSSF is sending the success responses for slice which has not configured in database and failure response of slice which has configured in database for pdu session establishment request.

During a PDU session establishment test with 9K TPS for slices configured in the database and 1K TPS for slices not configured in the database, NSSF returned incorrect results. NSSF returned successful responses for 0.4% of requests for slices that were not configured, and it returned 403 and 503 errors for some requests for slices that were configured.

Doc impact:

There is no doc impact.

3 24.3.0
36285762 After restarting the NSselection pod, NSSF is transmitting an inaccurate NF Level value to ZERO percentage.

After you restarted the ns-selection pod, NSSF reported an incorrect NF-level load of 0% in the /load response and in the 3gpp-Sbi-Lci header, even though the NF-service-instance load value was nonzero (for example, 29%).

Doc impact:

There is no doc impact.

3 23.4.0
35888411 Wrong peer health status is coming "DNS SRV Based Selection of SCP in NSSF"

When peer monitoring was enabled and DNS SRV selection was disabled, the peer health status reported an invalid SCP IP address as healthy and did not report health status for the peer configured through a virtual host.

Doc impact:

There is no doc impact.

3 23.3.0
38857519 NSSF 25.1.200 - Duplicated key: port in the ingress-gateway section of default CV

The default NSSF custom values file defined the ports key twice in the ingress-gateway section, which could cause exceptions in external tools that generate custom values files.

Doc impact:

There is no doc impact.

4 25.1.200
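Duplicate keys of the kind described above look harmless but are rejected or silently collapsed by strict YAML parsers. A hypothetical fragment illustrating the pattern (the key contents and port values are invented, not the actual NSSF defaults):

```yaml
ingress-gateway:
  ports:
    - name: http
      port: 8080
  ports:              # duplicate key: lenient parsers keep only the later value,
    - name: metrics   # strict parsers raise a duplicate-key error
      port: 9090
```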
37323951 prometheus url comment should be mentioned overload and LCI/OCI feature in NSSF CV file

The comment for the Prometheus URL in the NSSF custom values file stated that the URL was mandatory only for the LCI/OCI feature, even though it was also required for the Overload feature.

Doc impact:

There is no doc impact.

4 24.3.0
37622760 NSSF should send 415 responses to ns-selection and ns-availability requests if their content type is invalid.

When you sent ns-selection or ns-availability requests with an invalid Content-Type header (for example, multipart/form-data), NSSF returned a 500 error instead of returning 415 Unsupported Media Type.

Doc impact:

There is no doc impact.

4 25.1.100
37617910 Subscription Patch should be a part of Availability Sub Success (2xx) % panel in Grafana Dashboard

The Grafana Availability Sub Success (2xx) % panel did not include subscription PATCH results, so subscription patch failures (such as 405 errors) were not shown in the service status view.

Doc impact:

There is no doc impact.

4 25.1.100
37617910 If ns-selection and ns-availability are invalid Accept Header, NSSF should not send 404 responses of UnSubscribe and subscription patch request. it should be 406 error code and "detail":"No acceptable".

When you sent ns-selection and ns-availability requests with an invalid Accept header, NSSF returned 404 responses for subscription delete and subscription patch requests instead of returning a 406 Not Acceptable response with a “No acceptable representation” detail.

Doc impact:

There is no doc impact.

4 25.1.100
37612743 If URLs for ns-selection and ns-availability are invalid, NSSF should return a 404 error code and title with INVALID_URI.

When you sent ns-selection and ns-availability requests with an invalid URL, NSSF returned inconsistent errors: the ingress gateway returned 404 when the request used an incorrect microservice address, but NSSF returned 400 with title set to INVALID_URI when the request reached NSSF with an incorrect endpoint.

Doc impact:

There is no doc impact.

4 25.1.100
36881883 In Grafana, Service Status Panel is showing more than 100% for Ns-Selection and Ns-Avaliability Data

The Grafana Service Status panel showed percentages greater than 100% for ns-selection and ns-availability data.

Doc impact:

There is no doc impact.

4 24.2.0

4.2.7 OSO Resolved Bugs

Release 25.2.201

Table 4-7 OSO Resolved Bugs 25.2.201

Bug Number Title Description Severity Found in Release
39120693 On installing APM, pod is stuck in CrashLoopBackOff State if TLS and MTLS section remains unchanged with default values shared with package

At the time of Alert Processing microservice installation along with OSO, pods kept restarting because TLS was expected to be enabled by default but was instead set to disabled by default. This is fixed, and TLS is now enabled by default for the connection to Kafka.

Doc impact: There is no doc impact.

2 25.2.200
39130070 TLS CA certificate is to be mounted via secret for APM instead of directly passing TLS certificate into APM custom yaml

The Alert Processing microservice custom values file read TLS connection credentials (for example, the CA certificate, the TLS certificate, and its key) as literals.

As part of the fix, these values have been converted to Kubernetes secrets: the user is expected to create the secrets first and then pass the secret object names in the Alert Processing microservice custom values file.

Doc impact: There is no doc impact.

2 25.2.200
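As a sketch of the fixed approach (the secret and key names here are hypothetical, not the product's actual schema), the TLS material is stored in a Kubernetes Secret and only the secret's name goes into the custom values file:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: apm-kafka-tls              # hypothetical secret name
type: Opaque
data:
  ca.crt: <base64-encoded CA certificate>
  tls.crt: <base64-encoded client certificate>
  tls.key: <base64-encoded private key>
```

The user creates this secret first and then passes its name (rather than the literal certificate text) in the Alert Processing microservice custom values file.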
39000311 Indentation issue with ocoso_csar_25_2_200_0_0_alm_custom_values.yaml causing errors

The Alertmanager custom values file had incorrect indentation in the resource profile block. The indentation has been fixed, and installation now succeeds.

Doc impact: There is no doc impact.

3 OSO 25.2.200
39101019 OSO APM Service Unable to Assign IPv6 ClusterIP in DualStack Deployment

While installing OSO 25.2.200 with DualStack (IPv4_IPv6) enabled, the OSO APM Service did not assign an IPv6 ClusterIP. The service received only an IPv4 ClusterIP even though the Kubernetes cluster was configured for DualStack.

Doc impact: There is no doc impact.

3 OSO 25.2.200
39173807 Lack of startupProbe for OSO ALM

The OSO components AlertManager, Prometheus, and Alert Processing microservice did not use the startupProbe construct. A startupProbe is used when pods are slow to come up or depend on an external service that takes time to become active; it helps avoid pod restarts in these scenarios.

To achieve the delay so that the liveness probe does not force a restart on failure, initialDelaySeconds was added, which delays the start of the liveness probe by 45 seconds.

Doc impact: There is no doc impact.

3 OSO 25.2.200
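The two mechanisms can be compared in a short, illustrative probe configuration; the endpoint path and port are examples, not OSO defaults:

```yaml
containers:
  - name: alertmanager
    livenessProbe:
      httpGet:
        path: /healthz            # example endpoint
        port: 9093
      initialDelaySeconds: 45     # the applied fix: liveness checks start after 45s
    # A startupProbe is the more flexible alternative for slow starts:
    # startupProbe:
    #   httpGet:
    #     path: /healthz
    #     port: 9093
    #   failureThreshold: 30
    #   periodSeconds: 5          # tolerates up to 150s of startup time
```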

Note:

Resolved bugs from 25.2.1xx have been forward ported to Release 25.2.201.

Release 25.2.200

There are no resolved bugs in this release.

Note:

Resolved bugs from 25.2.1xx have been forward ported to Release 25.2.200.

4.2.8 OCCM Resolved Bugs

Release 25.2.200

There are no resolved bugs in this release.

4.2.9 NRF Resolved Bugs

Release 25.2.201

Table 4-8 NRF 25.2.201 Resolved Bugs

Bug Number Title Description Severity Found In Release
38834647 Increase of failure rate % after in service upgrade to 24.2.4 and to 25.1.202

During network instability and cluster churn, traffic failures and latency increased because thread blocking prevented proper Ingress Gateway pod operations. Endpoint request processing was reworked to use a separate thread pool, which improved handling of requests and stabilized traffic after upgrades.

Doc Impact:

There is no doc impact.

2 25.1.202
38738894 Peer health status gives empty response for vFQDN when SCP Health check feature is enabled and changes are done in existing DNS service records

The Egress Gateway microservice returned empty peer health responses when DNS records for virtual peers were updated, because the peer health table was refreshed incorrectly. Comparison and refresh logic were fixed so health status tracking now works correctly after DNS record changes.

Doc Impact:

There is no doc impact.

2 25.2.200
39119585 Discovery pods keeps on restarting during performance traffic run when 5 IGW pod restarted

Discovery pods repeatedly restarted during high traffic runs if five Ingress Gateway pods were restarted at the same time. Improvements were made to increase pod resilience to such scenarios, which prevents repeated crashes.

Doc Impact:

There is no doc impact.

2 25.2.200
38526899 Notification is not triggered from NRF to SCP when CDS service is bought down for an NRF instance in remote set and a new profile is registered

When the Cache Data microservice was taken down for an NRF instance in a remote set and a new NF profile was registered, notifications from NRF to SCP were not triggered. This occurred because the new subscription state information could not be written or read while the cache was down, so the SCP was unaware of the update and only learned of it through a later audit or after cache restoration. The logic and documentation were updated to clarify the recommended configuration of NRF sets and Virtual FQDN mapping, and error handling was improved to describe outcomes and mitigation better when cache-based synchronization is unavailable.

Doc Impact:

Updated the configuration details in the Enhanced NRF Set Based Deployment (NRF Growth) section and updated the recommended actions for the OcnrfSyncFailureFromAllNrfsOfAllRemoteSets alert in Oracle Communications Cloud Native Core, Network Repository Function User Guide.

2 25.1.201
38503503 Notification is processed and pushed to SCP for an NF even if the NF Profile is not registered in the NRF Set

NRF processed profile updates and sent notifications to SCP even when the NF profile was not registered in the relevant NRF Set, which resulted in duplicate NF entries across sets and inconsistencies in discovery responses. The update ensured that profile updates and notifications are allowed only for NFs registered in the appropriate NRF Set, preventing inconsistencies caused by cross-set updates.

Doc Impact:

Updated the configuration details in the Enhanced NRF Set Based Deployment (NRF Growth) section and updated the recommended actions for the OcnrfSyncFailureFromAllNrfsOfAllRemoteSets alert in Oracle Communications Cloud Native Core, Network Repository Function User Guide.

2 25.1.201
38948698 During pod bring up, if Cache Data microservice remote sync fails then the Cache Data microservice pod is not coming up

The Cache Data microservice pod did not reach the "ready" state if remote sync failed due to missing null checks, which led to an exception and prevented readiness. Null checks and improved error handling were added so remote sync failures no longer blocked startup.

Doc Impact:

There is no doc impact.

3 25.2.200
38917051 Despite NF Screening Whitelist, Excluded NFs Eventually Register To NRF

Under specific race conditions, registrations and subscriptions that did not meet NF Screening requirements were accepted. Synchronization in NF Screening logic was corrected to ensure rules are enforced for registration and subscription requests.

Doc Impact:

There is no doc impact.

3 24.3.0
38815866 Expression for OcnrfTotalIngressTraffic minor and major are configured incorrectly in alert file

Alert rules for OcnrfTotalIngressTraffic had misconfigured thresholds, which caused alerts to under- or over-fire. Threshold values were corrected so alerts now trigger and clear only at the appropriate conditions.

Doc Impact:

There is no doc impact.

3 25.2.200
38727285 NRF is rejecting Accesstoken request with inconsistent Error response

NRF gave inconsistent error responses to identical invalid access token requests because error causes were stored without a defined order. Error handling was refactored so that responses are consistent and reflect the same ordering.

Doc Impact:

There is no doc impact.

3 25.2.200
38687349 NRF Grafana chart incompatible with Grafana 7.5.X.X, resulting in no data display

NRF Grafana dashboards used features that were incompatible with Grafana 7.5.x, so no data was shown on panels. The dashboards were updated for compatibility, which restored correct panel data display.

Doc Impact:

There is no doc impact.

3 25.2.200
38645943 SMF Discovery Request With preferred-tai Answered With preferredTaiMatchInd:false Despite All Match

The preferredTaiMatchInd flag was set incorrectly when only SmfInfoList was present in profiles with matching TAI, as the preferred-tai filter did not apply to SmfInfoList objects. Filtering was updated to apply to both SmfInfo and SmfInfoList, so the indicator now accurately signals matches.

Doc Impact:

There is no doc impact.

3 25.1.202
38644186 Change in behavior observed for error code scenario ONRF-SUB-UNKSROP-E0999 different response getting generated by NRF

NRF switched from HTTP 500 to 400 for malformed payloads but still produced non-compliant cause values in the error response. Returned HTTP 400 errors now include the correct "INVALID_MSG_FORMAT" cause value as required by specification.

Doc Impact:

There is no doc impact.

3 25.2.200
38327826 Message copy feature: Access token request generated at EGW towards NRF not being sent to kafka properly

Access token requests from the Egress Gateway microservice were not sent to Kafka due to missing headers, which caused exceptions. The required headers were added to enable proper message forwarding and avoid errors.

Doc Impact:

There is no doc impact.

3 25.1.200
38013947 NRF Overload manager issue / ingress-gateway port 80 is not accessible on TLS deployments

In TLS deployments, the ingress-gateway did not expose port 80, which blocked HTTP-based overload management. A new intraNFCommunicationPort (default 8008) was added for overload manager communication, which removed this requirement.

Doc Impact:

The perf-info.overloadManager.ingressGatewayPort flag is added. For more details, see Customizing NRF in Oracle Communications Cloud Native Core, Installation, Upgrade and Fault Recovery Guide.

3 24.2.2
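Based only on the flag name given in the doc impact above, a hedged values-file fragment might look like the following; the exact key path and structure are inferred, not confirmed against the shipped custom values file:

```yaml
perf-info:
  overloadManager:
    ingressGatewayPort: 8008   # intraNFCommunicationPort default per the fix
```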
37965223 Error Codes are not picked from errorCodeProfile configuration for Pod protection with rate limit

There was a mismatch in configuration modes so error codes for pod protection with rate limiting were not picked from the errorCodeProfile configuration. Modes were synchronized to REST, ensuring error code profiles are now correctly honored.

Doc Impact:

There is no doc impact.

3 25.1.200
37800348 After NRF to Data Director connection is impaired, Egress Gateway microservice traffic becomes heavily impacted

When the link to the Data Director was down, the Kafka buffers filled up, causing the Egress Gateway microservice event loops to stall and traffic to be rejected. A circuit breaker was introduced to prevent stalls and maintain traffic viability during downstream failures.

Doc Impact:

There is no doc impact.

3 24.2.2
37758198 /oauth2/token returns 403 error unless 3gpp-sbi-client-credentials includes 'Bearer'

Token requests missing the "Bearer" prefix in the header were rejected with 403 errors. Validation was relaxed so the "Bearer" prefix is no longer strictly required in headers.

Doc Impact:

There is no doc impact.

3 24.3.0
37428198 Two different overload levels are set at the same time

Overload metrics were published from all pods, not just the leader, which could lead to misleading overload status. A dimension was added in the metrics to indicate the leader pod for accurate overload reporting.

Doc Impact:

The isLeaderPod dimension is added to the overload metrics in Oracle Communications Cloud Native Core, Network Repository Function User Guide.

3 23.4.6
36753250 Able to delete peer in Egress Gateway microservice peer configuration even-though peer is configured in Route configuration via CNC Console

Peer entries could be deleted from Egress Gateway microservice configuration even if present in Route configuration, risking inconsistent state after restart. Validation was added to block deletion if the peer is referenced in Route configuration via CNC Console.

Doc Impact:

There is no doc impact.

3 24.2.0
36627828 NSSF is not able to successfully query SLF via Egress Gateway microservice when oc-alternateroute-attempt header is used, due to incorrect peer selection by Egress Gateway microservice

NRF queries with the oc-alternateroute-attempt header failed because the Egress Gateway microservice selected the wrong peer for SLF forwarding. Peer selection logic was corrected for this header scenario.

Doc Impact:

There is no doc impact.

3 24.1.0
38795270 Incorrect discovery response w.r.t smf slice selection and incorrect value for preferredTaiMatchInd

The preferredTaiMatchInd flag was set to true, even if DNN and TAI matches did not occur within the same slice or entity, resulting in false indications. This behavior is corrected so that the preferredTaiMatchInd flag is now set to true only when both are present within the same SmfInfo entity.

Doc Impact:

There is no doc impact.

3 25.2.200
39141824 DB fall backs observed when Subscription queries CDS service

During periods of high traffic and frequent subscription requests, the Cache Data microservice intermittently threw NullPointerExceptions while processing subscription requests due to missing null checks on cache lookups. As a result, the Subscription microservice unnecessarily reverted to database access, increasing DB fallback metrics and leading to potential request failures. A null check was added to the duplicate-subscription evaluation logic to prevent these errors and ensure smooth handling when cache entries are absent.

Doc Impact:

There is no doc impact.

3 25.2.200
38774470 timestamp is not present in the summary of certain NRF Alerts

Several NRF alert summaries lacked a timestamp, making alerts less informative. Alert rules were updated to include timestamp tokens in alert summaries.

Doc Impact:

The summaries of the following alerts were updated (for more details, see Oracle Communications Cloud Native Core, Network Repository Function User Guide):

  • OcnrfTotalIngressTrafficRateAboveCriticalThreshold
  • OcnrfTotalEgressTrafficRateAboveCriticalThreshold
  • OcnrfTotalForwardingTrafficRateAboveCriticalThreshold
  • OcnrfTotalSLFRateAboveCriticalThreshold
  • OcnrfAccessTokenRequestsAboveThreshold
  • OcnrfNfUpdateRequestsAboveThreshold
  • OcnrfNfHeartBeatRequestsAboveThreshold
  • OcnrfDiscoveryResponseSizeAboveThreshold
  • OcnrfDiscoveryRequestsForUDRAboveThreshold
  • OcnrfDiscoveryRequestsForUDMAboveThreshold
  • OcnrfDiscoveryRequestsForAMFAboveThreshold
  • OcnrfDiscoveryRequestsForSMFAboveThreshold
  • OcnrfRegisteredNfCountAboveThreshold
  • OcnrfTotalSubscriptionsAboveThreshold
4 25.2.200
38715646 detail parameter for Error code return invalid detail (colon :) in response for feature - NRF Error response enhancements

Extra colons appeared in some error response 'detail' fields, causing fields to be misinterpreted when parsing responses. These colons were removed or replaced so responses now have correct message formatting.

Doc Impact:

There is no doc impact.

4 25.2.200
38671036 Incorrect dimension name "SubscriptionIdType" for SLF Request/Response metric

Metrics used the dimension "SubscriptionIdType" instead of the intended "SubscriberIdType." The dimension was renamed for accuracy.

Doc Impact:

There is no doc impact.

4 25.1.102
37920803 Incorrect Error code ONRF-CFG-SLFOPT-E0100 for cause OPTIONAL_IE_INCORRECT for feature - Discovery Parameter Value Based Skip SLF Lookup

NRF sent an incorrect error code for OPTIONAL_IE_INCORRECT when duplicate entries were detected. Code mapping was fixed so the correct code E0008 is now used.

Doc Impact:

There is no doc impact.

4 25.1.100
37916149 Incorrect behaviour for enabling the feature with enableValueBasedSkipSLFLookup:true when valueBasedSkipSLFLookupParams is already configured - Discovery Parameter Value Based Skip SLF Lookup

Enabling valueBasedSkipSLFLookup was blocked when parameters were omitted in the request but present in the database due to a validation logic error. Logic was fixed so the feature can now be enabled if parameters exist in the database.

Doc Impact:

There is no doc impact.

4 25.1.100
37253124 Incorrect Error Code for discovery-Invalid query parameters client-type

The wrong error code was returned for invalid query parameters; Discovery used E0010 where E0003 is required. Code or cause mapping was updated so E0003 and INVALID_QUERY_PARAM are now used correctly for all such attributes.

Doc Impact:

There is no doc impact.

4 24.2.0

4.2.10 Policy Resolved Bugs

Table 4-9 Policy 25.2.201 Resolved Bugs

Bug Number Title Description Severity Found in Release
38789982 Service pending count stress on PCRF Core is higher than usual causing multiple overload level 1 trigger on site 1 and 2, level 3 triggered on site1

During a performance test in a two-site georedundant setup (15 KTPS per site), the service pending count stress on PCRF Core was higher than expected and triggered multiple overload Level 1 events on Site 1.

Doc Impact:

There is no doc impact.

2 25.2.201
38823777 For the UE service the congestion L1/L2/L3 thresholds are configured without any gap, which causes very rapid level fluctuations.

UE service congestion L1/L2/L3 thresholds were configured without sufficient separation, which caused rapid and unexpected congestion level fluctuations during load switching.

Doc Impact:

Updated UE Service Congestion States in "UE Service Pod Congestion Control" section in Oracle Communications Cloud Native Core, Converged Policy User Guide.

2 25.2.201
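The fix above separates the congestion levels. As a generic illustration of why a gap matters (this is not Oracle's implementation; the thresholds, level numbering, and gap value here are hypothetical), a hysteresis gap keeps a level from flapping when load hovers at a boundary:

```python
# Illustrative sketch (not Oracle code): classifying load into congestion
# levels L0-L3. The threshold values and hysteresis gap are hypothetical.
THRESHOLDS = [(3, 90), (2, 75), (1, 60)]  # (level, enter-threshold %)
HYSTERESIS = 5  # gap (%) the load must clear before dropping a level

def congestion_level(load, current_level=0):
    """Return the new congestion level for one load sample.

    A level is entered at its threshold but is only left once the load
    falls HYSTERESIS points below it, which prevents rapid fluctuation
    when the load hovers near a boundary.
    """
    for level, enter in THRESHOLDS:
        if load >= enter:
            return level
        # Stay at the current level until load clears the gap below it.
        if current_level == level and load >= enter - HYSTERESIS:
            return level
    return 0

# A load hovering around 75% flaps between L1 and L2 without the gap,
# but holds steady at L2 with it.
samples = [76, 74, 76, 73, 69]
level = 0
history = []
for s in samples:
    level = congestion_level(s, level)
    history.append(level)
```

Without the gap the same samples would produce levels 2, 1, 2, 1, 1; with it, the level stays at 2 until the load clearly drops.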
38293751 Policy Table is missing a row in CNCC GUI

When the policy_tables_check script was used, the Policy Table tMedia_039 row count in CNC Console was lower than expected for an SM-PCF testbed site (4 rows displayed instead of 5).

Doc Impact:

There is no doc impact.

2 23.4.6
38272439 Prod SM-PCF TW East 002 SM-Create Failure

SM-Create failed in production after the site was brought back into service.

Doc Impact:

There is no doc impact.

2 23.4.6
38848886 Cluster disconnect was observed when horizontal scaling was performed for ndbappmysqld pods during database upgrade from 25.2.100 to 25.2.101

In a two-site georedundant setup, a cluster disconnect was observed during a DB upgrade from 25.2.100 to 25.2.101 when the ndbappmysqld pods were horizontally scaled from 2 to 4 in the target release; the disconnect occurred due to ndbmtd pod restarts.

Doc Impact:

There is no doc impact.

2 25.2.201
38412963 After GeoReplication Recovery (GRR), database entries are not in sync in Site-1 and Site-2 (failed Site)

In an SM call model run (~7.5K TPS) on Site 1 while Site 2 was marked failed with no traffic, database entries became out of sync between Site 1 and Site 2 after GeoReplication Recovery (GRR) was performed.

Doc Impact:

There is no doc impact.

2 25.1.200
38769920 Infrastructure validation failed for minViablePath feature during upgrade scenario for ocnrf

When infraValidateEnabled was enabled and an incompatible minViablePath value was provided, the infrastructure validation did not detect the mismatch and successfully completed the app-info-infra-check-upgrade hook.

An attempt was made to upgrade OCNRF from ocnrf-25.1.100 to ocnrf-25.2.200 with minViablePath: 25.2.100.

The upgrade was expected to fail because the source (installed) version was 25.1.100, while the minViablePath specified in the upgrade file was 25.2.100.

However, the infrastructure validation succeeded and the upgrade completed to the target version 25.2.201.

Doc Impact:

There is no doc impact.

2 25.2.201
38776509 PCF using expired token

One of the two PCF Egress Gateway pods used an expired token and did not refresh it, which caused call failures.

Doc Impact:

There is no doc impact.

2 25.1.201
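The bug above describes using a token until it has already expired. As a generic sketch of the underlying pattern (not PCF code; TokenCache, fetch_token, and the margin value are hypothetical), a client can refresh a token shortly before expiry instead of carrying a stale one into a request:

```python
# Illustrative sketch (not PCF code): refreshing an access token before
# it expires instead of using it until a request fails.
import time

REFRESH_MARGIN = 30  # refresh this many seconds before actual expiry

class TokenCache:
    def __init__(self, fetch_token):
        self._fetch = fetch_token   # callable returning (token, lifetime_s)
        self._token = None
        self._expires_at = 0.0

    def get(self, now=None):
        now = time.time() if now is None else now
        # Treat the token as stale slightly early so an in-flight request
        # never carries a token that expires mid-call.
        if self._token is None or now >= self._expires_at - REFRESH_MARGIN:
            self._token, lifetime = self._fetch()
            self._expires_at = now + lifetime
        return self._token
```

The margin is a design choice: it trades a slightly earlier refresh for never presenting a token that could expire while the request is in flight.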
38726006 During Geo Redundancy site isolation, continuous duplicate entry errors for key pdssubscriber.PRIMARY were reported in PDS, causing signaling messages to fail with 409 Conflict.

A duplicate entry for the pdssubscriber.PRIMARY key was observed in PDS. As a result, PDS returned 409 Conflict responses to SM, and signaling messages failed. The issue started after traffic from site 2 was moved to site 1.

Doc Impact:

There is no doc impact.

2 25.2.201
37290279 High subs fall back count trend post Diameter Gateway - PRE - policyds Pod restarts

A high subscriber fallback count trend was observed after the Diameter Gateway, PRE, and PDS pods restarted.

Doc Impact:

There is no doc impact.

2 23.4.7
38926869 Unable to add a new and/or copy Threshold Profile in Overload Control Threshold screen

Users were unable to add a new threshold profile or copy an existing threshold profile in the Overload Control Threshold screen.

Doc Impact:

Updated the "Overload Control Threshold Profiles" section in Oracle Communications Cloud Native Core, Converged Policy REST Specification Guide.

2 25.2.201
39085112 Congestion control for diam-connector is getting triggered at 55K TPS with default thresholds

Congestion control for the Diameter Connector was triggered at 55K TPS when default thresholds were used.

Doc Impact:

There is no doc impact.

2 25.2.201
39123996 UDR patch request failed on GR site-2 Egress Gateway with error 3gpp-sbi-target-apiroot header is missing on minimal performance run of 1K TPS on Dual Stack setup

During a minimal performance run (1K TPS) on a dual-stack setup, UDR PATCH requests intermittently failed on the GR site-2 Egress Gateway because the 3gpp-sbi-target-apiroot header was missing. In the observed traffic pattern, most UDR GET and subscription operations were handled by both sites, and 100% of the last session terminations occurred on site 2. Approximately 50% of PATCH requests were rejected due to the Egress Gateway error.

Doc Impact:

There is no doc impact.

2 25.2.201
38910464 PCF not sending Event Triggers in notification message towards SMF in continuous call testing

During continuous call testing, PCF did not send event triggers in notification messages to the SMF.

Doc Impact:

There is no doc impact.

3 24.2.9
38956986 Query service specific metrics are missing in Prometheus once we upgrade to 25.2.201 from 25.2.100 due to missing cnc-metrics specific container port definition

During an in-service upgrade from 25.2.100 to 25.2.201, query-service metrics (for example, log4j2_events_total) did not appear in Prometheus even though actuator metrics were available inside the pod. Investigation showed that the cnc-metrics containerPort definition was missing after the Policy upgrade.

Doc Impact:

There is no doc impact.

3 25.2.201
38757354 NullPointerException (NPE) is seen in audit service during in service upgrade from 25.2.100 to 25.2.201

During an in-service upgrade from 25.2.100 to 25.2.201, a Java NullPointerException (NPE) was observed in the audit service.

Doc Impact:

There is no doc impact.

3 25.2.201
38729782 Ambiguity on namespace label on PCF /PCRF alertrule yaml file.

An ambiguity was identified in the namespace label in the PCF/PCRF alert rule YAML file.

Doc Impact:

Updated the expression and description of the alerts for which the extra space was removed in the PCF_Alertrule.yaml file. For more details, see the "Alerts" section in Oracle Communications Cloud Native Core, Binding Support Function User Guide.

3 25.2.100
38794573 PCF choosing SUSPENDED BSF instead of the REGISTERED BSF in case of Alternate route

When two BSFs had the same priority, retries were routed correctly (first to the primary BSF, then to the alternate on failure). When priorities differed (for example, 1 and 3), the first two retries were routed to the primary BSF and the third retry was routed to the alternate BSF; this behavior was observed and required clarification.

Doc Impact:

There is no doc impact.

3 24.2.3
38792935 NullPointerException (NPE) was seen in policyds with ERROR "Cannot invoke "java.util.List.forEach(java.util.function.Consumer)" because the return value of "ocpm.uds.policyds.db.entity.Pdsprofile.getPdsProfileData()" is null"

During an in-service upgrade test for the SM call model, multiple NullPointerExceptions were observed in the PDS microservice, caused by null PDS profile data.

Doc Impact:

There is no doc impact.

3 25.2.201
38837754 UE service exposing too much data in the log, which can potentially flood the log and increase CPU usage

After overload control and congestion control were enabled, the UE service logged the full HTTP request payload, AMF transaction context, and user identifiers at WARN level, which increased log volume and CPU usage.

Doc Impact:

There is no doc impact.

3 25.2.201
38770246 PA Create without MediaSubComponents causes null pointer exception on SMF Update Notify

When a PA Create request included a mediaComponent but no mediaSubComponent, a NullPointerException occurred during SMF update notification (reauth processing). The request failed silently on SMF processing while the AppSession was still saved and the PA Create response succeeded, which could have left a stale AppSession not linked to AppSessionInfo.

Doc Impact:

There is no doc impact.

3 25.1.201
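The NullPointerException above came from assuming an optional nested list was present. A generic sketch of the defensive pattern involved (not PCF source; collect_flows and the processing logic are hypothetical, and only the optional medSubComps field is taken from the N5 data model):

```python
# Illustrative sketch (not PCF code): guarding against an absent optional
# list so processing degrades gracefully instead of failing with a
# NullPointerException-style error deep in reauth handling.
def collect_flows(app_session):
    flows = []
    for comp in (app_session.get("medComponents") or {}).values():
        # medSubComps is optional in the data model; default to an empty
        # dict instead of assuming it is present.
        for sub in (comp.get("medSubComps") or {}).values():
            flows.extend(sub.get("fDescs", []))
    return flows
```

A session carrying a mediaComponent but no mediaSubComponent then simply yields an empty flow list rather than an exception.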
38785709 PCF not responding to N5 CREATE when MSC is not populated

The PCF did not respond to an N5 CREATE request when MSC was not populated.

Doc Impact:

There is no doc impact.

3 25.1.200
38826644 SM service throwing NPE when binding service is responding with 503

A NullPointerException was observed in the session management service when the binding service returned an HTTP 503 response (as indicated by the referenced ocLogID).

Doc Impact:

There is no doc impact.

3 25.2.201
38621311 SM Service update is successful even after failure to get lock from Bulwark due to congestion

The SM Service update operation reported success even when it failed to obtain a lock from Bulwark due to congestion.

Doc Impact:

There is no doc impact.

3 25.2.100
38468471 Metric occnp_current_active_sponsored_sessions does not reset/decrease on N5 session termination

The occnp_current_active_sponsored_sessions metric did not reset or decrease after N5 session termination.

3 25.2.100
38228820 In UE Create Request, PCF is accepting notification URI without a URI scheme.

In a UE Create request, the PCF accepted a notification URI that did not include a URI scheme.

Doc Impact:

There is no doc impact.

3 25.1.2X
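The resolved bug above is a missing-scheme check on the notification URI. A minimal sketch of such a validation (not PCF code; the function name and the accepted scheme list are assumptions for illustration):

```python
# Illustrative sketch (not PCF code): rejecting a notification URI that
# lacks an explicit scheme, which the resolved bug required.
from urllib.parse import urlparse

def valid_notification_uri(uri):
    parsed = urlparse(uri)
    # Require an explicit http/https scheme and a host part; a bare
    # "host/path" value parses with an empty scheme and must be rejected.
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```

A value such as "callback.example/notify" parses with an empty scheme and is rejected, while "http://callback.example/notify" passes.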
36573694 During execution of 60K TPS in the AM and UE call model, memory utilization of UE was almost 90% or more throughout the run

During execution at 60K TPS in the AM and UE call model, UE memory utilization remained at approximately 90% or higher throughout the run.

Doc Impact:

There is no doc impact.

3 24.1.0
38700752 errorStatus is present in Diameter-related enhanced log in Diameter Gateway though it is designated for HTTP-related enhanced log

The errorStatus field appeared in Diameter-related enhanced logs in the Diameter Gateway, even though it was intended only for HTTP enhanced logs.

Doc Impact:

There is no doc impact.

3 25.2.100
38536654 PCF successfully processing invalid usgThres and also sent update of same to SMF

The PCF processed an invalid usgThres value and also sent an update containing the invalid value to the SMF.

Doc Impact:

There is no doc impact.

3 25.2.100
38536517 In case the sponId/aspId attribute is null/blank (semantically incorrect value), PCF rejects the request with wrong error cause

When the sponId/aspId attribute was null or blank (a semantically invalid value), the PCF rejected the request with an incorrect error cause.

Doc Impact:

There is no doc impact.

3 25.2.100
38660513 Diam-connector: new ObjectMapper object creation on every request (SLR-Initial/Intermediate and STR) received by diam-connector

The Diameter Connector created a new ObjectMapper instance for every request it received (SLR-Initial, SLR-Intermediate, and STR).

Doc Impact:

There is no doc impact.

3 23.4.6
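Jackson's ObjectMapper is thread-safe after configuration and is intended to be built once and reused, so per-request construction wastes CPU. As a generic sketch of the fix pattern in a different language (not the connector's code; json.JSONEncoder stands in for ObjectMapper here):

```python
# Illustrative sketch (not the connector's code): build one reusable
# mapper at startup instead of one per request. Construction is the
# expensive step; encoding per request reuses the shared instance.
import json

# Built once at module load.
_ENCODER = json.JSONEncoder(separators=(",", ":"), sort_keys=True)

def encode_request(payload):
    # Per-request work reuses the shared, thread-safe instance.
    return _ENCODER.encode(payload)
```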
38524495 No servers available for service: pcfd-ltn-occnp-binding.

The system reported that no servers were available for the pcfd-ltn-occnp-binding service.

Doc Impact:

There is no doc impact.

3 24.2.0
38509828 No 'Error Response Enhancement' sent for PA request failure due to Sponsored Data Connectivity not supported on PDU session

No Error Response Enhancement was sent when a PA request failed because Sponsored Data Connectivity was not supported on the PDU session.

Doc Impact:

There is no doc impact.

3 25.1.200
38505954 Lab Testing -Issue observed during NEF- PCF Integration

During NEF–PCF integration testing, issues were observed in three scenarios: PATCH updates returned 504 responses, DELETE requests returned 415 responses, and session-termination notifications were not sent after the UE detached (no termination notification was observed following the SMF-to-PCF termination message).

Doc Impact:

There is no doc impact.

3 23.4.6
38544360 During PA update, sm-service not sending afSuppFeat list to PRE for evaluation

During a PA update, when the AF sent a PATCH request without suppFeat, the PCF did not send the afSuppFeat list to the PRE. As a result, policy evaluation failed when the policy logic required that attribute. For PATCH requests, the microservice was expected to send the complete session details to the PRE for evaluation, including attributes provided in the initial request but not included in the update.

Doc Impact:

There is no doc impact.

3 25.2.100
38551221 Partial success when performing bulk export on new PCF sets

Bulk export operations on new PCF sets resulted in partial success.

Doc Impact:

There is no doc impact.

3 24.2.3
38552120 While bulk exporting, CM Service is reporting "Can't find config Item" for numerous attributes, resulting in 404, but the CM UI says 100% SUCCESS in a freshly installed setup

During bulk export in a freshly installed setup, the CM service reported "Can't find config Item" for multiple attributes and returned 404 errors, while the CM UI still indicated 100% SUCCESS.

Doc Impact:

There is no doc impact.

3 25.2.100
38488238 During performance test call failures have been observed for short span

During a performance test (15 KTPS per site in a two-site GR setup), a brief dip in success percentage was observed during a short time window on site 1.

Doc Impact:

There is no doc impact.

3 25.2.100
38487770 Execution of traffic > 70K leads to intermittent ERROR Seen "Evaluate Policy failed with 500 INTERNAL_SERVER_ERROR"

When traffic exceeded 70K TPS (approximately 70K–80K TPS in the AM call model on site 1), intermittent errors were observed where policy evaluation failed with 500 INTERNAL_SERVER_ERROR.

Doc Impact:

There is no doc impact.

3 25.2.100
38521721 NullPointerException observed on binding service after rollback from 25.2.100 to 25.1.200

A NullPointerException was observed in the Binding service after a rollback from 25.2.100 to 25.1.200.

Doc Impact:

There is no doc impact.

3 25.2.100
38470154 Response code received from back-end service does not match errorCodeSeries configured

The response code returned by the back-end service did not match the configured errorCodeSeries.

Doc Impact:

Updated the Route Level Mapping section in the User Guide to include details of the Failure Request Count Profile field. For more information, see the SBI section in Oracle Communications Cloud Native Core, Converged Policy User Guide.

3 24.2.7
38452194 PCF processed PA request successfully while the sponStatus attribute was missing in the PA request, which should be treated as enabled as per 29.514

PCF processed the PA request successfully even though the sponStatus attribute was missing; the request should have been treated as enabled per 3GPP TS 29.514.

Doc Impact:

There is no doc impact.

3 25.2.100
38463371 PCF respond with incorrect Feature negotiation for PA create when requested suppFeat list matches with configured feature in PA config

PCF returned incorrect feature negotiation for PA creation when the requested suppFeat list matched the configured feature in the PA configuration.

Doc Impact:

Added a note for the "Override Supported Features" field that if any supported feature is enabled in one site, then the same feature needs to be enabled on all other sites at the same time. For more information, see the PCF Policy Authorization section in Oracle Communications Cloud Native Core, Converged Policy User Guide.

3 25.2.100
38443405 occnp_oc_ingressgateway_http_responses_total metric not getting incremented

The occnp_oc_ingressgateway_http_responses_total metric was not incremented as expected.

Doc Impact:

There is no doc impact.

3 24.2.7
38452105 PA request rejected by PCF when PA create request contains sponStatus-SPONSOR_DISABLED along with sponId, aspId

PCF rejected the PA request when the PA create request included sponStatus=SPONSOR_DISABLED along with sponId and aspId.

Doc Impact:

There is no doc impact.

3 25.2.100
38467499 PCF-SM doesn't validate notifUri syntax for N5 PA create request and prompt error while using same

PCF-SM did not validate the notifUri syntax for an N5 PA create request and returned an error when the URI was used.

Doc Impact:

There is no doc impact.

3 25.2.100
38465680 Post N7 session clean-up for subscriber, N5 sessions AppSession & dependentcontextbinding stuck as stale in DB

After N7 session cleanup for a subscriber, the N5 session records (AppSession and dependentcontextbinding) remained stale in the database.

Doc Impact:

There is no doc impact.

3 25.2.100
38478348 Match list configuration fails to import when an existing configuration is present in the database and the "replace" option is used

The match list configuration import failed when an existing configuration was present in the database and the replace option was used.

Doc Impact:

There is no doc impact.

3 25.2.100
38419577 Usage-Mon Service is registering with Audit for all configured slicing tables without configuring the ENABLE_UM_CONTEXT_TABLE_SLICING advanced settings after audit re-enable from usage-mon service

After audit was reenabled from the Usage-Mon service, the Usage-Mon service registered with Audit for all configured slicing tables even though the ENABLE_UM_CONTEXT_TABLE_SLICING advanced setting had not been configured.

Doc Impact:

There is no doc impact.

3 25.2.100
38706590 Diam connector enhanced warn log related to STA does not have the STR ocLogId; instead it has the AAR ocLogId

In the Diameter connector enhanced WARN logs for STA, the STR ocLogId was missing and the AAR ocLogId was logged instead.

Doc Impact:

There is no doc impact.

3 25.2.100
38624407 For Diam error 5005 "msgType" is wrong in DIAM GW logs when enhanced logging is enabled

When enhanced logging was enabled, the Diameter Gateway logs displayed an incorrect msgType for Diameter error 5005.

Doc Impact:

There is no doc impact.

3 25.2.100
38311399 Missing logs due to "Rejected by OpenSearch" error

Logs were missing because events were rejected with the “Rejected by OpenSearch” error.

Doc Impact:

There is no doc impact.

3 24.2.4
38284360 PRA test getting Exception as Object is not a function

The Presence Reporting Area (PRA) test failed with an exception indicating that an object was not a function.

Doc Impact:

There is no doc impact.

3 24.2.5
38266755 N1 message delivery not getting aborted if PRE policy defined for T3501 Timer Expiry

N1 message delivery was not aborted when a PRE policy was defined for T3501 timer expiry.

Doc Impact:

There is no doc impact.

3 25.1.200
38306777 App info and Perf info configuration rollback does not work using FluxCD when it upgrades to a version it was deployed earlier

When FluxCD upgraded to a version that had been deployed previously, rollback of the App Info and Perf Info configuration did not work.

Doc Impact:

There is no doc impact.

3 25.1.2X
38331560 "Binding Audit: Revalidation successfully restored the missing binding" WARN raised even when the session was available in the BSF database

The warning “Binding Audit: Revalidation successfully restored the missing binding” was raised even though the session was already available in the BSF database.

Doc Impact:

There is no doc impact.

3 25.2.100
38333657 Some of the DBTier metrics are not pegging after rollback of DBTier from 25.2.100 to 25.1.200

After rolling back cnDBTier from 25.2.100 to 25.1.200, some cnDBTier metrics did not report (peg) as expected.

Doc Impact:

There is no doc impact.

3 25.2.100
38379128 Diameter Connector Configuration is successful with invalid parameter value for ratTypeNRRedCapEnabled parameter

Diameter Connector configuration succeeded even when an invalid value was provided for the ratTypeNRRedCapEnabled parameter.

Doc Impact:

There is no doc impact.

3 25.2.100
38423823 Exception observed post upgrade in audit service with error "Exception caught while running AuditTask, breaking out of audit loop"

After an upgrade, the Audit service threw an exception with the message: "Exception caught while running AuditTask, breaking out of audit loop."

Doc Impact:

There is no doc impact.

3 25.2.100
38413430 Audit fails to register for AmPolicyAssociation_N tables when table slicing is enabled from advanced settings after the upgrade

After an upgrade, Audit registration failed for AmPolicyAssociation_N tables when table slicing was enabled in Advanced Settings.

Doc Impact:

There is no doc impact.

3 25.2.100
38413904 Audit table registering the usage-mon tables when Audit disabled on Usage-Mon screen after the upgrade from 25.1.200 to 25.2.100

After an upgrade from 25.1.200 to 25.2.100, Audit table registration included usage-mon tables even though Audit was disabled on the Usage-Mon screen.

Doc Impact:

There is no doc impact.

3 25.2.100
38418975 diam-conn Returns 3004 Instead of Configured 3005 During POD Congestion

During pod congestion, diam-conn returned 3004 instead of the configured 3005.

Doc Impact:

There is no doc impact.

3 25.2.100
38413707 Audit fails to register for UePolicyAssociation_N tables when table slicing is enabled from advanced settings after the upgrade

After an upgrade, Audit registration failed for UePolicyAssociation_N tables when table slicing was enabled in Advanced Settings.

Doc Impact:

There is no doc impact.

3 25.2.100
38351513 ObjectOptimisticLockFailureException seen in binding service due to session limit and CleanUp API concurrency

The Binding service reported ObjectOptimisticLockFailureException due to the session limit and concurrent CleanUp API requests.

Doc Impact:

There is no doc impact.

3 25.2.100
38359341 PA Subscribe with UMC data for explicit subscription is responded with 204 No Content instead of 201 Created.

A PA Subscribe request with UMC data for an explicit subscription returned 204 No Content instead of 201 Created.

Doc Impact:

There is no doc impact.

3 25.2.100
38365186 Policy does not expose containerPortNames

The Policy custom values 25.1.100 YAML did not expose containerPortName, which was required to provision backendPortName in CNLB annotations, increasing manual effort.

Doc Impact:

Documented details of containerPortName. For more information, see the "Service and Container Port Configuration" section in Oracle Communications Cloud Native Core, Converged Policy Installation, Upgrade, and Fault Recovery Guide.

3 25.2.100
38388572 Metric ocpm_late_arrival_rejection_total for PA create Late arrival not updating with correct dimensions

The ocpm_late_arrival_rejection_total metric for late-arrival PA creation did not update with the correct dimensions.

Doc Impact:

There is no doc impact.

3 25.2.100
38593199 msgType is printing wrong in DIAM GW logs when enhanced logging is enabled

When enhanced logging was enabled, msgType was logged incorrectly in Diameter Gateway logs.

Doc Impact:

There is no doc impact.

3 25.2.100
38591442 "sender" and "receiver" fields in enhanced warn log are not proper in binding microservice pod

In the binding microservice pod, the enhanced WARN log contained incorrect sender and receiver fields.

Doc Impact:

There is no doc impact.

3 25.2.100
38591494 Stack trace is printing in warn LOG LEVEL in diam-connector pod log but for the same call other microservices are printing it in ERROR log level

For the same call flow, the Diameter Connector pod logged the stack trace at WARN level, while other microservices logged it at ERROR level.

Doc Impact:

Added a note related to generation of ocLogId by Ingress Gateway and Diameter Gateway services in "Support for End-to-End Log Identifier Across Policy Services" section in Oracle Communications Cloud Native Core, Converged Policy User Guide.

3 25.2.100
38593133 Stack trace is printing in warn LOG LEVEL in diam-gateway pod log level

The diam-gateway pod printed stack traces at WARN level instead of the expected ERROR level.

Doc Impact:

Added a note related to generation of ocLogId by Ingress Gateway and Diameter Gateway services in "Support for End-to-End Log Identifier Across Policy Services" section in Oracle Communications Cloud Native Core, Converged Policy User Guide.

3 25.2.100
38609377 Diam Connector does not have proper enhanced warn log information such as msg type, sender, receiver, and ocLogId for Diameter error code 5001

For Diameter error code 5001, the Diam Connector enhanced WARN logs did not include complete details (for example, message type, sender, receiver, and OC log ID).

Doc Impact:

There is no doc impact.

3 25.2.100
38610051 Binding Svc missing Parameter envOathAccessTokenType Causing Binding Error 500

The Binding service was missing the envOathAccessTokenType parameter, which caused a binding HTTP 500 error.

Doc Impact:

The "binding.envOathAccessTokenType" parameter is added to "OAuth Configuration" section in Oracle Communications Cloud Native Core, Converged Policy Installation, Upgrade, and Fault Recovery Guide.

3 25.1.200
38611899 Message interaction towards PDS is not same for AM & UE call when Query on update flag is true, GET PDS/user-data message is triggered for UE call but not for AM call

When the “Query on update” flag was set to true, UE calls triggered a GET PDS/user-data request, but AM calls did not, resulting in inconsistent PDS interactions.

Doc Impact:

There is no doc impact.

3 25.2.100
38593569 Diam gateway pod does not contain logs in WARN/ERROR for Diameter code 5009 which was sent from PCRF

The Diameter Gateway pod did not log WARN/ERROR entries for DIAM code 5009 sent from PCRF.

Doc Impact:

There is no doc impact.

3 25.2.100
38685977 NRF-Management pod Reporting WARN log "Possibly consider using a shorter maxLifetime value" at Higher TPS.

At higher TPS, the NRF-Management pod reported the WARN message: “Possibly consider using a shorter maxLifetime value.”

Doc Impact:

There is no doc impact.

3 25.2.101
38727610 PCF PRE logs TypeError "Cannot read properties of undefined (reading 'presenceState')" when repPraInfos key is missing

When the repPraInfos key was missing, PCF PRE logged a TypeError indicating that presenceState could not be read from undefined.

Doc Impact:

There is no doc impact.

3 25.1.200
38669709 NULL Pointer exception in Bulkimport after configuring SM service configuration via REST API

After configuring the SM service using the REST API, Bulk Import encountered a null pointer exception.

Doc Impact:

There is no doc impact.

3 25.2.100
38669424 Message profiles are not getting deleted; observing 404 path not found

Message profiles were not deleted, and a “404 path not found” error was returned.

Doc Impact:

There is no doc impact.

3 25.2.100
38641913 PA is getting created with an invalid/wrong domain ID not present in binding data

A PA was created even when the domain ID was invalid or not present in the binding data.

Doc Impact:

There is no doc impact.

3 25.2.100
38710016 db-monitor-svc unable to fetch metric displaying a replication break on site 1 Grafana

The db-monitor-svc could not fetch metrics, and Grafana showed a replication break on site 1.

Doc Impact:

There is no doc impact.

3 25.1.201
38696463 NPE is seen in UDR-conn once in SM performance run with receivedHeaders is null

During an SM performance run, UDR Connector encountered an NPE when receivedHeaders was null.

Doc Impact:

There is no doc impact.

3 25.2.201
38883054 svc_pending_count (service_resource_stress metric) getting stuck when the perf info pods get restarted while the traffic is running

While traffic was running, svc_pending_count (service_resource_stress metric) became stuck when perf-info pods restarted.

Doc Impact:

There is no doc impact.

3 25.1.200
38860626 The changes performed in the JSON file for the PA Update Request are not taking effect

Updates made in the PA Update Request JSON file did not take effect.

Doc Impact:

There is no doc impact.

3 25.2.100
37099406 Error "Got temporary error 245 'Too many active scans, increase MaxNoOfConcurrentScans' from NDBCLUSTER" observed on Binding while running 43K new call model

During binding at 43K new call model load, NDBCLUSTER returned temporary error 245 indicating too many active scans and recommending an increase to MaxNoOfConcurrentScans.

Doc Impact:

There is no doc impact.

3 24.2.1
38998897 Congestion Control migration: Activities

After upgrading from Policy 24.2.7 to 25.1.200, migrating congestion control configuration failed with the message “migration of old congestion data of diameter-gateway failed,” even after updating the CONGESTED value to “8.”

Doc Impact:

There is no doc impact.

3 25.1.200
38435584 occnp-ocpm-cm-service pod logs error

After upgrading directly to 25.1.200, the occnp-ocpm-cm-service pod logged an error indicating that congestion migration details were not found.

Doc Impact:

There is no doc impact.

3 25.1.200
38510869 Exception entries are increasing continuously in SmPolicyAssociation all slices and pdssubscriber exception tables after upgrading DB to occndbtier-25.2.100-rc.4 / occndbtier-25.2.100-rc.3

After upgrading the DB to occndbtier-25.2.100, exception entries increased continuously in SmPolicyAssociation (all slices) and pdssubscriber exception tables.

Doc Impact:

There is no doc impact.

3 25.2.100
38826683 Remove the license information and other irrelevant information from REST documentation

REST documentation displayed license and other irrelevant information that was not maintained and was not applicable.

Doc Impact:

Removed the details from Oracle Communications Cloud Native Core, Converged Policy REST Specification Guide.

3 25.2.201
38341175 After bulk import some of the policy projects have no policy blocks (blank policy)

After a bulk import, some policy projects were blank and contained no policy blocks.

Doc Impact:

There is no doc impact.

3 25.1.200
38969993 Executing 54K TPS Bulwark services Exception Triggered in Between

During a 54K TPS Bulwark services run, exceptions were triggered intermittently.

Doc Impact:

There is no doc impact.

3 25.1.202
38607080 During execution of ~54K traffic for longevity, frequent error "PDS request failed. 500 for UserData" was observed throughout the execution in AM-Svc pods

During ~54K longevity traffic, AM-Svc pods frequently logged “PDS request failed. 500 for UserData.”

Doc Impact:

There is no doc impact.

3 25.1.201
38591264 UDR connector pod having frequent WARN logs "cpu congestion state changed" although the service had not reached the congestion level

The UDR connector pod frequently logged “cpu congestion state changed” even though the service had not reached the configured congestion level.

Doc Impact:

There is no doc impact.

3 25.2.100
37988273 INFO level logs are visible in query service even if the logging level is set to WARN

The query service still produced INFO-level logs even when the logging level was set to WARN.

Doc Impact:

There is no doc impact.

3 25.1.200
38457336 Observed that mysql-cluster-db-backup-manager-svc pod restart during Voice Call Model performance test (15 KTPS each site on a 2 Site GR setup)

During a Voice Call Model performance test (15 KTPS per site on a 2-site GR setup), the mysql-cluster-db-backup-manager-svc pod restarted.

Doc Impact:

There is no doc impact.

3 25.2.100
38609347 Diam Gateway does not have enhanced warn log information for Diameter error code 5001 generated by diam connector.

For Diameter error code 5001 generated by the diam connector, the Diam Gateway did not provide enhanced WARN log details.

Doc Impact:

There is no doc impact.

3 25.2.100
38844133 Update overload threshold values for AM, UE, SM, PCRF-Core, Diam-Connector

Overload threshold values were updated for AM, UE, SM, PCRF-Core, and Diam-Connector.

Doc Impact:

Updated the overload threshold values in "Overload Control" section in Oracle Communications Cloud Native Core, Converged Policy User Guide.

3 25.2.201
38360839 PCF is not generating Correct cause code for "unsupported_Media_Type" [415] for Handling: Error State and Error Cause- Code Mapping

For “unsupported_Media_Type”, PCF did not generate the correct cause code for error-state and error-cause mapping.

Doc Impact:

There is no doc impact.

3 25.1.200
38626520 For Diam error 5014 DIAM GW logs are printing NPE and stack trace on different level when enhanced logging is enabled

With enhanced logging enabled, for Diameter error 5014 the Diameter Gateway logs printed an NPE and stack trace at inconsistent log levels.

Doc Impact:

There is no doc impact.

3 25.2.100
38626548 For Diam error 5014 DIAM GW logs, ocpm.pcf.util.LoggingUtils IS NOT PRESENT with enhance logging present

With enhanced logging enabled, for Diameter error 5014 the Diameter Gateway logs did not include ocpm.pcf.util.LoggingUtils.

Doc Impact:

There is no doc impact.

3 25.2.100
38246466 During performance test observed pod restart of Diam-connector service

During performance testing, the Diameter Connector service pod restarted.

Doc Impact:

There is no doc impact.

3 25.1.200
38193976 Contradictory observation for metric ue_n1_policy_consolidation_total with or without RAB configuration

The ue_n1_policy_consolidation_total metric showed contradictory behavior depending on whether RAB configuration was present.

Doc Impact:

There is no doc impact.

3 25.1.200
37977198 No subscriber tracing observing for new Binding Audit: Revalidation messages.

Subscriber tracing was not observed for the new Binding Audit revalidation messages.

Doc Impact:

There is no doc impact.

3 25.1.200
37983111 Header "subscriber-logging" and "table size update" are missing on binding restoration POST message.

The subscriber-logging and table size update headers were missing from the binding restoration POST request.

Doc Impact:

There is no doc impact.

3 25.1.200
38170881 Policy-ds service pre/post upgrade jobs taking longer time to comeup when upgrading from 24.2.5 to 25.1.200-rc.1

When upgrading from 24.2.5 to 25.1.200, the policy-ds (PDS) pre- and post-upgrade jobs took longer than expected to come up.

Doc Impact:

There is no doc impact.

3 25.1.200
37925417 In Prometheus for the DIAM Gateway, the diam_reject_total metric does not differentiate between specific request types

In Prometheus, the Diameter Gateway diam_reject_total metric did not differentiate among specific request types.

Doc Impact:

There is no doc impact.

3 25.1.2X
37836099 cnPCRF Interaction on CID Handling During UMDataLimit Updates

During UMDataLimit updates, cnPCRF CID handling interactions did not behave as expected.

Doc Impact:

There is no doc impact.

3 22.4.7
37546813 PCRF Core setting configuration Sample json contains invalid key values for audit and mcptt

The PCRF Core settings sample JSON contained invalid key values for audit and mcptt.

Doc Impact:

Updated the "PCRF Core Service" section in Oracle communications Cloud Native Core, Converged Policy REST Specification Guide.

3 25.1.200
37584831 Peer node created with dot "." char not allowed to use in Peer Node Sets by GUI

When a peer node name included a dot (for example, A.COM), it appeared in the Peer Node Set dropdown but could not be selected in CNC Console.

Doc Impact:

There is no doc impact.

3 24.3.0
37578767 Data Limit Profiles allows same plan to use as parent plan during edit causing import failure with such configuration

While editing Data Limit Profiles, the same plan could be selected as its own parent, which caused import failures for that configuration.

Doc Impact:

There is no doc impact.

3 24.3.0
37638632 AM service not sending request to User Service while query on UDR is enabled (Occasional issue- Configuration corruption)

With UDR query enabled, the AM service occasionally did not send requests to the User Service due to configuration corruption.

Doc Impact:

There is no doc impact.

3 24.2.4
38114254 PCF is not setting pcfIpEndPoints in register request towards BSF

PCF did not set pcfIpEndPoints in the register request sent to BSF.

Doc Impact:

Added "BINDING.REGISTRATION.INCLUDE.IP_ENDPOINTS" and "BINDING.REGISTRATION.INCLUDE.IP_ENDPOINTS.SERVICE_NAM" Advanced Settings keys to "PCF Session Management" section in Oracle Communications Cloud Native Core, Converged Policy User Guide.

3 23.4.6
37970913 Incorrect PDS workflow url is reflecting in metrics

Metrics reflected an incorrect PDS workflow URL.

Doc Impact:

There is no doc impact.

3 25.1.2X
37412375 core_services is not picking up the latest values after helm upgrade - app-info pod not restarted

After a Helm upgrade, core_services did not pick up the latest values because the app-info pod was not restarted.

Doc Impact:

There is no doc impact.

3 24.2.3
36915221 Upgrade fails from PCF 24.1.0_GA to 24.2.0_rc7 " Error creating bean with name 'hookService' defined in URL"

The upgrade from PCF 24.1.0 to 24.2.0 failed with “Error creating bean with name 'hookService' defined in URL.”

Doc Impact:

There is no doc impact.

3 24.2.0
37253510 NRF-Management pod is reporting "Possibly consider using a shorter maxLifetime value"

The NRF-Management pod reported the message: “Possibly consider using a shorter maxLifetime value.”

Doc Impact:

There is no doc impact.

3 24.3.0
36819078 Error Mapping: PCF is sending 400 Error Code with "error" as "Bad Request" instead of "title" as "Bad Request"

PCF returned HTTP 400 with error: "Bad Request" instead of using title: "Bad Request".

Doc Impact:

There is no doc impact.

3 23.4.4
37028378 EGW not retrying to sameNRF or NextNRF when "errorCodes: -1" for errorSetId: 5XX on retryErrorCodeSeriesForNex/SametNrf OauthClient configuration.

With errorCodes: -1 for errorSetId: 5XX in the OAuthClient retry configuration, Egress Gateway did not retry to the same NRF or the next NRF as expected.

Doc Impact:

There is no doc impact.

3 24.2.0
37350622 Import on PCF gave BAD_REQUEST for PCF Session Management along with other errors in partial success report.

Policy import on PCF returned BAD_REQUEST for “PCF Session Management” and produced a partial-success report with additional errors.

Doc Impact:

There is no doc impact.

3 24.3.0
36832070 Issue with "Enforcement Network Element Name" blockly

The “Enforcement Network Element Name” blockly caused the Policy Rule Engine (PRE) to stop evaluating the policy tree, and PCRF did not install rules or rulebases when the issue occurred.

Doc Impact:

There is no doc impact.

3 23.2.8
38196773 Exception table entries getting generated resulting in DB Entries on Site-1 and Site-2 out of sync after doing an in service upgrade from 24.2.6 to 25.1.200 on a 2 site GR setup

After an in-service upgrade from 24.2.6 to 25.1.200 on a 2-site GR setup, exception table entries were generated and caused DB entries on site 1 and site 2 to become out of sync.

Doc Impact:

There is no doc impact.

3 25.1.200
39141924 NPE seen in PCRF core "ocpm.pcrfcore.msc.rc.diameter.model.PCRFDiameterEnfSession.getServingGatewayAddressList()" is null" during PCRF call model

During a user call model run, PCRF core threw an NPE because PCRFDiameterEnfSession.getServingGatewayAddressList() returned null.

Doc Impact:

There is no doc impact.

3 25.1.200

4.2.11 SEPP Resolved Bugs

Release 25.2.200

Table 4-10 SEPP 25.2.200 Resolved Bugs

Bug Number Title Description Severity Found in Release
39114199 Missing dimensions in prometheus metric oc_ingressgateway_rss_ratelimit_total

Dimensions were missing from the Prometheus metric oc_ingressgateway_rss_ratelimit_total.

The Prometheus metric oc_ingressgateway_rss_ratelimit_total was updated to include the missing dimensions peer_plmn_id and nf_instance_id, and the label names were aligned with the documentation: http_method and remote_sepp_set_name.
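As an illustration only (the expression and label values below are hypothetical, not taken from the product), the corrected dimensions allow queries such as:

```promql
sum by (peer_plmn_id, remote_sepp_set_name) (
  rate(oc_ingressgateway_rss_ratelimit_total{http_method="POST"}[5m])
)
```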

Doc Impact:

Updated the dimensions of "oc_ingressgateway_rss_ratelimit_total" metrics in the "Metrics" section of the Oracle Communications Cloud Native Core Security Edge Protection Proxy User Guide.

4 25.2.100
39043874 SEPP 25.1.200 | Some alerts missing in prometheus_alertrules.yaml

In 25.1.200, several documented alerts were absent or misnamed in prometheus_alertrules.yaml. This included upgrade and rollback events and the Cn32f HS routing alerts. The documentation was updated to state that upgrade and rollback alerts were application-generated and were surfaced through Alertmanager. The documentation also noted that the update-db pathToFetchAlertManagerEndPoint setting had to be set. Routing alert names were corrected to SEPPCn32fHSRoutingFailureAlertCritical and SEPPCn32fHSRoutingFailureAlertWarning in the documentation and in the alert rules.

Doc Impact:

Updated the alerts names in the "Alerts" section of Oracle Communications Cloud Native Core Security Edge Protection Proxy User Guide.

4 25.1.200
38374302 content-type should be updated for invalid value for /sepp-configuration/v1/nif/msg-copy/params

The Content-Type header was incorrect when an invalid value was provided for /sepp-configuration/v1/nif/msg-copy/params during an update of NIF message copy parameters.

This issue occurred when a long string was supplied for apiName, and the response was 400. The response used Content-Type: application/json instead of the expected Content-Type: application/problem+json.
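To illustrate the expected behavior, the sketch below (a hypothetical helper with illustrative field values, not captured traffic or product code) shows a 400 problem-details body served with the RFC 7807 media type:

```python
import json

# Hypothetical sketch: an error response for an invalid apiName value
# should use the problem-details media type, not plain application/json.
def problem_response(detail):
    body = {
        "type": None,
        "title": "Bad Request",
        "status": 400,
        "detail": detail,
    }
    # Per RFC 7807, problem-detail bodies use this media type:
    headers = {"Content-Type": "application/problem+json"}
    return headers, json.dumps(body)

headers, body = problem_response("apiName exceeds maximum length")
assert headers["Content-Type"] == "application/problem+json"
assert json.loads(body)["status"] == 400
```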

Doc Impact:

There is no doc impact.

4 25.2.100
38942095 SEPP: update required in user guide related to Originating Network Id feature

The following documentation issues were fixed:

  • A typographical error was corrected. “PLMMN” was changed to “PLMN”.
  • Product term spacing was made consistent. “CSEPP/PSEPP” and “C SEPP/P SEPP” were aligned to use a single format.
  • The capitalization of “SBI” was made consistent. “SBI” and “Sbi” were aligned to use a single format.

Doc Impact:

Updated the "Support for Originating Network Id Header Validation, Insertion, and Transposition" section in Oracle Communications Cloud Native Core Security Edge Protection Proxy User Guide.

2 25.1.200
38941487 SEPP: update required in user guide related to NIF

The endpoint documented for enabling the Rejected Message copy feature was incorrect. The SEPP user guide listed /sepp-configuration/v1/nif/options, but the correct endpoint was /sepp-configuration/v1/nif/msg-copy/options.

Doc Impact:

Updated the "Integrating SEPP with 5G Network Intelligence Fabric (5G NIF)" section in Oracle Communications Cloud Native Core Security Edge Protection Proxy User Guide.

4 25.2.100
38930316 Metric descriptions duplicated in Common and Gateway Metrics sections:Possible documentation issue - v25.2.200-beta.4

The metric descriptions for oc_configclient_request_total and oc_configclient_response_total were duplicated in the User Guide. These metrics were documented in both Section 5.1.2 (Common Metrics) and Section 5.1.31.1 (Ingress Gateway Metrics) with the same Metric Details, Microservice, Type, and Dimensions.

Doc Impact:

Updated the "Metrics" section in Oracle Communications Cloud Native Core Security Edge Protection Proxy User Guide to remove the duplicate metrics.

4 25.2.100
38902496 User Guide Update Required : Documentation Mismatch in ocsepp_n32f_mediation_requests_total metrics

A documentation mismatch was observed for the ocsepp_n32f_mediation_requests_total metric. The labels exposed in Prometheus and the labels documented in the SEPP 25.2.200 User Guide were different. The User Guide documented the label as requestType, but Prometheus exposed the label as message_type.

Doc Impact:

Updated the dimension of "ocsepp_n32f_mediation_requests_total" metrics in the "Metrics" section of Oracle Communications Cloud Native Core Security Edge Protection Proxy User Guide.

4 25.2.100
38897961 Missing dimensions in prometheus metric cgroup_cpu_nanoseconds & cgroup_memory_bytes

A mismatch was observed between the documented dimensions and the dimensions visible in Prometheus for the cgroup_cpu_nanoseconds and cgroup_memory_bytes metrics.

Doc Impact:

Updated the dimensions of "cgroup_cpu_nanoseconds" and "cgroup_memory_bytes" metrics in the "Metrics" section of Oracle Communications Cloud Native Core Security Edge Protection Proxy User Guide.

4 25.2.100
38883730 SEPP_PERF: Missing dimensions in prometheus metric ocsepp_cn32f_requests_failure_total

The ocsepp_cn32f_requests_failure_total metric did not expose the expected dimensions. The http_error_message dimension documented in the SEPP 25.2.200 User Guide was missing in the Prometheus output.

Doc Impact:

Updated the dimensions of " ocsepp_cn32f_requests_failure_total" metrics in the "Metrics" section of Oracle Communications Cloud Native Core Security Edge Protection Proxy User Guide.

4 25.2.100
38882763 Documentation Mismatch in oc_egressgateway_outgoing_ip_type metrics – SEPP_25.2.200 User Guide Update Required

A mismatch was observed between the labels documented in the SEPP User Guide and the labels exposed in Prometheus for the oc_egressgateway_outgoing_ip_type metric. The User Guide documented the dimensions as destinationHost and destinationHostAddressType, but Prometheus exposed the labels as DestinationHost and DestinationHostAddressType.

Doc Impact:

Updated the dimensions of "oc_egressgateway_outgoing_ip_type" metrics in the "Metrics" section of Oracle Communications Cloud Native Core Security Edge Protection Proxy User Guide.

4 25.2.100
38861926 Missing dimensions in prometheus metric ocsepp_configmgr_routeupdate_total

The ocsepp_configmgr_routeupdate_total metric did not expose the expected dimensions. The vendor dimensions documented in the SEPP 25.2.200 User Guide were missing in the Prometheus output.

Doc Impact:

Updated the dimensions of " ocsepp_configmgr_routeupdate_total" metrics in the "Metrics" section of Oracle Communications Cloud Native Core Security Edge Protection Proxy User Guide.

4 25.2.100
37818065 ERRORs being reported in SEPP plmn egw pod logs intermittently

An intermittent error was observed in PLMN egress gateway pods even when there was no traffic. The logs showed the following message:

Watcher exception due to: errorMessage: too old resource version: 464623931 (554740871) errorCause: too old resource version: 464623931 (554740871)

This bug was based on GW bug 38082705.

Doc Impact:

There is no doc impact.

3 25.1.200
38186154 multipe DB related errors observed when SEPP is freshly installed

Errors were observed in SEPP pod logs during installation. The errors reported missing tables in SEPPDB. This occurred because the pods started before the database tables were created in SEPPDB. This timing issue was resolved in code.

Doc Impact:

There is no doc impact.

4 25.1.200
38186154 SEPP Grafana dashboard provided with release has incorrect expression for CN32F Request-Response Latency Time

The KPI used to calculate CN32F Request-Response Latency Time in the Grafana dashboard was incorrect.

Doc Impact:

There is no doc impact.

3 25.1.100
38840521 Request SEPP User Guide metric dimensions used in metrics all be defined in "Table 5-2 Dimension" and to correct metric and KPI typos

In 25.1.200, the Metrics and KPI documentation contained undefined or inconsistent dimensions and typos. Examples included PLMN_ID and peer_plmn_id, nfInstanceId, and error_message. The documentation also listed Kubernetes labels that were not metrics and were not included in Table 5-2. All metrics were audited. Missing dimension definitions were added for method, reason, database, assertedPlmnValue, PLMN_ID, and InstanceIdentifier. Non-metric labels were removed. Typos and gateway dimensions were corrected, including Route_path, configVersion, updated, vfqdn. KPI formulas were updated.

Doc Impact:

Updated the dimensions across the metrics in the "Metrics" section of Oracle Communications Cloud Native Core Security Edge Protection Proxy User Guide. Removed the following dimensions from the metrics:

  • helm_sh_chart
  • heritage
  • instance
  • job
  • pod_template_hash
  • endpoint
  • app_kubernetes_io_*
  • exported_namespace
  • exported_pod

The KPI expressions were also updated.

4 25.1.200
38829253 Internal server error from SEPP when the payload is bigger than 262144 bytes

An internal server error occurred in SEPP when the payload exceeded 262,144 bytes. Requests with a payload larger than 262,144 bytes were not routed through SEPP.

Doc Impact:

There is no doc impact.

3 25.1.100
38924719 "java.nio.channels.ClosedChannelException" shall be added in the criteria set for NIF

During long-duration performance testing (10-hour and 72-hour runs) at 40,000 messages per second (MPS) on SEPP version 25.1.200, high latency was observed. The issue was reproducible when a 50ms server delay was introduced, suggesting a scalability or processing bottleneck when multiple SEPP features were enabled. Additionally, enabling the CAT-3 Previous Location Check feature increased latency by approximately 30ms across all call flows.

Doc Impact:

There is no doc impact.

3 25.1.200
38188255 Different error codes (nif enabled and disabled) for timeout when there is a delay of 1000 ms at server

Different timeout error codes were received at the consumer depending on whether NIF was enabled or disabled. The timeout behavior should have been consistent.

Scenario 1:

  • The server delay was 1000 ms.
  • NIF was enabled.
  • A service request was sent.
  • The consumer received 504 GATEWAY_TIMEOUT after approximately 1100 ms.

Scenario 2:

  • The server delay was 1000 ms.
  • NIF was disabled.
  • A service request was sent.

The consumer received:

    {"type":null,"title":"Request Timeout","status":408,"detail":"sepp2.inter.oracle.com: egressgateway: Request Timeout: OSEPP-EGW-E002","instance":null,"cause":"Request Timeout at EGW","invalidParams":null}

Doc Impact:

Added the troubleshooting scenario 'Timeout Variations When NIF is Enabled or Disabled' in the "Integrating SEPP with 5G Network Intelligence Fabric (5GNIF) feature" section in Oracle Communications Cloud Native Core Security Edge Protection Proxy Troubleshooting Guide.

3 25.1.200
38187439 response is not in json format when there is a timeout error

The error response body was not in JSON format in the timeout scenario. It should have been returned in JSON format.

Doc Impact:

There is no doc impact.

3 25.1.200
38482323 Discrepancy in "content-type" header in response to consumer and message copy to NIF

A discrepancy was observed in the Content-Type header between the response sent to the consumer and the message copy sent to NIF. The Content-Type in the response to the consumer should have been application/problem+json. There should have been no discrepancy between the headers and body sent to the consumer and to NIF.

Doc Impact:

There is no doc impact.

3 25.2.100
38553128 network-id-header-validation failing for 3gpp-sbi-routing-binding

network-id-header-validation failed for 3gpp-sbi-routing-binding. The validation should have been successful because the header contained the correct network id.

Doc Impact:

There is no doc impact.

3 25.2.100
38850450 SEPP 25.1.201 - RSS config allowed not matching Remote SEPP Name resulting in routing behavior

Intermittent N32f failures occurred after editing the RSS configuration because SEPP validated and bound the RSS "Primary SEPP" to the Remote SEPP name in a case-sensitive (or inconsistent) way, so changing GROUPSEPP/groupsepp to GroupSEPP broke the Remote SEPP association and stopped N32f traffic.

Implemented case-insensitive Remote SEPP name matching so groupSEPP and GroupSEPP are treated as the same, preventing RSS edits from breaking N32f.
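A minimal sketch of case-insensitive name matching (the helper and names are illustrative, not the SEPP implementation):

```python
# Hypothetical sketch: compare Remote SEPP names case-insensitively so that
# an RSS edit changing only the letter case does not break the association.
def names_match(configured: str, referenced: str) -> bool:
    # casefold() gives a caseless comparison that also handles non-ASCII text.
    return configured.casefold() == referenced.casefold()

assert names_match("GROUPSEPP", "groupsepp")
assert names_match("GroupSEPP", "groupSEPP")
assert not names_match("GroupSEPP", "GroupSEPP2")
```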

Doc Impact:

There is no doc impact.

3 25.1.201
38808302 Configuration guidance requested: DNS-SRV and N32 Ingress traffic

When DNS SRV was used and multiple Remote SEPPs shared the same virtual host, SEPP rejected inbound requests (HTTP 500) from Remote SEPPs that were not present in the Remote SEPP Set. SEPP also could fail outbound requests with the error “N32Context or Remote SEPP Set Not Found,” even though the SRV and A records were valid.

Doc Impact:

There is no doc impact.

3 25.1.201

4.2.12 SCP Resolved Bugs

Release 25.2.201

Table 4-11 SCP 25.2.201 Resolved Bugs

Bug Number Title Description Severity Found in Release
39129627 Topology Source API giving 500 internal server error on ATS pipeline run

The Topology Source API intermittently returned HTTP 500 during ATS runs when the TSI API was triggered in parallel by two functional tests.

Doc Impact: There is no document impact.

3 25.2.201
39097113 Fix pegging of ocscp_metric_nf_oci_rx_total in response

The ocscp_metric_nf_oci_rx_total metric was not incremented when an OCI header was received in a response from the producer.

Doc Impact: There is no document impact.

3 24.3.0
39008820 Regression fail during nightly runs

ATS regression intermittently failed OCI overload enforcement scenarios because more than the expected number of messages were routed to an overloaded SCP instance, causing the associated validation metric check to fail.

Doc Impact: There is no document impact.

3 25.2.100
39008329 SCP uses port 80 instead of 443 towards UDR causing nudr-dr timeouts

After upgrading, SCP intermittently routed HTTPS nudr-dr requests to the UDR using port 80 instead of 443, which led to timeouts and HTTP 504 responses.

Doc Impact: There is no document impact.

2 25.1.202
39008130 SCP tampering with HTTP2 body in message feed

SCP modified the HTTP/2 message body sent to the message feed or OCNADD compared to the body received, which could also result in an incorrect Content-Length.

Doc Impact: There is no document impact.

3 25.1.200
39085966 Duplicate header values are being concatenated in response to consumer

SCP worker incorrectly concatenated duplicate response header values into a single value for the same header name when sending responses to the consumer.

Doc Impact: There is no document impact.

3 25.1.200
39067307 UPF profile registration failure due to Incorrect usage of NonEmpty annotation for Snssai

The UPF profile registration intermittently failed because SCP applied an invalid NotEmpty/NonEmpty validation constraint to the Snssai object, resulting in a validation exception and notification processing failure.

Doc Impact: There is no document impact.

3 25.1.200
39056397 cnc-jetty-client is converting the null body to empty byte array for the For 204 response

The cnc-jetty-client incorrectly returned an empty byte array (byte[0]) instead of a null body when handling HTTP 204 (No Content) responses.

Doc Impact: There is no document impact.

3 25.1.200
39055601 Exclude OCSCP-INITIATED-MSG from SCPIngressTrafficRoutedWithoutRateLimitTreatment alert expression

The SCPIngressTrafficRoutedWithoutRateLimitTreatment alert was incorrectly raised for SCP self-initiated requests (such as subscription/audit) when an XFCC header was present.

Doc Impact: There is no document impact.

4 24.3.0
39050249 Performance Improvements for SCP 25.2.200 with code changes like making headers case sensitive

Performance improvements were implemented through internal code optimizations (including changes related to header handling).

Doc Impact: There is no document impact.

3 25.1.200
38907506 Incorrect examples for metric ocscp_audit_2xx_empty_nf_array in User Guide

SCP User Guide showed incorrect examples for the ocscp_audit_2xx_empty_nf_array metric by displaying examples for ocscp_audit_2xx_empty_nf_array_rx_total instead.

Doc Impact:

Updated the example of the ocscp_audit_2xx_empty_nf_array metric in Oracle Communications Cloud Native Core, Service Communication Proxy User Guide.

4 25.2.101
38918755 SCP forwarding single header even when multiple headers are present with same header name

SCP forwarded only one instance of a header when multiple headers with the same name (for example, 3gpp-sbi-binding with different scope values) were present, causing the consumer to receive incomplete header information.
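HTTP allows a header name to occur multiple times, and a proxy should forward each occurrence separately. The toy forwarding step below is illustrative (hypothetical helper and header values), not SCP code:

```python
# Hypothetical sketch: forward every (name, value) header pair unchanged,
# preserving duplicate occurrences of the same header name.
def forward_headers(received):
    return list(received)

incoming = [
    ("3gpp-sbi-binding", "bl=nf-instance; nfinst=abc"),
    ("3gpp-sbi-binding", "bl=nf-set; nfset=xyz"),
]
forwarded = forward_headers(incoming)
assert forwarded == incoming  # both header instances survive
assert len(forwarded) == 2
```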

Doc Impact: There is no document impact.

2 25.2.100
38968429 Documentation Inconsistencies between SCP Retry and Reselection

SCP documentation contained semantic inconsistencies in how it described alternate routing behavior, using “retry” and “reselection” interchangeably (including OCI guidance where REROUTE reflected reselection/override rather than retry).

Doc Impact:

Updated the alternate routing and reselection in different sections of Oracle Communications Cloud Native Core, Service Communication Proxy User Guide and Oracle Communications Cloud Native Core, Service Communication Proxy REST Specification Guide.

4 25.1.100
38998812 REST API Discrepancies between RG and Configuration openApi spec

The configuration OpenAPI specification incorrectly included deprecated and internal APIs, and the Oracle Communications Cloud Native Core, Service Communication Proxy REST Specification Guide did not accurately reflect the intended configuration REST API set.

Doc Impact:

Updated /ocscp/scpc-configuration/{version}/nf-instances/nfInstanceIds and /ocscp/spc-configuration/{version}/scpfeaturestatus in "Table 2-90 Parameters for NF Topology Groups" and "Table 2-455 Resources" in Oracle Communications Cloud Native Core, Service Communication Proxy REST Specification Guide.

4 25.1.100
39002331 Inconsistent Scheme Usage in SCP Responses for ASM Setup with Egress HTTPS Support Disabled

In ASM deployments with egress HTTPS support disabled, SCP incorrectly used the HTTP scheme when constructing the 3gpp-sbi-target-apiRoot header (in error responses) and when updating Location headers (for relative-to-absolute conversion or aliasing), even when the producer NF profile scheme was HTTPS.

Doc Impact:

There is no document impact.

3 25.1.100
38793300 SCP MIB file has alert name as "scpGLEgressRLRemoteParticipantWithDuplicateNFInstanceId" whereas alert rule file has "SCPGlobalEgressRLRemoteParticipantWithDuplicateNFInstanceId"

The SCP MIB notification name for the duplicate NF instance ID global egress rate-limit alert did not match the corresponding alert rule name, which could cause confusion and mismapped alerting integrations.

Doc Impact:

There is no document impact.

4 25.1.200
38794379 SCP unable to route when port definition is missing from NFServices in NF Profile using only FQDN

SCP incorrectly routed HTTPS traffic to port 80 when an NF profile contained only an FQDN and did not specify a port in the nfServices definition, causing routing failures.

Doc Impact:

There is no document impact.

3 24.2.6
38795347 Incorrect pegging of ocscp_audit_2xx_empty_nf_array on getting error rsp for individual profile query

The ocscp_audit_2xx_empty_nf_array metric was incorrectly incremented when an individual profile query returned an error response.

Doc Impact:

There is no document impact.

4 25.1.100
38831055 SCP worker logs have producer FQDN against scpAuthority which is incorrect

SCP worker logs incorrectly recorded the producer FQDN as the scpAuthority, which could mislead troubleshooting and log analysis.

Doc Impact:

There is no document impact.

4 24.2.0
38902966 SCP log representation combines distinct 3gpp-sbi-binding headers, making individual instances indistinguishable

SCP Tx logs incorrectly represented multiple 3gpp-sbi-binding headers received as separate instances by aggregating them into a single header entry, which could mislead debugging and observability.

Doc Impact: There is no document impact.

3 25.2.101
38004328 Installation guide has incorrect definition of mediation_status parameter

The Installation Guide incorrectly described the mediation_status parameter (including implying that enabling it routes all requests to mediation), even though routing depended on mediation trigger-point matching.

Doc Impact:

There is no document impact.

4 25.1.100
37562535 Fortify - Privacy Violation; possible mishandle of confidential information

A Fortify scan flagged a potential privacy violation where checkConfigFilesPresent() could log confidential configuration information through a warning message.

Doc Impact:

There is no document impact.

3 25.1.100
37543889 SubscriptionInfo is getting ignored in case if User comments out customInfo in NRF Details.

When customInfo was commented out in the NRF profile configuration, SCP ignored subscriptionInfo settings (including scheme) and instead derived the scheme from scpInfo.

Doc Impact:

There is no document impact.

4 25.1.100
36926043 SCP shows unclear match header and body in mediation trigger points

In the Mediation Trigger Points UI, SCP displayed [object Object] instead of the configured match header and body values after saving a trigger point.

Doc Impact:

There is no document impact.

4 24.2.0
36600245 SCPIgnoreUnknownService Alerts is not getting raised for all the ignored services at SCP

The SCP Ignore Unknown Service alert did not trigger for every ignored service (including across different NF instance IDs) and did not reliably clear after a fresh deployment, even though the ignore metric was incremented.

Doc Impact: There is no document impact.

3 24.2.0
38701692 Need to peg "ocscp_worker_transaction_cancel_total", when a transaction gets cancelled due to stream reset from downstream.

When a downstream stream reset cancelled a transaction, SCP incorrectly incremented ocscp_metric_scp_generated_response_total instead of ocscp_worker_transaction_cancel_total.

Doc Impact:

Added the ocscp_worker_ingress_stream_cancelled_total metric in "Table 5-265 ocscp_worker_ingress_stream_cancelled_total" in Oracle Communications Cloud Native Core, Service Communication Proxy User Guide.

4 25.1.200
38701238 Every SCP Audit, picks up NRF profile for de-register event and SCP's self profile as change event to have trigger towards notification, which later gets ignored at notification

SCP audit incorrectly triggered audit requests for NF type NRF, causing unnecessary notification processing that was later ignored.

Doc Impact:

There is no document impact.

4 25.1.200
38701191 Metric "ocscp_metric_5gsbi_upstream_destination_rejection_total" to be pegged for egressRL and oauth failure conditions

The ocscp_metric_5gsbi_upstream_destination_rejection_total metric did not account for upstream destination rejections caused by egress rate limiting and OAuth/access token failures.

Doc Impact:

There is no document impact.

4 25.1.200
38683537 Config Questions

Documentation gaps and ambiguities in configuration examples and API outputs (system options, nfservice outputs, upgrade or rollback events, and OCI enablement dependencies) caused confusion.

Doc Impact:

Updated the systemoptions and Overload Control Information (OCI) feature REST APIs in Oracle Communications Cloud Native Core, Service Communication Proxy REST Specification Guide.

4 25.1.100
38653426 Document examples of activation-group in Mediation feature

The SCP Mediation documentation did not include configuration examples for activation-group, which made the feature harder to implement correctly.

Doc Impact:

Added use case 8 in the Mediation Rules Configuration section in Oracle Communications Cloud Native Core, Service Communication Proxy User Guide.

3 25.1.100
38645905 ocscp_metric_http_ar_tx_req_total – Clarification

Documentation for ocscp_metric_http_ar_tx_req_total was unclear because the metric incremented for both SCP alternate routing and SCP reselection scenarios.

Doc Impact:

Updated the possible values of alternate_route_cause and ocscp_cause dimensions in Oracle Communications Cloud Native Core, Service Communication Proxy User Guide.

3 25.1.100
38623006 SCP generated NRF Proxy 5xx during upgrade & rollback with Model D traffic

During upgrade and rollback in a TLS deployment with Model D traffic enabled, SCP intermittently generated NRF proxy 5xx errors, which caused traffic impact.

Doc Impact:

There is no document impact.

3 25.2.100
38619677 Inconsistent Values for serviceIpFamilies in CV File vs. Installation Guide

The custom_values.yaml file documentation for serviceIpFamilies listed incomplete allowed values, which conflicted with the Installation Guide and could confuse users during configuration.

Doc Impact:

There is no document impact.

3 25.2.100
38614032 Request to Clarify nativeEgressHttpsSupport Comment Line in SCP Custom Values yaml file

The comment for nativeEgressHttpsSupport in the SCP custom values YAML used unclear terminology (“PNF”) and could mislead users about how egress HTTP vs HTTPS is selected when the flag is disabled.

Doc Impact:

There is no document impact.

4 25.2.100
38603594 SCP displays an incorrect timestamp format in the SCP Feature Status tab on the console

The SCP Feature Status console displayed the creation timestamp in an incorrect format compared to the format documented in the User Guide.

Doc Impact:

There is no document impact.

4 25.2.100
38600256 SCP 24.3.0 SCPNotificationProcessingFailureForNF alert firing

The SCPNotificationProcessingFailureForNF alert was observed firing in a lab environment on SCP 24.3.0.

Doc Impact:

There is no document impact.

3 24.3.0
38540499 Need clarification on labels & Istio resources in serviceaccount, Role & Rolebinding

Installation guide samples did not reflect new labels and Istio sidecar resource annotations present in the ServiceAccount/Role/RoleBinding manifests in 25.2.100, which caused confusion during deployment validation.

Doc Impact:

There is no document impact.

3 25.2.100
38526996 Fix low severity code issue identified during 25.2.100 release Code Audit

Low-severity issues identified during the 25.2.100 code audit were addressed to improve code quality and maintainability.

Doc Impact:

There is no document impact.

4 25.2.100
38463421 scpPrefix is not removed for notification message types

SCP did not remove the configured scpPrefix from notification messages (identified by the presence of the 3gpp-sbi-callback header), which caused the downstream NF to receive an unexpected path and fail to process the notification.

Doc Impact:

There is no document impact.

3 24.2.5
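The prefix-stripping behavior described in the fix above can be sketched as follows. The function name, header handling, and path values are assumptions for illustration and do not reflect SCP's actual implementation; only the use of the 3gpp-sbi-callback header to identify notification messages comes from the bug record:

```python
def strip_scp_prefix(path: str, scp_prefix: str, headers: dict) -> str:
    """Remove a configured SCP prefix from notification request paths.

    Notification messages are identified here by the presence of the
    3gpp-sbi-callback header, per the bug description; everything else
    about this helper is a hypothetical sketch.
    """
    is_notification = "3gpp-sbi-callback" in {k.lower() for k in headers}
    if is_notification and path.startswith(scp_prefix):
        return path[len(scp_prefix):] or "/"
    return path
```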
38471270 Update problem status of OSCP-WRK-NFSEL-E001 Error ID to configurable error code in SCP documentation

SCP was observed returning an HTTP 400 response with OSCP-WRK-NFSEL-E001 when NRF configuration contained incorrect NRF set details, and the documentation needed to reflect that the problem status/error code for this condition is configurable.

Doc Impact:

Added information about creation of HttpStatusCode for the OSCP-WRK-NFSEL-E001 error ID in Oracle Communications Cloud Native Core, Service Communication Proxy User Guide.

4 25.2.100
38471475 Problem Details update for notificationSender parameter in Routing Option Config API

Error responses from the Routing Options Config API were unclear or misleading for the notificationSender.apiNameAxHeading validation cases (missing field and invalid value).

Doc Impact:

There is no document impact.

4 25.2.100
38523731 Configuration Pre install hooks stuck if "+" in DB password

Configuration pre-install hooks hung when establishing a database connection if the configured DB password contained a + character (for example, Password123+123+123).

Doc Impact:

There is no document impact.

3 25.2.100
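The underlying pitfall is that + is a reserved character in connection URLs, so an unencoded password can be misparsed. A minimal sketch of the usual remedy, percent-encoding credentials before embedding them, is shown below; the URL shape and parameter names are illustrative, not SCP's actual hook configuration:

```python
from urllib.parse import quote

def db_connection_url(host: str, port: int, user: str, password: str, db: str) -> str:
    """Percent-encode credentials before embedding them in a connection URL.

    '+' and other reserved characters must be encoded (for example,
    '+' becomes '%2B'); otherwise the URL parser can misinterpret the
    password. This URL shape is illustrative only.
    """
    return (f"jdbc:mysql://{host}:{port}/{db}"
            f"?user={quote(user, safe='')}&password={quote(password, safe='')}")
```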
38415608 SCP is not limiting the traffic as per the defined overall aggregated rate for the NF Producer (BSF)

SCP did not enforce the overall aggregated global egress rate limit across all deployed sites for the BSF NF Producer, which caused each SCP to send traffic up to its local limit instead of sharing the configured aggregate rate.

Doc Impact:

There is no document impact.

3 24.2.5
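The aggregate-rate behavior the fix restores can be illustrated with a trivial equal-share split across sites. This is a sketch under the assumption of even sharing; real multi-site rate coordination typically weights shares by observed traffic:

```python
def per_site_egress_limit(aggregate_rate_tps: float, active_sites: int) -> float:
    """Divide a configured aggregate egress rate equally across SCP sites.

    Illustrative only: it shows why each site must send at most a share
    of the aggregate rate, rather than up to its full local limit.
    """
    if active_sites < 1:
        raise ValueError("at least one active site is required")
    return aggregate_rate_tps / active_sites
```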
38010696 Invalid Configuration Error Raised when Enhanced NF Status Processing Feature is Disabled

An invalid configuration error was raised when enhanced_nf_status_processing was disabled and empty arrays were provided for enhancedSuspendedStateRouting and suspendedStateRouting.

Doc Impact:

There is no document impact.

3 25.1.200
38004328 Installation guide has incorrect definition of mediation_status parameter

The installation guide incorrectly described the behavior of the mediation_status parameter and implied it affected traffic routing, whereas the parameter had no impact on routing behavior.

Doc Impact: Removed the scpProfileInfo.mediation_status parameter from the Global Parameters section of Oracle Communications Cloud Native Core, Service Communication Proxy Installation, Upgrade, and Fault Recovery Guide.

4 25.1.100
38701692 Need to peg "ocscp_worker_transaction_cancel_total" when a transaction gets cancelled due to stream reset from downstream

The cancellation metric was updated so ocscp_worker_transaction_cancel_total was pegged when a transaction was cancelled due to a downstream stream reset, instead of incorrectly pegging ocscp_metric_scp_generated_response_total.

Doc Impact:

There is no document impact.

4 25.1.200
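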
38746580 System intermittently hitting WARN threshold under high TPS load

The system intermittently hit WARN thresholds when processing high traffic loads due to suboptimal handling of threshold evaluation under peak throughput conditions.

Doc Impact:

There is no document impact.

3 25.2.100
38742245 Evaluate changing JDBC url to use latin1 instead of utf-8

The JDBC URI was updated to use latin1 instead of utf-8, and the use of connectionCollation in the URI was reviewed to ensure database connectivity remained correct.

Doc Impact:

There is no document impact.

4 25.2.100
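The charset change above might be carried in the connection URL as query parameters. The sketch below uses the MySQL Connector/J property names characterEncoding and connectionCollation as an assumption; the exact properties used by the product are not stated in the bug record:

```python
def jdbc_url(host: str, port: int, db: str,
             encoding: str = "latin1",
             collation: str = "latin1_swedish_ci") -> str:
    """Build a JDBC URL carrying an explicit character set and collation.

    Parameter names follow MySQL Connector/J conventions; treat the whole
    helper as an illustrative sketch rather than product configuration.
    """
    return (f"jdbc:mysql://{host}:{port}/{db}"
            f"?characterEncoding={encoding}&connectionCollation={collation}")
```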
38741854 OpenAPI file is not correctly created and packaged

The OpenAPI file was not generated and packaged correctly because the output path was not enabled, which resulted in an empty file being included in the package.

Doc Impact:

There is no document impact.

4 25.1.200
38444738 ocscp_nf_end_point value is not coming in ocscp_metric_5gsbi_rx_req_total

The ocscp_nf_end_point value was unavailable in the ocscp_metric_5gsbi_rx_req_total metric.

Doc Impact:

Updated the ocscp_metric_5gsbi_rx_req_total metric in Oracle Communications Cloud Native Core, Service Communication Proxy User Guide.

3 25.2.100
39094189 Addition of error code in Annex B of SCP UG

The SCP User Guide's list of error codes generated by SCP did not include the supported healthcheckErrorProfile.

Doc Impact:

Added a new HTTP Status Code, "503 Service Unavailable", in "Table B-1 SCP Generated Error Codes" in Oracle Communications Cloud Native Core, Service Communication Proxy User Guide.

4 25.2.100
39043243 Clarification and Documentation update needed for SCP Routing Options parameters

The SCP User Guide lacked clarifying documentation for the SCP Routing Options parameters.

Doc Impact:

There is no document impact.

4 24.3.0

Note:

Resolved bugs from 24.2.4 and 24.3.0 have been forward ported to Release 25.1.200.

Table 4-12 SCP ATS 25.2.201 Resolved Bugs

Bug Number Title Description Severity Found in Release
38704762 ATS Framework fails to handle uri formation with bracket for ipv6 of worker pod IP In IPv6-only environments, the ATS framework failed to form a valid metrics URI for the worker pod IP, which caused a URL parse error during metric collection.

Doc Impact: There is no document impact.

4 25.1.100
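The IPv6 URI issue above comes down to RFC 3986's requirement that IPv6 literals in the host component be bracketed. A hedged sketch of the fix follows; the host, port, and path values are made up for illustration:

```python
import ipaddress
from urllib.parse import urlparse

def metrics_uri(host: str, port: int, path: str = "/metrics") -> str:
    """Bracket IPv6 literals when forming a URI (RFC 3986).

    Without brackets, 'http://fd00::1:9090/metrics' parses incorrectly;
    with them, the host and port are recovered cleanly.
    """
    try:
        if isinstance(ipaddress.ip_address(host), ipaddress.IPv6Address):
            host = f"[{host}]"
    except ValueError:
        pass  # plain hostname, no bracketing needed
    return f"http://{host}:{port}{path}"
```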

4.2.13 Common Services Resolved Bugs

4.2.13.1 Egress Gateway Resolved Bugs

Release 25.2.111

There are no resolved bugs in this release.

Release 25.2.110

There are no resolved bugs in this release.

Release 25.2.109

Table 4-13 Egress Gateway 25.2.109 Resolved Bugs

Bug Number Title Description Severity Found In Release
39074445 Traffic failure on EGW when x-http2-scheme is added in GlobalRequestRemoveHeader When x-http2-scheme was added to GlobalRequestRemoveHeader with sbiRouting enabled in Egress Gateway, the request failed with a null pointer exception due to direct access of the removed header in SbiRoutingFilter. 2 25.2.109
38879902 Misconfiguration and Pod Restart (multi fault scenario) leads to 100% traffic failure Ingress traffic success rate dropped from 100% to about 9% during pod restarts and did not recover afterward. 2 25.2.106
38998368 EGW pods are restarting while running traffic Egress Gateway pods restarted continuously when traffic increased from 36K TPS to 44K TPS due to rerouting during SLF pod restarts, even though pod protection by rate limiting was enabled. 2 25.2.108
38339561 Metrics oc_ingressgateway_dd_unreachable and oc_egressgateway_dd_unreachable are not resetting to value zero after connection with DD is restored After the connection with Oracle Communications Network Analytics Data Director was restored, the oc_ingressgateway_dd_unreachable and oc_egressgateway_dd_unreachable metrics did not reset to 0. 3 24.1.5
38831695 Different value getting configured for parameter "refresh-scheduler" in EGW and ARS microservice in comparison to what is present in values.yaml Different values were configured for the refresh-scheduler parameter in Egress Gateway and Alternate Route Service microservice compared to the values specified in the values.yaml file. 3 25.2.106
38645273 Signature schemes in certificate request message TLS contains "not supported" values When Ingress Gateway acted as a TLS server in TLS 1.2, unsupported signature algorithm values were included in the signature algorithms extension, and there was no option to exclude these values. 3 25.1.200
38661597 Metric oc_egressgateway_peer_health_status showing incorrect peer status in EGW The oc_egressgateway_peer_health_status metric showed an incorrect peer status in Egress Gateway. 3 25.1.206

Note:

Resolved bugs from 25.2.1xx have been forward ported to Release 25.2.109.

Release 25.2.108

Table 4-14 Egress Gateway 25.2.108 Resolved Bugs

Bug Number Title Description Severity Found In Release
38957729 EGW is not able to send access token request towards NRF in TLS enabled setup In TLS-enabled deployments with OAuth enabled, Egress Gateway failed to send access-token requests to the NRF because it could not establish a TLS connection, causing call failures. 2 25.1.208
38894990 PCF EGW not giving precedence to IPV6 and resolving to IPV4 address Istio logs showed that PCF resolved an NRF host name to a downstream local IPv4 address when it should have resolved to an IPv6 address. 3 25.1.203
38569278 Ingress Gateway reports a NullPointerException after the installation of PCF After PCF was installed, Ingress Gateway pods reported a NullPointerException when they started. 3 25.2.108

Note:

Resolved bugs from 25.2.1xx have been forward ported to Release 25.2.108.

Release 25.2.107

Table 4-15 Egress Gateway 25.2.107 Resolved Bugs

Bug Number Title Description Severity Found In Release
38777247 PCF using expired token (25.2.107) PCF intermittently used an expired token, which caused calls to fail. 2 25.1.200
38795596 During EGW Upgrade from 25.1.203 to 25.2.106 Observed "java.io.InvalidClassException" on new 25.2.106 egw pods and No traffic drop observed During an Egress Gateway upgrade from 25.1.203 to 25.2.106, the new Egress Gateway 25.2.106 pods reported a java.io.InvalidClassException, and no traffic drop was observed. 3 25.2.106
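The expired-token bug above is usually guarded against by treating a cached token as expired slightly before its actual exp claim. A minimal sketch, with an assumed safety margin that is not a product setting:

```python
import time

def token_is_usable(exp_epoch: float, skew_seconds: float = 30.0) -> bool:
    """Treat a cached access token as expired before its exp claim.

    A safety margin avoids sending a request with a token that expires
    while the request is in flight; 30 seconds is illustrative.
    """
    return time.time() + skew_seconds < exp_epoch
```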

Note:

Resolved bugs from 25.2.1xx have been forward ported to Release 25.2.107.

Release 25.2.106

Table 4-16 Egress Gateway 25.2.106 Resolved Bugs

Bug Number Title Description Severity Found In Release
38661262 HealthStatus/peerSet GET, giving 500, NULL POINTER EXCEPTION in response A GET request to the /egw/healthStatus/peerSet endpoint could have returned HTTP 500 with a NullPointerException when peer monitoring was enabled and peer/peerset/route configuration was present. 2 25.2.106
38704688 EGW peer health status is inconsistent in case of multiple EGW pods in IPv6 with a synchronization delay of ~>=1min In IPv6 deployments with multiple Egress Gateway replicas and peer monitoring enabled, peer health could have been reported inconsistently across pods (some pods marking a peer healthy while others marked it unhealthy), leading to intermittent call failures when an unhealthy peer was selected. 2 25.1.207
38702789 Peer health ping request timing out after fresh install/upgrade in IPv6 In dual stack IPv6 mode, Egress Gateway peer health pings to /health/v3 could time out after a fresh installation or upgrade, marking all peers unhealthy and causing call failures. 2 25.1.207
38719525 peer health status is showing incorrect peer scheme for https peer in EGW 25.2.105 build The Egress Gateway peer health status output could have displayed an incorrect peer scheme for peers configured to use HTTPS when TLS was enabled. 2 25.2.105
37914904 Required Grafana dashboard JSON containing all the metrics for PI-B-25 PoP25 feature (IGW+EGW) along with Traffic success The provided Grafana dashboard JSON for Egress Gateway and Ingress Gateway metrics set was missing Egress Gateway traffic success panel, resulting in incomplete visibility for traffic success. 4 25.1.200
38324716 Mounting of secrets is not backward compatible approach After secrets were changed to be volume mounted for TLS 1.3 support on Kubernetes, updating or adding a new secret (for example, for CCA-related configuration) could have required a Helm upgrade to include the new secret in the mount list, unlike earlier behavior. 3 25.1.200
38235950 NPE seen in egress gateway after pod restart After restarting Egress Gateway pods, Egress Gateway could have thrown a NullPointerException during startup, observed across all Egress Gateway pods. 3 25.2.100
38325304 cgiu_jetty_ip_address_fetch_failure metric name shall start with oc rather than cgiu The cgiu_jetty_ip_address_fetch_failure metric name did not follow the standard oc_ prefix naming convention. 3 25.1.200
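The prefix fix above amounts to rewriting the leading token of the metric name to the standard convention. The helper below is hypothetical, swapping the first underscore-delimited token for the oc_ prefix:

```python
def normalized_metric_name(name: str, std_prefix: str = "oc_") -> str:
    """Rewrite a nonstandard metric prefix (e.g. cgiu_) to the oc_ convention.

    Illustrative sketch only: replaces the first underscore-delimited
    token with the standard prefix, leaving compliant names untouched.
    """
    if name.startswith(std_prefix):
        return name
    _, sep, rest = name.partition("_")
    return std_prefix + rest if sep else std_prefix + name
```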

Note:

Resolved bugs from 25.1.1xx and 25.2.1xx have been forward ported to Release 25.2.106.

4.2.13.2 Ingress Gateway Resolved Bugs

Release 25.2.111

Table 4-17 Ingress Gateway 25.2.111 Resolved Bugs

Bug Number Title Description Severity Found In Release
39158445 APIGW 25.2.110 IGW init-container fails in CNCC LDAPS SSL setup: trustStorePassword is null In APIGW image 25.2.110, startup failed in InitialConfiguration.setConfigInfo() because the init container did not read the truststore password from the configured secret file, so trustStorePassword was null when it tried to create the truststore. 2 25.2.110

Note:

Resolved bugs from 25.2.1xx have been forward ported to Release 25.2.111.

Release 25.2.110

Table 4-18 Ingress Gateway 25.2.110 Resolved Bugs

Bug Number Title Description Severity Found In Release
39126366 Separate CIS/F5 labels for *-ingress-gateway-intra-nf service In the Gateway Services CNC Console 25.2.x, a new *-ingress-gateway-intra-nf service was created and inherited the same F5 CIS labels as the main ingress-gateway service, which caused F5/CIS mapping conflicts and external FQDN routing issues in Webscale 1.3. The service did not support separate label or annotation input and reused the main ingress-gateway CIS labels by default. 2 25.2.104

Note:

Resolved bugs from 25.2.1xx have been forward ported to Release 25.2.110.

Release 25.2.109

Table 4-19 Ingress Gateway 25.2.109 Resolved Bugs

Bug Number Title Description Severity Found In Release
39064453 Label error_reason for metrics occnp_oc_ingressgateway_http_responses_total resulting in high Cardinality The error_reason label in the occnp_oc_ingressgateway_http_responses_total metric resulted in high cardinality, as multiple entries were generated for the same error during the performance run. 2 25.2.108
38817454 IGW is unable to read the CCA secret when deployed with TLS v1.3 An SSL exception was observed in the Ingress Gateway logs, which prevented it from reading the CCA secret when NRF with Ingress Gateway was deployed with TLS 1.3 enabled and CCA configured in REST API mode. 2 25.2.106
38879902 Misconfiguration and Pod Restart (multi fault scenario) leads to 100% traffic failure Ingress traffic success rate dropped from 100% to about 9% during pod restarts and did not recover afterward. 2 25.2.106
38339561 Metrics oc_ingressgateway_dd_unreachable and oc_egressgateway_dd_unreachable are not resetting to value zero after connection with DD is restored After the connection with Oracle Communications Network Analytics Data Director was restored, the oc_ingressgateway_dd_unreachable and oc_egressgateway_dd_unreachable metrics did not reset to 0. 3 24.1.5
38645273 Signature schemes in certificate request message TLS contains "not supported" values When Ingress Gateway acted as a TLS server in TLS 1.2, unsupported signature algorithm values were included in the signature algorithms extension, and there was no option to exclude these values. 3 25.1.200
38537413 convertHelmToRest is not merging routesconfiguration after upgrade for serverHeaderDetails In NRF 25.1.100, when Ingress Gateway routes were configured using Helm and the serverHeader feature was configured in the REST API mode, the route-level server header configuration did not work as expected. 3 25.1.201

Note:

Resolved bugs from 25.2.1xx have been forward ported to Release 25.2.109.

Release 25.2.108

Table 4-20 Ingress Gateway 25.2.108 Resolved Bugs

Bug Number Title Description Severity Found In Release
38921131 When OC discards OverloadControlFilter attempts to update internal metrics using a null key or value, resulting in a NullPointerException in ConcurrentHashMap.put() When Overload Control discarded a request, OverloadControlFilter attempted to update internal metrics using a null key or value, which resulted in a NullPointerException in ConcurrentHashMap.put(). 2 25.2.107
38831358 Non-ASM: Memory leak observed in IGW Non-ASM set-up with 12k traffic, resulting IGW pod restarts after 7days of continuous run A memory leak was observed in an Ingress Gateway non-ASM setup with 12K traffic. 2 25.2.106
38861854 Increase of failure rate % after in-service upgrade to 24.2.4 and to 25.1.202 Ingress Gateway failure rate increased after an in-service upgrade to 24.2.4 and to 25.1.202. 2 24.2.13
38787849 New "tokenCacheSize" boundary value validation is not happening even though fresh install/upgrade is success In Gateway Services 25.2.106, ASM did not validate the boundary value for the new tokenCacheSize attribute even though the fresh installation or upgrade succeeded. 3 25.2.106
38470214 "occnp_oc_ingressgateway_http_responses_total" metric not getting incremented After the configuration of SBI Ingress Error Mapping for a controlled shutdown of PCF, 503 responses were not recorded in the occnp_oc_ingressgateway_http_responses_total metric. 3 24.2.7
38569278 Ingress Gateway reports a NullPointerException after the installation of PCF After PCF was installed, Ingress Gateway pods reported a NullPointerException when they started. 3 25.2.108
38867575 Issue - Metric oc_oauth_validation_failure_total with invalid-scope dimension not pegged During the UDR regression suite, the oc_oauth_validation_failure_total metric was not getting pegged for the specified curl request. 3 25.2.107

Note:

Resolved bugs from 25.2.1xx have been forward ported to Release 25.2.108.

Release 25.2.107

Table 4-21 Ingress Gateway 25.2.107 Resolved Bugs

Bug Number Title Description Severity Found In Release
38702154 After 1.5hrs run Continuous IRC are flooding in IGW when IGW freshly installed & Traffic loss is observed from 3K to 1.7K In ASM, after 1.5 hours of continuous run, Illegal Reference Count (IRC) messages surged in Ingress Gateway after a fresh Ingress Gateway installation, and traffic dropped from 3K to 1.7K. 1 25.1.207
38767321 NPE and 500 internal ERROR observed in the POP25 error code rejections with configurable ERROR code A NullPointerException and a 500 internal error occurred during pod protection error code rejections when a configurable error code was used. 2 25.2.106
38818360 helm install is failing with execution error at "custom-header.tpl:3:3): defaultVal is null" and same working fine in the 25.1.207 build The Helm installation failed with the execution error custom-header.tpl:3:3): defaultVal is null, even though it worked in Gateway Services 25.1.207. 2 25.2.106
38787849 New "tokenCacheSize" boundary value validation is not happening even though fresh install/upgrade is success The new tokenCacheSize boundary value validation did not occur even though the fresh installation or upgrade succeeded. 3 25.2.106

Note:

Resolved bugs from 25.2.1xx have been forward ported to Release 25.2.107.

Release 25.2.106

Table 4-22 Ingress Gateway 25.2.106 Resolved Bugs

Bug Number Title Description Severity Found In Release
38293029 High CPU utilisation was observed when OAuth feature is enabled with ASM When the OAuth feature was enabled with ASM, Gateway Services could have shown elevated CPU utilization (about 10% higher than a baseline configuration) during performance testing. 3 25.2.100
38665926 allowedClockSkewSeconds IE value is wrongly configured in values.yaml file for IGW The sample values.yaml for Ingress Gateway OAuth configuration could have specified allowedClockSkewSeconds as 1L, which caused Ingress Gateway to interpret the value as 0 at runtime. 2 25.2.106
38369251 observed "Service MapDistCache has been terminated" in the old IGW pod after that new pods are not coming up when some of the IGW pods are deleted Under heavy traffic and after partial pod restarts, Ingress Gateway pods could fail to come up after some replicas were removed, with logs showing “Service MapDistCache has been terminated,” which prevented new pods from taking traffic. 3 25.2.100
38468707 IGW continues discarding discovery requests after overload trigger during ISSU, despite receiving normal load level signals During ISSU scenarios, Ingress Gateway could have continued discarding discovery requests after overload protection was triggered, even after new pods reported a normal load level, and the condition did not self-recover without restarting pods. 3 25.1.205
38272205 fillrate accepting zero value and IGW pod is restarting continuously when the pop feature is disabled When the Pod Protection feature was disabled, Ingress Gateway could have accepted a fillrate value of 0 (despite validation requiring a positive value when the feature was enabled), which led to a divide-by-zero during policer initialization and caused the pod to restart continuously. 3 25.1.200
38148295 Some of the pod protection parameters validation happening with and without flag enabled. all parameters are not in sync. Some pod protection configuration parameters, for example, congestion and deniedAction, could have been validated even when the pod protection feature flag was not enabled, resulting in inconsistent validation behavior across parameters. 4 25.1.200
36089938 errorcodeserieslist api allows configuration of errorCodeSeries having errorSet with no errorCodes The validation logic in the errorcodeserieslist API only checked whether errorCodeSeries and errorCodes were null, but did not verify if these fields were empty arrays. 3 23.4.0
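The validation gap in the last row above is the classic null-versus-empty check. The sketch below closes it by rejecting both; the field names mirror the API payload described in the bug, but the function itself is an illustrative sketch, not the product's validator:

```python
def validate_error_code_series(series) -> None:
    """Reject null *and* empty errorCodeSeries/errorCodes values.

    The original check only tested for null; 'not x' also catches
    empty lists, closing the gap described in the bug record.
    """
    if not series:  # catches both None and []
        raise ValueError("errorCodeSeries must be a non-empty list")
    for error_set in series:
        if not error_set.get("errorCodes"):  # None or empty
            raise ValueError("each errorSet must contain at least one errorCode")
```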

Note:

Resolved bugs from 25.1.1xx and 25.2.1xx have been forward ported to Release 25.2.106.

4.2.13.3 Alternate Route Service Resolved Bugs

Release 25.2.108

Table 4-23 Alternate Route Service 25.2.108 Resolved Bugs

Bug Number Title Description Severity Found In Release
38594342 Alternate Route Service reports a NullPointerException after the installation of PCF Alternate Route Service reported a NullPointerException after the installation of PCF. 3 25.2.200

Note:

Resolved bugs from 24.2.x and 25.2.1xx have been forward ported to Release 25.2.108.

Release 25.2.107

Table 4-24 Alternate Route Service 25.2.107 Resolved Bugs

Bug Number Title Description Severity Found In Release
38828633 Scheme change for the same host:port was not handled correctly during concurrent updates When an update (Deletion) was received for HTTPS SRV records, the system removed all existing entries for the same vFQDN, including those associated with HTTP. 3 23.4.106

Note:

Resolved bugs from 24.2.x and 25.2.1xx have been forward ported to Release 25.2.107.

Release 25.2.106

Table 4-25 Alternate Route Service 25.2.106 Resolved Bugs

Bug Number Title Description Severity Found In Release
38644699 TTL value in lookup is showing greater than the value defined in DNS SRV records in ARS DNS SRV lookups processed by Alternate Route Service could have returned TTL values higher than those defined in the corresponding DNS SRV records, causing lookup responses to reflect an incorrect TTL. 2 25.2.106
35644465 ARS Metric oc_dns_srv_lookup_total does not peg as per the TTL The oc_dns_srv_lookup_total metric could have incremented every 60 seconds regardless of the DNS SRV record TTL, resulting in lookup counts that did not reflect actual TTL-based lookup behavior. 3 23.2.3
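Both rows above concern honoring the DNS SRV record TTL instead of a fixed 60-second cadence. A hedged sketch of TTL-driven lookup scheduling follows; the default interval and cap are assumptions, not product settings:

```python
def next_lookup_delay(ttl_seconds: int, default: int = 60, cap: int = 3600) -> int:
    """Schedule the next DNS SRV lookup from the record TTL.

    Falls back to a default interval for missing or zero TTLs and caps
    very large TTLs; both bounds are illustrative.
    """
    if ttl_seconds <= 0:
        return default
    return min(ttl_seconds, cap)
```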

Note:

Resolved bugs from 24.2.x and 25.2.1xx have been forward ported to Release 25.2.106.

4.2.13.4 Common Configuration Service Resolved Bugs

Release 25.2.108

Table 4-26 Common Configuration Service 25.2.108 Resolved Bugs

Bug Number Title Description Severity Found In Release
38828770 /nf-common-component/v1/igw/applicationparams API is returning multiple entries during Policy NF upgrade causing pod restarts on audit service of PCF During an in-service upgrade of PCF, the Gateway Services endpoint /nf-common-component/v1/igw/applicationparams returned multiple results, and the audit service could not determine which configuration to use and restarted. 2 25.2.106

Note:

Resolved bugs from 25.2.1xx have been forward ported to Release 25.2.108.

4.2.13.5 NRF-Client Resolved Bugs

Release 25.2.201

Table 4-27 NRF-Client 25.2.201 Resolved Bugs

Bug Number Title Description Severity Found In Release
38788873 Invalid error structure of NRF for NRF notification failures

The error response structure for NRF notification failures was corrected to match the previous behavior in Spring Boot after migration to Micronaut, ensuring consistent exception handling and serialization.

3 25.2.201

Release 25.2.100

There are no resolved bugs in this release.

4.3 Known Bug List

The following tables list the known bugs and associated Customer Impact statements.

4.3.1 ATS Known Bugs

Release 25.2.202

There are no known bugs in this release.

4.3.2 BSF Known Bugs

Release 25.2.200

Table 4-28 BSF 25.2.200 Known Bugs

Bug Number Title Description Customer Impact Severity Found in Release
39021617 500 Internal Error is received in the response code instead of 503 Method Not Allowed

When a request uses an unsupported method, the BSF Management service returns a 500 Internal Server Error instead of the expected 405 Method Not Allowed error.

BSF Management service requests that use an unsupported HTTP method return HTTP 500 instead of HTTP 405 (Method Not Allowed).

The response includes an internal error message about multiple exception handlers, which can mislead users and complicate error handling.

Workaround: None

3 25.2.200
38788026 Duplicate port definition warning is displayed while restarting BSF Management service and Query service During fresh installation, upgrade, and restarts, BSF Management service and Query service display warnings about a duplicate container port definition for the monitoring and metrics port.

These warnings can cause install, upgrade, restart, or rollback operations to fail, which can impact service availability.

During upgrade, Kubernetes can also remove the monitoring and metrics port, because the port is defined twice (one named and one unnamed), which can prevent metrics scraping.

Workaround: None

3 25.2.200

4.3.3 CNC Console Known Bugs

Release 25.2.201

There are no new known bugs for this release.

Release 25.2.200

There are no new known bugs for this release.

4.3.4 cnDBTier Known Bugs

Release 25.2.201

Table 4-29 cnDBTier 25.2.201 Known Bugs

Bug Number Title Description Customer Impact Severity Found in Release
38979458 SEPP-PERF-CNDB: ndbmysqld-3 stuck after rollback from 25.2.200-rc.7 to 25.2.101-GA After upgrading the MySQL Cluster (OCC NDB tier) to a newer build and then performing a Helm rollback to the previously deployed stable build, one MySQL Server pod (ndbmysqld-3) fails to start and remains in CrashLoopBackOff. All other NDB components and the other MySQL Server pods continue running normally.

After a rollback, if this issue occurs, the ndbmysqld pod(s) may become stuck in the CrashLoopBackOff state. This happens because ndbmysqld can fail while attempting to create the local Data Dictionary (DD) from the NDB Data Dictionary.

Workaround:

A fatal georeplication recovery (GRR) must be performed to overcome this issue.

2 25.2.200
38857144 Cluster Disconnect observed when horizontal scaling was performed for ndbappmysqld pods The cluster gets disconnected during the scaling of the ndbappmysqld and ndbmysqld pods during the addition of a geo-redundant site. The cluster becomes disconnected when scaling ndbappmysqld and ndbmysqld pods during the process of adding a geo-redundant site.

Workaround:

The horizontal scaling of ndbappmysqld pods and the addition of cnDBTier geo-redundant sites procedures have been updated to address cluster disconnection issues during scaling operations.

2 25.2.100
38585013 dbtscale_vertical_pvc stuck for ndbapp pod in phase 4 with Waiting for localhost to restart on non-GR setup The dbtscale_vertical_pvc service does not work on sites where replication to other sites has been configured, but only one site has been installed. The dbtscale_vertical_pvc operation does not function correctly on sites configured for replication to additional sites when only a single site has been installed.

Workaround:

Perform the manual maintenance procedure for "Vertical Scaling - Updating PVC" for the affected StatefulSet or Deployment.

For more information, see "Updating PVC Using Helm Upgrade" under "Vertical Scaling" section in Oracle Communications Cloud Native Core, cnDBTier User Guide.

2 25.2.100
38877149 Replication break observed with skip error enabled on site post ndbmysqld and ndbappmysqld pods complete scale down for 15 Min. In previous releases, when the pods were scaled down and then scaled up, replication would come up successfully. However, in this release, replication is going down because the epoch loss exceeds the epochTimeIntervalHigherThreshold during that time window. If the last applied epoch is missing from one of the standby ndbmysqld pods at the source site, and both replication ndbmysqld pods go down, replication resumes without verifying the skip-error logic that checks whether the ndbmysqld pods have been disconnected for longer than the configured threshold. Consequently, no skip-error information is recorded in this scenario.

Workaround:

Perform the georeplication recovery procedure if the replication is broken.

3 25.2.200
38921972 Replication delay 10hrs(36000sec secondsBehindRemote) during the CNCC upgrade. An incorrect epoch value was used during skip error handling, which caused replication to restart from a previous point and reapply some transactions that had already been processed. Replaying transactions that have already been applied results in temporary data inconsistencies between sites.

Workaround:

Before initiating the NF upgrade on any site, ensure that all db-replication-svc pods are restarted across every site.

3 24.2.6
38947690 100% Traffic failure on UDR when we restart one ndbmtd pod Restarting a single ndbmtd (data node) pod results in unexpected cascading restarts across all ndbmtd pods, causing a full (100%) UDR traffic outage. During the initial ndbmtd pod restart, a cluster disconnect was also observed. A cluster disconnect during ndbmtd pod restart activity can disrupt data node synchronization and may result in data inconsistencies across multiple sites. In addition, the cascading ndbmtd restarts can cause a complete UDR traffic outage.

Workaround:

During maintenance operations that may restart ndbmtd pods (for example, platform upgrades, cnDBTier upgrades, rollbacks, or scaling activities), reroute NF traffic to alternate cnDBTier clusters to avoid service impact.

3 25.2.200

4.3.5 CNE Known Bugs

Release 25.2.200

Table 4-30 CNE 25.2.200 Known Bugs

Bug Number Title Description Customer Impact Severity Found in Release
36740199 bmCNE installation on X9-2 servers fail Preboot Execution Environment (PXE) booting occurs when installing Oracle Linux 9 (OL9) based BareMetal CNE on X9-2 servers. The OL9.x ISO UEK kernel installation hangs on X9-2 servers: when booted with the OL9.x UEK ISO, the screen runs for a while and then hangs with the message "Device doesn't have valid ME Interface". BareMetal CNE installation on X9-2 servers fails.

Workaround:

Perform one of the following workarounds:

  • Use the platform-agnostic bmCNE deployment procedure for X9-2 servers from Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
  • Use CNE 24.3.1 or an older version on X9-2 servers.
2 23.4.0

4.3.6 NSSF Known Bugs

Release 25.2.200

Table 4-31 NSSF 25.2.200 Known Bugs

Bug Number Title Description Customer Impact Severity Found in Release
38819395 NSSF nsselection pod restarted with 143 Error Code During 80K TPS NS-Selection Traffic when the connection is recursively broken between mysql and ns-selection, ns-availability pods on site 1 for 10 minutes every 50 minutes The issue occurs when DB connectivity to all pods is disrupted for 10 minutes and then restored for 50 minutes, repeating cyclically for over 12 hours. This pattern is highly unlikely in a real production setup. Additionally, the issue is intermittent, observed in only one environment and not reproducible in another setup.

Customer impact is low, as the scenario is rare and environment-specific. When a restart occurs, traffic resumes normally. During the restart window, only in-flight messages are affected, and only one out of eight NS-Selection pods is impacted, limiting overall service degradation.

Workaround: None

3 25.2.200
38532145 CPU Utilization across ns-selection pods are not equally distributed. CPU utilization variance (~10%) is observed between the highest and lowest utilized NS-Selection pods. Resource consumption is not evenly distributed across all pods.

There is no customer impact. Traffic handling remains stable with zero traffic loss and no service degradation observed.

Workaround: None

3 25.2.200
38238999 oc_oauth_request_failed_cert_expiry Metric not getting pegged. When an OAuth authentication request is rejected because the certificate is expired, the message is correctly rejected as per the validation logic. However, the corresponding metric is not recorded, resulting in a monitoring gap.

Customer impact is low, as message validation and rejection behavior are functioning correctly. The issue is limited to observability, where the related metric is not being pegged.

Workaround: None

3 25.2.200
38621015 If abatementValue is higher than onsetValue, NSSF should reject the overloadLevelThreshold configuration Validation for the overload control API configuration is missing. This can allow incorrect parameter settings, potentially causing overload control to trigger (onset) without proper abatement.

There is potential service degradation risk if overload parameters are misconfigured, which may result in sustained overload control without recovery. However, the issue is configuration-related and avoidable with correct setup.

Workaround: Configure overload control parameters as per the REST API guide, ensuring the abatement value is lower than the onset value to allow proper recovery behavior.

3 25.2.200
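The missing validation described in bug 38621015 above amounts to a single ordering check: overload control can only abate (recover) when abatementValue is lower than onsetValue. The following is an illustrative sketch, not NSSF code; the function name is an assumption:

```python
# Illustrative sketch (not NSSF code) of the missing ordering check for
# an overloadLevelThreshold entry: abatement can never occur unless
# abatementValue is strictly lower than onsetValue.
def validate_threshold(onset_value: int, abatement_value: int) -> None:
    """Reject overloadLevelThreshold settings where abatement can never occur."""
    if abatement_value >= onset_value:
        raise ValueError(
            "abatementValue must be lower than onsetValue: "
            f"onset={onset_value}, abatement={abatement_value}"
        )

validate_threshold(onset_value=80, abatement_value=60)  # accepted
```

A configuration with onset=60 and abatement=80 would raise an error under this check; today NSSF accepts it, which is the gap this bug tracks.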
38621028 [72K TPS Success] [8K TPS Http Reset Stream] NSSF returns 503 for NS-Selection/Availability Success Traffic (72K) - Success Rate Drops to 0.5% for ns-selection and 2.15% for ns-availability traffic In a scenario where reset stream messages are sent for 10% of the traffic, an overall traffic loss of approximately 0.5% is observed.

There is a measurable impact of ~0.5% traffic loss when 10% of incoming traffic consists of reset streams. Outside this scenario, normal traffic handling remains unaffected.

Workaround: None

3 25.2.200
37623199 If an accept header is invalid, NSSF should not send a notification to AMF. It should send 4xx instead of 500 responses to the nsssai-auth PUT and DELETE configuration. NSSF intermittently accepts requests containing an invalid Accept header instead of rejecting them as expected.

There is no impact on traffic or service behavior. Traffic processing and success rate remain unaffected.

Workaround: None

3 25.1.100
37784755 Option not available to change log level for pods "ocnssf-ocpm-config" & "ocnssf-performance" via CNCC and via REST The perf-info microservice does not provide an option to modify the log level dynamically. It is currently fixed at the ERROR level.

There is no impact on customer traffic or call processing, as the perf-info microservice is not part of the live traffic handling or call flow path. Production services remain unaffected.

Workaround: None

3 25.1.100
36552026 KeyId, certName, kSecretName, and certAlgorithm invalid values are not validating in the oauthvalidator configuration. Invalid values configured for keyId, certName, kSecretName, and certAlgorithm in the oauthValidator configuration are currently not being validated by the system. The configuration accepts incorrect or unsupported values without raising validation errors.

There is no impact on live traffic. However, the absence of validation may lead to misconfigurations remaining undetected until runtime verification or certificate usage scenarios occur.

Workaround:

Follow the REST API guide to configure certificate parameters. While configuring the oauthValidator, the operator must ensure that:

  • keyId matches the expected key identifier configured in the certificate.
  • certName corresponds to a valid and existing certificate reference.
  • kSecretName correctly maps to the intended Kubernetes secret.
  • certAlgorithm uses a supported and valid algorithm value.
3 24.1.0
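The manual checks listed in the workaround above can be sketched as a single validation routine. This is illustrative only: the function name, the shape of the known-certificate and known-secret sets, and the supported-algorithm list are assumptions, not the product's actual implementation.

```python
# Hypothetical sketch of the checks the operator must perform manually,
# since the oauthValidator configuration API does not validate them.
SUPPORTED_ALGORITHMS = {"ES256", "RS256"}  # assumption for illustration only

def validate_oauth_validator(cfg: dict, known_certs: set, known_secrets: set) -> list:
    """Return a list of human-readable validation errors (empty if valid)."""
    errors = []
    if not cfg.get("keyId"):
        errors.append("keyId must match the key identifier in the certificate")
    if cfg.get("certName") not in known_certs:
        errors.append(f"certName {cfg.get('certName')!r} does not reference an existing certificate")
    if cfg.get("kSecretName") not in known_secrets:
        errors.append(f"kSecretName {cfg.get('kSecretName')!r} does not map to a Kubernetes secret")
    if cfg.get("certAlgorithm") not in SUPPORTED_ALGORITHMS:
        errors.append(f"certAlgorithm {cfg.get('certAlgorithm')!r} is not a supported algorithm")
    return errors
```

A configuration passing all four checks yields an empty error list; any invalid field produces a corresponding message instead of being silently accepted.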
38843842 [27K TPS Each Site] Deleting All CNDB Pods in All 3 Sites Causes Irrecoverable Replication Breakage (Site 2 → Site 3) "The incident LOST_EVENTS occurred on the source. Message: cluster disconnect" Error, No Auto-Recovered If all CNDB pods across the three GR sites are deleted simultaneously, the replication channel does not automatically re-establish after the pods restart. Manual intervention is required to restore replication.
  • Loss of Group Replication (GR) connectivity between sites.
  • Potential for unexpected or inconsistent responses until replication is restored.
  • This represents a corner-case scenario, as it requires simultaneous deletion of all CNDB pods across all sites, which is rare and highly unlikely under normal operational conditions.

Workaround: Run the GRR (Group Replication Recovery) procedure to manually restore replication connectivity between the sites.

3 25.2.200
38819080 NSSF ns-selection Istio-Proxy pod crashed During 80K TPS NS-Selection traffic When All NSSF Pods Are Deleted on Site 1 When all pods are deleted using "kubectl delete pod --all -n ", one of the sidecar ASM (Aspen Service Mesh) Istio containers crashed.

This behavior is observed only in a corner-case scenario where all pods are forcefully deleted simultaneously, which is not representative of normal production or rolling upgrade operations. The ASM sidecar crash occurs during pod termination, after the pod has already been removed from active service endpoints. As a result, there is no significant impact to customer traffic or service availability. As this happens when all pods are deleted, there is minimal loss because of this crash.

Workaround: None

3 25.2.200
38552515 GW Metrics issues for NSSF 54K Success Scenario and 26K failure [Slice is not configured in PlmnInfo] TPS traffic on single site. A mismatch is observed in oc_ingressgateway_http_responses_total metrics. 503 responses are not consistently pegged. Backend 403 (SNSSAI_NOT_SUPPORTED) is recorded as 500 or error_reason="UNKNOWN". IGW metrics do not accurately reflect actual backend response codes.

There is no impact on traffic or service behavior; backend responses remain correct (403 for invalid slice cases).

The impact is limited to observability, as metrics do not accurately reflect actual response codes.

403 errors may appear as 500, and 503 responses may not be consistently pegged, leading to inaccurate monitoring visibility.

Workaround: None

3 25.2.200
38796537 Encountered 500 error response in NS-Availability call flow during an 80K TPS load at Site 1 In a long run of more than 100 hours, intermittently in some setups, 0.003% of messages are failing with 5xx responses. This is happening in a specific setup; in other setups, this is not observed.

Intermittently, 0.003% message loss.

Workaround: None

3 25.2.200
38901019 NS-selection pods are being deleted one by one at 5-minute intervals, One of the NS-selection pods is utilizing only 1% of CPU after comes up ns-selection pods, success rate dropped 0.003% traffic. NS-selection pods were deleted sequentially at 5-minute intervals. After restart, one of the pods was utilizing only ~1% CPU. During this period, the overall success rate dropped by 0.003% of traffic.

The impact was negligible, with only a very small (0.003%) reduction in success rate. There was no significant service degradation or large-scale traffic loss observed.

Workaround: None

3 25.2.200
39008266 NSSF should reject avail put and patch request if SST type is string. NSSF is expected to reject PUT and PATCH requests when the SST parameter is provided in an incorrect format (e.g., string type). Currently, NSSF accepts requests where the SST parameter is sent with an empty string ("") instead of a valid integer value.
  • Impact occurs only when a peer NF sends an invalid SST value (empty string instead of integer).

  • No impact on live traffic handling or normal service behavior.

  • Valid requests with correctly formatted SST values continue to function as expected.

Workaround: None

3 25.2.200
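The expected type check for bug 39008266 above can be sketched as follows. This is illustrative, not NSSF code; it assumes SST is an 8-bit integer (0-255) as defined by 3GPP:

```python
# Illustrative sketch of the expected check: per 3GPP, SST is an 8-bit
# integer (0-255), so an empty string or any string-typed value should
# be rejected with a 4xx response rather than accepted.
def is_valid_sst(sst) -> bool:
    return isinstance(sst, int) and not isinstance(sst, bool) and 0 <= sst <= 255

assert is_valid_sst(1)
assert not is_valid_sst("")   # currently accepted by NSSF; should be 4xx
assert not is_valid_sst("1")  # string-typed SST should also be rejected
```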
38628736 While doing the scale down to 0 all NSSF deployment, NSSF Pods Enter Error State Before Termination During Scale-Down to 0. During scale-down of all NSSF deployments to zero replicas, several pods briefly enter the Error state before complete termination. Instead of terminating gracefully, pods transition through an Error status during the shutdown process.

There is no impact on live traffic, as this scenario occurs during an intentional scale-down to zero. The issue is limited to pod lifecycle behavior during shutdown and does not affect service functionality when the system is operational.

Workaround: None

4 25.2.200
36653494 If KID is missing in access token, NSSF should not send "Kid missing" instead of "kid configured does not match with the one present in the token" The error response string is not in line with expectations when the Kid does not match. Instead of responding with “Kid does not match,” the response string contains “kid missing.”

Minimal impact, as the error code is correct; only the description string is incorrect.

Workaround: None

4 24.1.0
38941167 [25.2.200]: Dynamic Logging Feature: With commonCfgClient.enabled set to false run time log level of services is still getting updated When commonCfgClient.enabled is set to false, the log level update during runtime must not be allowed, but it is being allowed.

No impact on traffic. Runtime log level update is enabled by default; the log level does not change until a log level change is triggered from the GUI.

Workaround: None

4 25.2.200
38973933 Encountered 500 and 404 error response in NS-Availability Call Flow after site restore In a 3-site GR deployment handling ~27K TPS per site, failover was triggered sequentially for Site 3 and Site 2, resulting in traffic being redirected to Site 1 as the single active site. During this period, replication channels for the failed sites were temporarily broken. Once Site 2 and Site 3 were restored, traffic was redistributed and replication channels were successfully re-established across all sites. During the dual-site outage scenario (traffic converging to one active site), approximately 0.003% of messages were lost.

A very minimal traffic impact was observed, with 0.003% message loss during the scenario where two sites were down and all traffic was handled by a single active site. No prolonged service outage occurred, and full replication and traffic distribution were restored after site recovery.

Workaround: None

4 25.2.200
39015332 LCI header contains NfInstance ID but does not contain serviceInstanceId The LCI header includes the NfInstanceId, but the serviceInstanceId is not present in the header.
  • No impact on traffic handling or service functionality.

  • Absence of the serviceInstanceId in the LCI header may reduce service-level traceability and granular monitoring at the instance level.

  • ServiceInstanceId provides service-level information; however, this information can be clearly inferred from the API URI, thereby minimizing functional impact.

Workaround: None

4 25.2.200

4.3.7 OCCM Known Bugs

Release 25.2.200

There are no known bugs in this release.

4.3.8 OSO Known Bugs

Release 25.2.201

Table 4-32 OSO Known Bugs 25.2.201

Bug Number Title Description Severity Found In Release Customer Impact Workaround
38661250 IPV6 connections are not getting established between prom-svr and NF pods even when the OSO is deployed in IPV6 preferred mode. Targeted pods, endpoints, and services create their emission endpoints using IPv4 even when the cluster is dual stack. OSO should create Prometheus targets with IPv6. 3 25.2.200

Customers can see all targets created with IPv4 even if the cluster is dual stack. There is no functional impact, as OSO supports "PreferredDualstack", indicating that fallback to IPv4 is always acceptable.

No workaround is required.

39143113 Communication for OSO-PROM, OSO-ALM & OSO-APM is using IPv4 address instead of IPv6 When OSO is deployed in a dual-stack cluster, pods connecting from the ALM to the APM service use IPv4 instead of IPv6. This issue does not cause any functional concern. 3 25.2.200 Customers will see all communication between OSO components over IPv4 even if the cluster is dual stack. There is no functional impact, as OSO supports "PreferredDualstack", which means fallback to IPv4 is always acceptable.

No workaround is required.

Release 25.2.200

There are no known bugs in this release.

4.3.9 NRF Known Bugs

Release 25.2.201

There are no known bugs in this release.

4.3.10 Policy Known Bugs

Release 25.2.201

Table 4-33 Policy 25.2.201 Known Bugs

Bug Number Title Description Found in Release Customer Impact Severity
38590212 Observing OCPM_PRE Pod is consuming more than 90% memory usage The PRE pods may exceed 90% memory utilization when the environment contains a large number of policy projects and users perform frequent clone, modify, or move operations between development and production states. Pod logs typically show warnings only; in some cases a pod restart may occur due to OOM (exit code 137).

Elevated memory consumption can lead to pod restarts (OOMKilled) and reduced stability/performance of the policy evaluation service.

Workaround:

Reduce the number of policy project versions (delete older/unused versions). Perform cloning/modification/state-move activities during a maintenance window or low-traffic period. After completing such activities, perform a rolling restart of the PRE pods to return memory usage closer to baseline.

2 24.2.7
38902416 Diameter Connector is printing error log "Timestamp cannot be processed because dateString is: 2025-07-04T09:50Z" during SM performance run During SM performance runs, the Diameter Connector may repeatedly log an error indicating it cannot process a timestamp formatted like
YYYY-MM-DDThh:mmZ
(example shown in the issue). The request flow continues; late-arrival processing does not handle the request, but normal processing proceeds. A fix is expected in the next release.

Continuous error logging may consume resources and could degrade performance over time, although request processing continues.

Workaround:

No workaround is available. Monitor resource usage and logs as needed until the fix is available in a subsequent release.

3 25.2.201
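The timestamp failure in bug 38902416 above can be sketched as follows: a parser that requires seconds rejects the minute-precision form "2025-07-04T09:50Z". The two-format fallback shown here is an illustrative fix under stated assumptions, not the product's planned one:

```python
from datetime import datetime, timezone

# Sketch of the failure mode: a strict parser expecting seconds rejects
# "2025-07-04T09:50Z". Trying the minute-precision format as a fallback
# (an illustrative fix) handles either form.
def parse_ts(ts: str) -> datetime:
    for fmt in ("%Y-%m-%dT%H:%M:%SZ", "%Y-%m-%dT%H:%MZ"):
        try:
            return datetime.strptime(ts, fmt).replace(tzinfo=timezone.utc)
        except ValueError:
            continue
    raise ValueError(f"Timestamp cannot be processed because dateString is: {ts}")

assert parse_ts("2025-07-04T09:50Z").minute == 50
assert parse_ts("2025-07-04T09:50:30Z").second == 30
```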
39061348 OclogId is missing in standalone server response for PA-UPDATE when call failed as 403 REQUESTED_SERVICE_NOT_AUTHORIZED When Security Analytics and Logging (SAL) is enabled and a PA-UPDATE fails with 403 REQUESTED_SERVICE_NOT_AUTHORIZED, the ocLogId may be missing from the server response due to loss of the ID in thread context.

One SAL log entry may be missing ocLogId metadata, which can reduce traceability when correlating logs across services.

Workaround:

No direct workaround. Use other SAL log fields to correlate the call flow where possible until the underlying ocLogId propagation issue is resolved.

3 25.2.201

4.3.11 SCP Known Bugs

Release 25.2.201

Table 4-34 SCP 25.2.201 Known Bugs

Bug Number Title Description Customer Impact Severity Found in Release
39088066 In case producer sending Server Header, SCP adding duplicate server header value while responding back to consumer after all retry exhausted When the producer NF sends a Server header, SCP adds a duplicate Server header value in the response returned to the consumer NF after all retries are exhausted.

A duplicate value is added to the header, but no information is lost, so there is no functional impact.

Workaround:

None

3 25.2.100
39007898 Correct interscp routing for default notification for model-c and model-d In default notification routing using the callback URI, inter-SCP routing might not work in certain scenarios in Model C and Model D. This has low functional impact, as the use case is limited.

Workaround:

None

3 25.1.200
38973405 ocscp_worker_transaction_time_exhausted_total fails to peg randomly causing ATS failure. The ocscp_worker_transaction_time_exhausted_total metric was not consistently pegged, which caused intermittent ATS failures.

In some cases, ocscp_worker_transaction_time_exhausted_total might not be pegged, and instead ocscp_metric_transaction_timeout_total is incremented. This has no functional impact, as either metric indicates that the transaction timer has expired.

Workaround:

None

3 25.2.100
38964285 In Delayed Discovery request, when Timestamp Header feature is enabled, complete budget is being allocate to target-api-root header attempt. In delayed discovery requests, when the Timestamp header feature is enabled, the entire budget is allocated to the target-api-root header attempt.

It has limited impact on delayed discovery when the Timestamp header feature is enabled and no response is received from the target API root.

Workaround:

Overlapping regex should not be configured.

3 25.2.200
38952387 Case mismatch in Localities for INTER SCP cases (USWEST and USWest) is resulting in incorrect population of worker interscp routing maps A case mismatch in localities for inter-SCP scenarios (for example, USWEST and USWest) resulted in incorrect population of worker inter-SCP routing maps.

Impact on inter-SCP routing is affected only in cases of misconfiguration of localities.

Workaround:

None

3 25.2.200
38947469 Getting incorrect error cause when mandatory parameter is missing in Proxy Selection Routing Mode API request An incorrect error cause was returned when a mandatory parameter was missing in the Proxy Selection Routing Mode API request.

No functional impact. Only an incorrect error cause was returned in the configuration response.

Workaround:

None

4 25.2.200
38930373 SCP throwing incorrect error cause when nextHopProxyNfServiceData.nfServiceLoadBasedCongestionControlCfg value is invalid SCP returns an incorrect error cause when the nextHopProxyNfServiceData.nfServiceLoadBasedCongestionControlCfg value is invalid.

There is no functional impact; only the error cause reported in the configuration is incorrect.

Workaround:

Traffic is redistributed to other pods.

4 25.2.200
38920538 SCP behaving differently for patch requests for custom roaming proxy NFs compared with producer NFs SCP behaves inconsistently for PATCH requests on custom roaming proxy NFs compared to producer NFs.

In a race condition where an NF is deregistered, updates from a PATCH request for the same roaming proxy profile may be missed. This is a corner case and has very low functional impact.

Workaround:

The rule will be corrected during the next audit.

3 25.2.200
38872318 PUT API response returns null timestamps The PUT API response returned null values for the timestamp parameters.

There is no functional impact; only the timestamp parameters are not returned in the API response.

Workaround:

None

3 25.2.200
38810553 Static NF Discovery: Incorrect ocscp_nf_setid and ocscp_alternate_route_type when primary NF is unavailable In static NF discovery, incorrect values for ocscp_nf_setid and ocscp_alternate_route_type were reported when the primary NF was unavailable.

There is no functional impact; only an observability impact on the ocscp_nf_setid metric dimension.

Workaround:

None

3 25.2.200
38782868 Routing continues to a deleted NF type when profile deregistration fails Routing continued to a deleted NF type when profile deregistration failed.

Routing may continue based on stale rules when an NF type is deleted but its profile deregistration fails. This has very low functional impact due to the limited use case.

Workaround:

The NF type configuration should not be deleted while NF profiles are still registered.

3 25.2.200
38444671 Partial rules sync at worker due to mismatch in internal maps data A partial rule synchronization occurs at the worker due to mismatches in internal map data.

Rule synchronization on the SCP worker can go out of synchronization with notifications in rare exception cases, such as timeouts during rule synchronization.

Workaround:

None

3 25.2.100
38098107 SCP is Not considering Version and Trailer fields from Jetty response SCP is not considering version and trailer fields from Jetty responses.

It does not have any impact as fields are not currently used.

Workaround:

None

4 25.1.200
38079614 SCP All Services: Remove use of java.util.date and org.joda.time. Use java.time instead because of threadsafety and better method list. SCP services rely on java.util.Date and org.joda.time for date and time handling, which are not thread-safe and lack modern functionality.

It does not have any impact as it is a minor code enhancement.

Workaround:

None

4 25.1.200
38071919 Port is not derived from NFProfileLevelAttrConfig in case of ModelD Notification and SCP does AR using hardcoded port 80 When a Model-D notification is received, the port is not correctly derived from NFProfileLevelAttrConfig, resulting in SCP using a hard-coded port 80 for alternate routing.

The default port 80 is used irrespective of scheme for notification routing. Also, the port and scheme for the profile level FQDN or IP are not considered. The impact is limited to routing of non-default notification messages as part of Model-D.

Workaround:

None

3 25.1.200
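The scheme-aware port derivation that bug 38071919 above says is missing can be sketched as follows. Function and parameter names are illustrative assumptions, not SCP's actual code:

```python
# Hypothetical sketch of scheme-aware port derivation; names are
# illustrative. Per the bug, SCP currently falls back to a hard-coded
# port 80 regardless of scheme when no profile-level port is derived.
def derive_port(profile_port, scheme):
    if profile_port is not None:
        return profile_port                   # prefer the profile-level port
    return 443 if scheme == "https" else 80   # otherwise fall back by scheme

assert derive_port(None, "https") == 443  # the bug: SCP uses 80 here today
assert derive_port(8080, "http") == 8080
```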
38008367 Overlapping regex validation missing for apiSpecificResourceUri in routing config API The routing configuration REST API allows overlapping regex patterns in the apiSpecificResourceUri field, leading to ambiguous routing when a request matches multiple patterns. There is conflicting routing config set selection in case of overlapping regex in apiSpecificResourceUri.

Workaround:

Overlapping regex should not be configured.

3 25.1.100
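The ambiguity in bug 38008367 above can be illustrated with two made-up overlapping patterns that both match the same request URI, so routing config set selection becomes nondeterministic:

```python
import re

# Two hypothetical apiSpecificResourceUri patterns (not taken from any
# real configuration) that overlap: both match the same request URI.
patterns = [r"/nudm-sdm/v2/.*", r"/nudm-sdm/.*/sdm-subscriptions"]
uri = "/nudm-sdm/v2/sdm-subscriptions"
matches = [p for p in patterns if re.fullmatch(p, uri)]
assert len(matches) == 2  # both routing config sets claim the request
```

Because the API accepts both patterns without a conflict check, the workaround is operational: avoid configuring overlapping expressions in the first place.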
37995299 SCP not able to delete foreign SCP routing details post deregistration When a foreign SCP profile is unregistered, SCP fails to remove the associated routing details for certain profiles. Some foreign SCP routing rules are not cleared if nfsetId is updated.

Workaround:

None

3 25.1.200
37970295 Worker pod restart observed due to coherence timeout when single cache pod is used When increasing the number of worker pods from 1 to 23 with only one cache pod in use, worker pods restart due to coherence timeout. It does not have any impact as SCP redeployment is required to update nfsetid and not a recommended change.

Workaround:

None

3 25.1.200
37969345 topologysourceinfo REST API is not case sensitive for nfType When updating the Topology Source of an NF Type from LOCAL to NRF using the PUT method, the REST API successfully processes the request without errors, but SCP triggers an on-demand audit with nfType=udm, resulting in empty NF responses. The REST API with a case not matching the 3GPP specified NFType would result in an empty response.

Workaround:

Provide NFType as per the 3GPP standard.

3 23.4.0
37949191 ocscp_metric_nf_lci_tx_total metric is incrementing even when no LCI headers are received from peer NFs The ocscp_metric_nf_lci_tx_total metric incorrectly increments even when no LCI headers are received from peer NFs.

It has a minor observability impact.

Workaround:

None

3 25.1.200
37887650 SCP-Worker restart observed with traffic feed enabled with 2 trigger points when Traffic exceeds 7K req/sec When traffic feed is enabled with two trigger points, the SCP-Worker crashes if traffic exceeds 7K requests per second.

The SCP-Worker pod restarts when the traffic feed requests are overloaded.

Workaround:

Traffic is redistributed to other pods.

3 25.1.200
37886252 High-cardinality metrics were observed on the SCP while running traffic at full capacity High memory consumption in OSO is observed during the traffic run at 730K MPS, mainly due to high-cardinality samples generated by SCP.

Retrieving metrics from OSO is slow because of a large number of samples.

Workaround:

None

3 25.1.100
37622431 Audit failures observed during overload situation when traffic is operating at maximum rated capacity and surpasses the pod limits by 50% When traffic is operating at maximum rated capacity and exceeds the pod limits by 50%, audit failures are observed while SCP is in the overload condition.

In overload conditions, SCP-Worker pod protection mechanism discards some of the internally generated NRF audit requests.

Workaround:

Audit is periodic in nature and eventually successful when the overload condition subsides.

3 25.1.100
37575057 Duplicate Routing when producer responses with location header in 3xx cases SCP performs duplicate routing when the producer NF responds with the location header in 3xx cases.

SCP sends requests to the producer NF again if the producer NF in the redirect URL and in the alternate routing rules is the same.

Workaround:

None

3 25.1.100
37554502 SCP worker pod restart with overload errors observed on newly spawned pods after 25% or 50% of the SCP-worker pods goes into a restart state Newly spawned SCP-worker pods restart and show overload errors after 25% or 50% of the SCP-worker pods enter a restart state.

The SCP-Worker pod occasionally restarts due to a startup probe failure when it cannot retrieve configuration during startup. This issue occurs only during startup, so there is no functional impact because the pod has not started handling traffic.

Workaround:

Pod recovers after the restart when it is able to get configuration.

3 25.1.100
36757321 Observed 429's due to pod overload discards during upgrade and rollback During an upgrade from SCP 24.1.0 to 24.2.0, five worker nodes consumed more than six vCPUs while handling 60K MPS, resulting in the generation of 429 errors.

Some discards might be observed during an upgrade in case of bursty traffic due to the SCP-Worker pod protection mechanism.

Workaround:

It is recommended to perform an upgrade during low traffic rate to avoid pod overload.

3 24.2.0

4.3.12 SEPP Known Bugs

Release 25.2.200

Table 4-35 SEPP 25.2.200 Known Bugs

Bug Number Title Description Customer Impact Severity Found In Release
38278479 Getting "No https instances configured" intermittently in PLMN EGW logs

The error message "No https instances configured" appears intermittently in the PLMN EGW logs during normal operations.

Perform the following steps to reproduce this known issue:

  1. Register the NIF in NRF.
  2. Enable the NIF feature on SEPP.
  3. Run traffic mix at 1,000 requests.
  4. The following error appears in the PLMN EGW logs:
    errorReason='No https instances configured', status='500 INTERNAL_SERVER_ERROR'
    errorCause: errorStackTrace: ocpm.cne.gateway.filters.sbi.util.SbiRoutingRulesEngine.seppDisabledProcessing(SbiRoutingRulesEngine.java:602)
Egress Gateway is intermittently attempting to route messages to an HTTPS peer, even though no HTTPS peer is configured.

Workaround:

None

3 25.2.100
38257593 pn32f memory usage keep on increasing at 550 TPS with cat3 time check enabled The memory usage on PN32F keeps increasing with 550 TPS when the Cat-3 Time Check feature is enabled. Memory usage is constantly increasing.

Workaround:

Disable the Cat-3 Time Check feature if it is not in use.

3 25.1.200
39144294 NRF heartbeat are send with delays from SEPP to the NRF

The SEPP sends NRF heartbeat refreshes later than expected. When nfHeartbeatRate is set to 60 and NRF heartBeatTimer is set to 60s (which should result in an interval of approximately 36s), the heartbeats are delayed by more than 60s, causing a drift from the configured schedule.

NRF frequently marks the SEPP instance as Suspended, which poses a risk of deregistration, service discovery failures, and potential traffic disruptions involving the SEPP.

Workaround:

Mitigate the issue by lowering the nfHeartbeatRate (sending heartbeats earlier than expiry) or increasing the NRF heartBeatTimer. Ensure strict time synchronization (NTP) and adequate CPU/memory resources to prevent pod throttling in nrf-client-nfmanagement. If the scheduler stalls, consider restarting the pod and enabling debug logs to verify timing.

3 25.2.100
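The expected schedule in bug 39144294 above works out as simple arithmetic, assuming nfHeartbeatRate is interpreted as a percentage of the NRF heartBeatTimer (the 60 / 60 s, approximately 36 s figures quoted imply this reading). This is an illustrative calculation, not product code:

```python
# Illustrative arithmetic, assuming nfHeartbeatRate is a percentage of
# the NRF heartBeatTimer: the refresh interval is rate% of the timer.
def heartbeat_interval(heartbeat_timer_s, nf_heartbeat_rate_pct):
    return heartbeat_timer_s * nf_heartbeat_rate_pct / 100.0

assert heartbeat_interval(60, 60) == 36.0  # expected refresh interval
assert heartbeat_interval(60, 50) == 30.0  # mitigation: heartbeat even earlier
```

Lowering nfHeartbeatRate shortens the interval, which is why the workaround suggests it as a buffer against the observed scheduling drift.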
38818815 SEPP-PERF: Illegal character in authority at index 7 Error printed at n32-ingress-gateway microservice during long performance run The scheduled PerfInfo task on the n32-ingress-gateway constructs the URI for the perf-info endpoint using a templated host http://<release-name>-sepp-perf-info:5905/load. During extended performance runs, the <release-name> resolves to an empty value, resulting in a hostname that starts with a "-" character. This leads to the java.net.URI class throwing an "Illegal character in authority at index 7" error. Consequently, the service logs repeated ERRORs during prolonged performance runs. Perf metrics collection and polling from the n32-ingress-gateway fail during long performance runs. Repeated ERROR logs increase noise in the logs and trigger log rotation, resulting in minor resource overhead. However, call processing and functionality are not impacted.

Workaround:

  1. Override PerfInfo Endpoint URL: Update the sepp.yaml file to explicitly define the PerfInfo endpoint with a valid DNS name, such as sepp-perf-info.<namespace>.svc:5905, instead of using the templated format “<release-name>-sepp-perf-info”.
  2. Disable Jaeger Tracing (if not needed): If Jaeger tracing is not necessary, temporarily disable it by setting the jaeger.enabled field to false in the custom-values.yaml file.
3 25.2.200
39138384 SEPP does not remove x-next-hop-authority header for request towards Remote PLMN SEPP forwards internal routing headers (such as x-next-hop-authority and other x-* headers) to the remote PLMN on N32-f/N32-c. When attempting to globally remove the x-next-hop-authority header at N32-EGW, the N32-c handshake fails because EGW routing for routeId n32c depends on that header, resulting in errors. There is a potential security exposure of internal routing and topology hints to the peer, along with interoperability issues with the partner SEPP. Stripping the x-next-hop-authority header too early can lead to N32-c handshake and routing failures, blocking the interconnect setup.

Workaround:

None

3 25.1.201
39137537 Errors being reported in SEPP plmn egw pod logs intermittently SEPP does not cancel or time out stalled HTTP/2 responses. When a client advertises INITIAL_WINDOW_SIZE=0 and never sends a WINDOW_UPDATE, the response hangs indefinitely (for more than 30 seconds), causing the H2_defense_response_timeout check to fail.

There is a risk of resource exhaustion or DoS (streams and memory being held open), leading to degraded capacity and latency, as well as potential screening test failures during security assessments.

Workaround:

None

3 25.1.201
39131855 Conformance HTTP2: request with non-conforming pseudo-headers not considered as malformed On both SBA and N32 interfaces, SEPP does not treat HTTP/2 requests with missing mandatory pseudo-headers as malformed. Missing :scheme results in a 404 (N32fContext Not Found), and missing :method causes no response. According to RFC 7540/9113, such requests should be reset with a PROTOCOL_ERROR (RST_STREAM). There are interoperability issues with strict peers, potential resource hangs when no response is sent, and incorrect error signaling that could mask malformed or attack traffic during security assessments.

Workaround:

None

3 25.1.201
39126163 SEPP does not refuse a CONNECT method SEPP currently does not immediately reject HTTP/2 CONNECT requests; instead, it returns a 408 Request Timeout. CONNECT requests must be explicitly refused to prevent tunneling or backdoor behavior. Accepting CONNECT requests can enable tunneling misuse, posing a security risk and potential policy non-compliance.

Workaround:

None

3 25.1.201
38943638 OCSEPP 25.1.202 Roaming Hub SEPP sending plmnIdList with null value in the n32c-handshake In OCSEPP 25.1.202, the Roaming Hub SEPP sends the plmnIdList as null during the N32-c capability exchange handshake, which violates 3GPP TS 29.573 because the specification requires an array type. There is no impact on the handshake or N32-f traffic.

Workaround:

None

3 25.1.202
39160060 SEPP-PERF: High 503 Errors and increased processing time observed after restart scenario of ocsepp-plmn-ingress-gateway pod at SEPP_25.2.200-rc.3 build After restarting the ocsepp-plmn-ingress-gateway on SEPP 25.2.200, the Ingress Gateway (IGW) begins returning sustained 503 Service Unavailable responses, accompanied by significantly higher request processing latency. Logs reveal scheduler exhaustion and task rejections (TaskRejectedException/RejectedExecutionException, “Scheduler unavailable”), indicating that the IGW ThreadPoolTaskExecutor/Reactive scheduler is saturated when routing to cn32f. A high rate of client requests fails (HTTP 503, OSEPP-IGW-E051), resulting in traffic drops. End-to-end latency increases for successful requests passing through the Ingress Gateway. Service stability remains degraded for hours after the restart, particularly under load.

Workaround:

None

3  
38462282 SEPP-PERF: 406 error code observed due to cat3 failure at SEPP-25.2.100-rc.2 during SEPP Performance run During 40K MPS performance testing on SEPP 25.2.100-rc.2, a small but consistent portion of traffic receives HTTP 406 errors due to SCM Category-3 (Cat-3) failures. The majority of other failures are caused by rate limiting, while pods remain stable and latencies remain normal. A slight drop in success rate (≈0.005% of calls rejected with HTTP 406 due to Cat-3) occurs under high overall load. This may manifest as sporadic client errors in specific flows evaluated by Cat-3.

Workaround:

None

3 25.2.100
39074274 Hosted SEPP: wrong error code is generated when Remote SEPP set is created with missing "hostedRemotePartnerSet" parameter. When a Remote SEPP set is created with the "hostedRemotePartnerSet" parameter missing from the payload, a 500 Internal Server Error is returned in the response, whereas a 4xx error code is expected. An incorrect error code is returned in the response.

Workaround:

None

3 25.2.200
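The failure mode in bug 38818815 above is a templated hostname that renders with an empty <release-name>, producing a label that starts with "-", which java.net.URI rejects. The following Python sketch (hypothetical helper names; the actual microservice is Java-based) illustrates a defensive guard that validates the rendered host and falls back to the namespace-qualified service DNS name described in the workaround:

```python
import re

# RFC 1123 hostname label: alphanumeric, hyphens allowed only internally.
LABEL = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$")

def valid_hostname(host: str) -> bool:
    """Return True when every dot-separated label is RFC 1123 compliant."""
    return bool(host) and all(LABEL.match(label) for label in host.split("."))

def perf_info_url(release_name: str, namespace: str) -> str:
    """Build the perf-info URL, refusing a host produced by an empty release name."""
    host = f"{release_name}-sepp-perf-info" if release_name else ""
    if not valid_hostname(host):
        # Fall back to the namespace-qualified service name (workaround step 1).
        host = f"sepp-perf-info.{namespace}.svc"
    return f"http://{host}:5905/load"
```

With this guard, an empty release name yields a resolvable fallback URL instead of an authority that the URI parser rejects at runtime.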

SEPP 25.2.200 Gateway Known Bugs

Table 4-36 SEPP 25.2.200 Gateway Known Bugs

Bug Number Title Description Customer Impact Severity Found in Release
36263009 PerfInfo calculating ambiguous values for CPU usage when multiple services mapped to single pod In the cgroup.json file, multiple services are mapped to a single endpoint, causing ambiguity in the calculation of CPU usage. This affects the accuracy of the overall load calculation. The overall load calculation is incorrect, which may lead to inaccurate information regarding the system load.

Workaround

None

3 23.4.1
36614527 [SEPP-APIGW] Overload Control discard policies not working with REST API and CNCC The default values for the Overload Control discard policies cannot be edited or changed. An error message, "ocpolicymapping does not contain this policy name," is displayed when attempting to save the configuration. The same behavior is observed when using the REST API. The user is unable to edit the Overload Control discard policies through the CNC Console.

Workaround

Helm configuration can be used to configure the Overload Control discard policies.

3 24.2.0
38707511 NRF client alternate routing not working In the nrf-client, both primary and secondary NRF addresses are set. When the primary NRF goes down and a 503 gateway exception is configured, messages are not being re-routed to the secondary NRF as expected. The alternate NRF is not selected when the primary NRF is down or unreachable.

Workaround

None

2 25.2.200
38709149 NRF client nfmanagement service logging Rest API not working The REST API for changing the nrf-client log level is not functioning as expected. When the nrf-client log level is changed using the REST API, the change does not take effect.

Workaround

To update the log level of the nrf-client, modify the log level configuration through Helm and then perform a Helm upgrade.

3 25.2.200
38810446 [SEPP-APIGW] Missing ignoremaxresponsertime & sbiRoutingWeightBasedEnabled under metadata in EGW CNCC screen The ignoremaxresponsertime and sbiRoutingWeightBasedEnabled metadata fields are missing in the EGW CNCC screen. Specifically, the ignoremaxresponsertime header is absent under metadata in the CNCC screen. Since the behavior of the SBITimer feature varies based on the values (false, true, or absent) of this metadata over EGW, this omission prevents proper configuration when users edit or configure routes using the CNC Console. Additionally, the sbiRoutingWeightBasedEnabled field is missing, which is required by SEPP. The SBITimer and NRF route automation functionality over EGW are impacted, and as a result, the feature may not work correctly.

Workaround

Both the SBITimer and NRF route automation functionality can be configured through Helm.

3 25.2.200-beta.2
38810483 [SEPP-APIGW] No support for header based predicate under EGW Routesconfiguration in CNCC screen There is no support for header-based predicates under the Egress Gateway Routes configuration in the CNC Console screen (Gateway Services release 25.2.106). Users are impacted when configuring routes using REST if a header name is used as a filter.

Workaround

The header-based predicates can be configured using Helm.

3 25.2.200-beta.2
38769835 [SEPP-IGW] SBITimer Feature: Misleading logger in the IGW, when request contains 3gpp-Sbi-Origination-Timestamp and 3gpp-Sbi-Sender-Timestamp but no 3gpp-Sbi-Max-Rsp-Time A misleading log is printed in the IGW logs when a request contains 3gpp-Sbi-Origination-Timestamp and 3gpp-Sbi-Sender-Timestamp, but no 3gpp-Sbi-Max-Rsp-Time.

Current Logger Message: "message": "Sender/Origin headers are null. Using global or route timeout."

Expected Behavior:

The log should indicate that the 3gpp-Sbi-Max-Rsp-Time is missing in the request, which is why the global or route timeout is being used. It should not incorrectly suggest that the Sender or Origin headers are missing.

Users can be misled when reviewing the logs.

Workaround

None

3 25.2.200-beta.2
38769908 [SEPP-IGW]SBI Timer Feature: Issues related to validation of sbiTimer Headers 3gpp-Sbi-Max-Rsp-Time and 3gpp-Sbi-Sender-Timestamp The validation for the SBITimer headers has the following issues:
  1. Incorrect Error for 3gpp-Sbi-Max-Rsp-Time: When the value for the 3gpp-Sbi-Max-Rsp-Time header is incorrect (e.g., a string value instead of a valid number), a 500 Internal Server Error is returned to the user. Expectation: A 400 Bad Request should be returned instead of a 500 error for invalid header values.
  2. Incorrect Handling of 3gpp-Sbi-Sender-Timestamp: If the 3gpp-Sbi-Sender-Timestamp header contains an invalid value (for example, a string value), the request is rejected with a Late Arrival Error, which seems more appropriate for a timing issue rather than an invalid value scenario. Expectation: The request should be rejected with a Bad Request error, as the issue is with the invalid value in the header.
  3. Successful Call with Invalid 3gpp-Sbi-Sender-Timestamp: If a request does not contain the 3gpp-Sbi-Max-Rsp-Time header but contains an invalid value for the 3gpp-Sbi-Sender-Timestamp header, the call passes successfully. Expectation: If the 3gpp-Sbi-Sender-Timestamp header has an invalid value, the request should be rejected with a 400 Bad Request error, even if 3gpp-Sbi-Max-Rsp-Time is absent.
These issues lead to inconsistent and misleading behavior in the validation and error handling process.

Workaround

None

3 25.2.200-beta.2
38769987 [SEPP-IGW] oc_ingressgateway_sbitimer_timezone_mismatch gauge metrics once pegged does not reset The oc_ingressgateway_sbitimer_timezone_mismatch gauge metric, once pegged, does not reset back to 0. The incorrect metric values can mislead the user.

Workaround:

None

3 25.2.200-beta.2
38771574 [SEPP-IGW] SBITimer Feature Enabled- sbiTimerTimezone related issues

Three Problems related to sbiTimerTimezone

  1. The Gateway has the time zone configured as ANY and the request contains a PDT time zone. As per the requirement, processing should happen in PDT time; instead, the current time is taken in GMT, the sender time is also converted into GMT, and processing is done in GMT. If set to ANY, the timeout calculation should be made as per the time zone specified in the request (that is, the current time should be calculated in that time zone; for example, if the time zone is PDT, the current time in PDT is used).
  2. The Gateway has ANY configured and the request has no time zone. In this case, a Late Arrival error occurs, which does not seem appropriate when the time zone cannot even be identified.
  3. The Gateway has GMT configured and the request contains PDT. The expected behavior is unclear; currently, the PDT sender time is converted into GMT and then processed.
This parameter value can be either GMT or ANY. If set to GMT, GMT should be specified in the 3gpp-Sbi-Sender-Timestamp or 3gpp-Sbi-Origination-Timestamp headers; if no time zone is specified, GMT is assumed. However, for point 3, where the request contains PDT, it is not clear how processing should be done.
There is no customer impact.

Workaround:

None

3 25.2.200-beta.2
38769780 [SEPP-IGW]With SbiTimer Feature enabled, Igw parses 3gpp-Sbi-Sender-Timestamp header even without millisecond granularity As per 3GPP TS 29.500, the 3gpp-Sbi-Sender-Timestamp header contains the date and time (with millisecond granularity) at which an HTTP request or response is originated. However, with the current gateway implementation, if the header does not contain milliseconds in the request, the header is still parsed, and processing is done.

Expectation: The millisecond granularity should be compulsory; if a request does not include the millisecond part in the header value, that request must not be processed.
There is no customer impact.

Workaround:

None

3 25.2.200-beta.2
38973542 SPAD-EGW [25.2.108]: ~ 13% additional Mem usage and ~4% additional CPU in egress 25.2.108 build when compared with previous builds like 25.2.107 egress build. The Egress 25.2.108 build exhibits higher CPU and memory usage compared to the 25.2.107 build when processing the same traffic profile call flow. CPU or memory usage is high.

Workaround:

None

2 25.2.200-rc.1
38973910 [SPAD-APIGW] : 25.2.108: ASM : More than 10% increase in CPU and Memory compared to earlier builds(25.2.107) during performance run of IGW. There is a more than 10% increase in CPU and memory usage in the 25.2.108 build compared to the 25.2.107 build during the performance run of the Ingress Gateway. Memory usage is increased.

Workaround:

None

2 25.2.200-rc.1
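Bug 38769780 above concerns the 3gpp-Sbi-Sender-Timestamp header being accepted without millisecond granularity. The following Python sketch shows a strict parser that rejects a value lacking the millisecond part; the exact timestamp layout is an assumption based on the HTTP-date-with-milliseconds format described in 3GPP TS 29.500, and the gateway itself is Java-based.

```python
from datetime import datetime, timezone

# Assumed layout, e.g. "Tue, 04 Feb 2020 08:49:37.845 GMT":
# an HTTP date extended with a mandatory millisecond part.
FMT = "%a, %d %b %Y %H:%M:%S.%f GMT"

def parse_sender_timestamp(value: str) -> datetime:
    """Strictly parse 3gpp-Sbi-Sender-Timestamp; raise ValueError when the
    millisecond part is absent, instead of silently processing the request."""
    return datetime.strptime(value, FMT).replace(tzinfo=timezone.utc)
```

With this strictness, a value such as "Tue, 04 Feb 2020 08:49:37 GMT" raises ValueError rather than being parsed, which matches the expected behavior stated in the bug.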

4.3.13 Common Services Known Bugs

4.3.13.1 Alternate Route Service Known Bugs

Release 25.2.1xx

There are no known bugs in this release.

4.3.13.2 Egress Gateway Known Bugs

Release 25.2.1xx

Table 4-37 Egress Gateway 25.2.1xx Known Bugs

Bug Number Title Description Customer Impact Severity Found in Release
39049678 Improve logging when catching NPE during Jetty Bean creation when TLS disabled deployment The following error log is observed, indicating a NullPointerException (NPE) during web bean creation in REST API mode installation:
{"instant":{"epochSecond":1772635550,"nanoOfSecond":783737564},"thread":"pool-13-thread-1","level":"ERROR",
"loggerName":"ocpm.cne.gateway.util.WebClientRoutingFilterBeanManager",
"message":"Cannot invoke \"ocpm.cne.gateway.ssl.extension.ReloadableX509KeyManager.getDefaultKeyManager()\" because \"this.reloadableX509KeyManager\" is null",
"endOfBatch":false,"loggerFqcn":"org.apache.logging.log4j.spi.AbstractLogger","threadId":82,"threadPriority":5,
"messageTimestamp":"2026-03-04T14:45:50.783+0000","ocLogId":"","xRequestId":"","pod":"","processId":"1","instanceType":"prod","egressTxId":""}
        at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:163)
        at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:156)
        at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:134)
        at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:434)
        at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:367)
        at com.oracle.common.scheduler.ReloadConfig.reloadProperties(ReloadConfig.java:217)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:568)
        at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:73)
        at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:43)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
        at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
        at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base/java.lang.Thread.run(Thread.java:842)

Observability might be impacted due to an unexpected error log during installation.

Workaround:

None

3 25.2.108
39123626 occnp_oc_egressgateway_outgoing_ip_type is missing dimension DestinationHost In PCF 25.2.200 with Gateway Services 25.2.109, the DestinationHost dimension is missing from the occnp_oc_egressgateway_outgoing_ip_type metric, although it was available in PCF 25.2.200 with Gateway Services 25.2.108.

The issue affects observability in error scenarios and system performance.

Workaround:

None

3 25.2.109
39083890 EGW logs for message "HTTP response body is empty" doesn't contains ocLogId When an error response is received from a peer NF, PCF EGW generates logs that do not include ocLogId, causing those logs to be missed when filtering by ocLogId.

Log Snippet/Metrics used:

{"instant":{"epochSecond":1773547547,"nanoOfSecond":325432825},"thread":"egw-app-thread9","level":"WARN","loggerName":"ocpm.cne.gateway.pcf.filters.SubActLogGatewayFilterFactory","message":"HTTP response body is empty.","endOfBatch":false,"loggerFqcn":"org.apache.logging.slf4j.Log4jLogger","threadId":142,"threadPriority":5,"messageTimestamp":"2026-03-15T04:05:47.325+0000","ocLogId":"","xRequestId":"","pod":"ocpcf-occnp-egress-gateway-6c96788594-xdw8p","processId":"1","instanceType":"prod","egressTxId":"egress-tx-1984554961"}

Debugging is impacted because not all logs include ocLogId.

Workaround:

None

3 25.2.108
39088228 Host dimension in Egress gateway response metrics still has cardinality explosion In the Egress Gateway 25.2.109 test release, the Host dimension is still included in the Egress Gateway response metrics, which continues to cause a cardinality explosion.

Impacts observability in error scenarios and affects system performance.

Workaround:

None

3 25.2.109
37751607 Egress gateway throwing NPE when trying to send oauth token request to "Default NRF Instance" when unable to find NRF instance to forward the request Egress Gateway failed to send requests to the configured primaryNrfApiRoot and secondaryNrfApiRoot endpoints specified in the configmap. Subsequently, it attempted to send an OAuth2 token request to the default NRF instance at http://localhost:port/oauth2/token, but this request also failed. Egress Gateway displayed a NullPointerException.

This issue occurs only when an invalid host and port are provided, for example, when the port is specified as the string "port" instead of a numeric value such as 8080.

Workaround:

You must provide the valid host and port for the NRF client instance.

3 25.1.200
38504941 EGW/IGW should include LCI header when the current load is less than or equals to the difference between previously reported load and configured LoadThreshold value Ingress Gateway and Egress Gateway do not include the LCI header when the current load is less than or equal to the difference between the previously reported load and the configured LoadThreshold.

This impacts the consumer NF's traffic load decisions because the LCI information is not shared when the current load is less than or equal to the difference between the previously reported load and the configured LoadThreshold.

Workaround:

None

3 25.2.102
38304085 EGW is not Validating 3gpp-sbi-message-priority Header parameters in case of POP25 and Overload Egress Gateway does not validate the 3gpp-sbi-message-priority header parameters in pod protection overload scenarios.

This validation issue causes the feature to malfunction when invalid values are received.

Workaround:

The consumer NF should send valid values in the header to avoid any malfunctioning.

3 25.2.100
38294514 Observed NPE during oauth-acess-request message when "nrfClientQueryEnabled" flag enabled An NPE is observed during the oauth-access-request message when the nrfClientQueryEnabled parameter is enabled.

Due to Null Pointer Exception (NPE), the OAuth access token request does not reach the NRF, and more calls fail because the OAuth token request is failing.

Workaround:

None

3 25.2.100
38279961 "oauthDeltaExpiryTime" functionality not working during traffic run. Sometimes EGW requests the NRF oauthtoken even though "oauthDeltaExpiryTime" has not expired. The oauthDeltaExpiryTime functionality does not work during a traffic run: Egress Gateway requests an NRF OAuth token before the configured oauthDeltaExpiryTime expires.

There is no traffic impact because token request processing occurs before timerExpiry.

Workaround:

None

3 25.2.100
38778598 occnp_oc_egressgateway_outgoing_ip_type metric updated for IPv4 in IPv6 preferred dual stack deployment even DNS removed IPv4 address from DNS response for NRF

In an IPv6 preferred dual stack deployment, the occnp_oc_egressgateway_outgoing_ip_type metric is updated for IPv4 even when DNS removes the IPv4 address from the DNS response.

Incorrect information about active connections is provided when DNS records change from IPv4 to IPv6 or vice versa, even when the old connections have already terminated.

Workaround:

None

3 25.2.104
38810446 Missing ignoremaxresponsertime & sbiRoutingWeightBasedEnabled under metadata in EGW CNCC screen In the Egress Gateway CNC Console screen, the ignoremaxresponsertime and sbiRoutingWeightBasedEnabled metadata fields are missing, so these fields cannot be included when the route is configured or edited. This affects SBITimer and NRF route automation functionality over Egress Gateway and may prevent the feature from working.
  • Load sharing is not supported among Producer NFs because the default setting for sbiRoutingWeightBasedEnabled is false.
  • The SBI Timer feature cannot be used for plmn-egw unless IgnoreMaxRspTimeHeader is explicitly set to false in the route configuration.

Workaround:

Configure the routes using REST APIs instead of using the CNC Console.

3 25.2.106
38810483 No support for header based predicate under EGW Routesconfiguration in CNCC screen The CNC Console screen does not support a header-based predicate in Egress Gateway route configuration, so routes cannot be configured that use a header name as a filter.

The NRF route configuration is affected because it relies on a header-based predicate at plmn-egw. This may impact inter-PLMN NRF requests that pass through SEPP.

Workaround:

None

3 25.2.106
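The LCI reporting condition from bug 38504941 above can be sketched as a simple threshold check. This is an illustrative Python function with a hypothetical name and signature, not the gateways' actual implementation: a new LCI header should be included when the load has moved by at least LoadThreshold from the last reported value in either direction, including the decrease case the bug describes.

```python
def should_include_lci(current: float, last_reported: float, threshold: float) -> bool:
    """Include the LCI header when the load has moved by at least `threshold`
    from the previously reported value. The decrease branch
    (current <= last_reported - threshold) is the case the gateways miss."""
    return abs(current - last_reported) >= threshold
```

The symmetric abs() form makes the boundary explicit: a drop exactly equal to the threshold triggers a new report, just as an equal-sized increase does.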
4.3.13.3 Ingress Gateway Known Bugs

Release 25.2.1xx

Table 4-38 Ingress Gateway 25.2.1xx Known Bugs

Bug Number Title Description Customer Impact Severity Found in Release
35526243 Operational State change should be disallowed if the required pre-configurations are not present Currently, the operational state at Ingress Gateway can be changed even if the controlledshutdownerrormapping and errorcodeprofiles configurations are not present, which means the required action of rejecting traffic will not occur. There must be a pre-check for these configurations before the state change is allowed; if the pre-check fails, the operational state should not be changed.

Requests will be processed by Gateway Services when they are supposed to be rejected.

Workaround:

None

3 23.2.0
38405814 Post_rollback_SM_Validation fails at alternate-route logging level validation The alternate-route logging level values are mismatched.

It has no impact because it is not a production use case. The log level is not changed from WARN to DEBUG.

Workaround:

None

3 25.2.100
38310333 In TLS setup when IGW rejected with 401 then IGW Request/Response Latency metrics are not updated In a TLS setup, when Ingress Gateway rejects a request with HTTP 401, the Ingress Gateway request and response latency metrics are not updated.

It has observability impact because the latency metric is not being updated.

Workaround:

None

3 25.2.100
38293511 IGW is not Validating 3gpp-sbi-message-priority Header parameters in case of POP25 and Overload Ingress Gateway does not validate the 3gpp-sbi-message-priority header parameters in the pod protection overload scenarios.

This validation issue causes the feature to malfunction when invalid values are received.

Workaround:

The consumer NF should send valid values in the header to avoid any malfunctioning.

3 25.2.100
38181400 NPE seen in one of the IGW pod during pod initialization In Ingress Gateway 25.1.203, an NPE occurs in one of the Ingress Gateway pods during initialization in an idle state when no traffic is sent.

Due to Null Pointer Exception (NPE), the OAuth access token request does not reach the NRF, and more calls fail because the OAuth token request is failing.

Workaround:

None

4 25.1.203
37986338 For XFCC header failure case "oc_ingressgateway_http_responses_total" stats are not updated When Ingress Gateway is deployed with XFCC header validation enabled in a three-route configuration (for create, delete, and update operations) and traffic is sent without the XFCC header, Ingress Gateway rejects the traffic due to the XFCC header validation failure. However, the oc_ingressgateway_http_responses_total metric is not updated, while the oc_ingressgateway_xfcc_header_validate_total metric is updated.

The metric will not be pegged when the XFCC header validation failure is observed.

Workaround:

None

4 25.1.200
38461465 Sender Attribute should only consist of SEPP-<sepp-fqdn> when additional error logging is enabled in gw logging config When any failure is observed in Gateway Services, the sender attribute format does not align with SEPP requirements when additional error logging is enabled in the Gateway Services logging configuration.

It has observability and debugging impact because it is a formatting issue for SEPP and SCP.

Workaround:

None

4 25.2.100
38769987 oc_ingressgateway_sbitimer_timezone_mismatch gauge metrics once pegged does not reset The oc_ingressgateway_sbitimer_timezone_mismatch metric, once pegged, does not reset to 0 and remains 1 even after a reset is attempted.

There is an observability impact because the metric does not reset.

Workaround:

None

3 25.2.105
38771574 SBITimer Feature Enabled- sbiTimerTimezone related Issues When the Gateway Services time zone is set to ANY, requests that include a PDT time zone are processed in GMT because the current time and sender time are converted to GMT. When ANY is set and the request has no time zone, a late arrival error occurs even though the time zone cannot be identified.

When the configured time zone is ANY and a time zone is not included in the header, an incorrect late arrival error is received instead of a wrong format error. When the configured time zone is GMT and a different time zone is sent, the timestamp interpretation can cause an incorrect late arrival error if the effective times do not match.

Workaround:

Ensure that the configuration and the timestamp in the header align with the configured time zone.

3 25.2.105
38817374 IGW reported NPE during installation when config server is unreachable Ingress Gateway reports an NPE during installation when the config server is unreachable.

Incorrect information is received about connectivity issues between Gateway Services and the config server when the config server is not yet fully up.

Workaround:

None

3 25.2.106
4.3.13.4 Common Configuration Service Known Bugs

Release 25.2.1xx

There are no known bugs in this release.