4 Resolved and Known Bugs
This chapter lists the resolved and known bugs for Network Analytics Suite Release 25.2.1xx.
These lists are distributed to customers with a new software release at the time of General Availability (GA) and are updated for each maintenance release.
4.1 Severity Definitions
Service requests for supported Oracle programs may be submitted by you online through Oracle’s web-based customer support systems or by telephone. The service request severity level is selected by you and Oracle and should be based on the severity definitions specified below.
Severity 1
Your production use of the supported programs is stopped or so severely impacted that you cannot reasonably continue work. You experience a complete loss of service. The operation is mission critical to the business and the situation is an emergency. A Severity 1 service request has one or more of the following characteristics:
- Data corrupted.
- A critical documented function is not available.
- System hangs indefinitely, causing unacceptable or indefinite delays for resources or response.
- System crashes, and crashes repeatedly after restart attempts.
Reasonable efforts will be made to respond to Severity 1 service requests within one hour. For response efforts associated with Oracle Communications Network Software Premier Support and Oracle Communications Network Software Support & Sustaining Support, please see the Oracle Communications Network Premier & Sustaining Support and Oracle Communications Network Software Support & Sustaining Support sections above.
Except as otherwise specified, Oracle provides 24 hour support for Severity 1 service requests for supported programs (OSS will work 24x7 until the issue is resolved) when you remain actively engaged with OSS working toward resolution of your Severity 1 service request. You must provide OSS with a contact during this 24x7 period, either on site or by phone, to assist with data gathering, testing, and applying fixes. You are requested to propose this severity classification with great care, so that valid Severity 1 situations obtain the necessary resource allocation from Oracle.
Severity 2
You experience a severe loss of service. Important features are unavailable with no acceptable workaround; however, operations can continue in a restricted fashion.
Severity 3
You experience a minor loss of service. The impact is an inconvenience, which may require a workaround to restore functionality.
Severity 4
You request information, an enhancement, or documentation clarification regarding your software but there is no impact on the operation of the software. You experience no loss of service. The result does not impede the operation of a system.
4.2 Resolved Bug List
This section provides information on the resolved bugs in Network Analytics Suite products for release 25.2.1xx.
OCNADD Resolved Bugs
Table 4-1 OCNADD 25.2.100 Resolved Bugs

| Bug ID | Title | Description | Severity | Release Version |
|---|---|---|---|---|
| 36745554 | Adapter and alarm pods in a crash loop when a data feed is created with an incorrect endpoint | If incorrect endpoints were configured in the destination for an HTTP2 feed, the adapter and alarm pods entered a crash loop. Doc Impact: No doc impact. | 3 | 24.2.0 |
| 37490359 | Frequent heartbeat loss alarm raised and cleared every few seconds | Heartbeat loss alarms were raised and cleared every few seconds. Doc Impact: No doc impact. | 3 | 25.1.100 |
| 37432163 | One of the PCAP exports stops without any reason | When two exports were configured, one PCAP export stopped and stayed in progress with no export occurring. If traffic stopped and resumed after hours, the PCAP export did not resume. Doc Impact: No doc impact. | 3 | 25.1.100 |
| 37431732 | PCAP export stops as soon as the config service restarts | PCAP and CSV exports were running, but after the configuration service was restarted, the PCAP export stopped. Doc Impact: No doc impact. | 3 | 25.1.100 |
| 37403907 | Storage adapter does not resume storing xDRs after upgrade | After an upgrade from 24.2.0, storage of xDRs stopped in the storage adapter. Doc Impact: No doc impact. | 3 | 25.1.100 |
| 38037643 | UI dashboard page becomes unresponsive when the user clicks the PVC utilization tab | The UI dashboard became unresponsive when the PVC utilization tab was clicked for a controller. Doc Impact: No doc impact. | 3 | 25.1.200 |
| 37997391 | "Bad Request" error logs when the config service is restarted | The configuration service logged "Bad Request" client exceptions on restart. If the admin service returned a 400 error for existing deployments, the error was not correctly handled. Doc Impact: No doc impact. | 3 | 25.1.200 |
| 37990843 | Proper error is not displayed when filter values are not properly filled | The error message for invalid filter values was not clear or informative. Doc Impact: No doc impact. | 3 | 25.1.200 |
| 37990808 | Filter dynamic values are populated from the browser cache | Filter dynamic values were populated from the browser cache instead of through new API calls. Doc Impact: No doc impact. | 3 | 25.1.200 |
| 37965808 | Loss of Heartbeat alarm is raised for a worker group when no worker group has been added | A loss of heartbeat alarm was raised for a worker group that had not been added, possibly due to older database entries. Doc Impact: No doc impact. | 3 | 25.1.200 |
| 37914403 | "Loss of connection" alarm with kraft-controller is not cleared | After the KRaft migration from ZooKeeper in 25.1.200, the loss of connection alarm was not cleared even though all controller pods were running. Doc Impact: No doc impact. | 3 | 25.1.200 |
| 38019752 | Warnings while installing DD with IntraTLS false and OCCM | Installing with IntraTLS false and OCCM resulted in warnings about the unknown fields "mountPath" and "readOnly" during deployment. Doc Impact: No doc impact. | 4 | 25.1.200 |
| 37994508 | "TLS handshake failed" warning logs on all services | "TLS handshake failed" warnings were logged at midnight when mutual TLS was enabled; they were not seen when mutual TLS was disabled. Doc Impact: No doc impact. | 4 | 25.1.200 |
| 37990804 | Alarm list does not change to the selected WG when the WG is switched from Ask-Oracle | When switching worker groups on the alarm page, the alarm list did not reflect the selected group unless the page was refreshed. Switching from the datafeed page worked as expected. Doc Impact: No doc impact. | 4 | 25.1.200 |
4.3 Known Bug List
The known bugs tables list the known bugs and the associated customer impact statements.
OCNADD Known Bugs
The following table lists the known bugs for OCNADD Release 25.2.1xx.
Table 4-2 OCNADD 25.2.100 Known Bugs
| Bug Number | Title | Description | Severity | Found In Release | Customer Impact and Workaround |
|---|---|---|---|---|---|
| 37995257 | TSR Discrepancy alarm is raised even when the TSR configuration is deleted and recreated | The discrepancy alarm is raised when you delete and recreate the two-site redundancy configuration. Some configuration remains after deletion, causing the discrepancy on recreation. | 3 | 25.1.200 | Impact: Users may be confused by the discrepancy alarm even when no discrepancy exists. Workaround: None. |
| 37983780 | Ingress adapter crashes continuously when all Kafka brokers are down | The ingress adapter restarts continuously when all Kafka broker pods are down. The restarts stop after the Kafka pods are brought back up. | 3 | 25.1.200 | Impact: The ingress adapter may not work correctly while the Kafka brokers are down. Workaround: None. |
| 38437217 | Kafka brokers down after disabling external access | Kafka brokers restart after external access is disabled during broker extension. The FQDNs update and external access is disabled in the worker group, but traffic from SCP does not reach DD until the SCP worker pod restarts. | 3 | 25.1.200 | Impact: Traffic may stop if communication between the network functions and DD is interrupted. Workaround: Verify the DD configuration and broker instances. If traffic still does not reach DD, check the NF configuration and logs, and restart the NF worker or gateway pod if needed. |
| 38413911 | Aggregation feed creation leads to unresponsive UI | When creating an aggregated feed, clicking the Submit button returns a 500 internal error and the UI becomes unresponsive. | 3 | 25.1.200 | Impact: The UI may become inaccessible. Workaround: Option 1: Delete the MAIN topic, then create the aggregated feed. Option 2: Create the aggregated feed with the topic's partition count and retention time matching the MAIN topic. |
| 38394514 | "Total Length" is given as 0 if the xDR has only one PDU | The "Total Length" of an xDR is zero if the xDR contains only one PDU. | 3 | 25.1.200 | Impact: No impact. Workaround: None. |
| 38371911 | DD feeds inactive after DD resizing activity; RCA required | After the DD deployment is resized with changes to pods and topic partitions, feeds remain inactive after the service restart. | 3 | 25.1.200 | Impact: Feeds show no traffic and their status is inactive. Workaround: Restart the SCPAggregation service, as shown in the restart sketch after this table. |
| 38187166 | Loss of Heartbeat alarm for the secondary redundancy agent is not cleared | The loss of heartbeat alarm from the secondary redundancy agent does not clear after the agent restarts. The primary agent alarm clears as expected. | 3 | 25.1.200 | Impact: No impact. Workaround: None. |
| 38421397 | After resizing, new Kafka broker pods are in CrashLoopBackOff after helm upgrade | Increasing the Kafka broker pod count from 3 to 5 causes the new pods to enter a CrashLoopBackOff state because the storage format updates are incomplete during expansion. | 3 | 25.1.200 | Impact: The new brokers do not work and the Kafka cluster may be unhealthy. Workaround: Update scripts-config.yaml with the specified storage format commands, then perform a helm upgrade. If you increased the kafkaReplica count before applying this change, revert to the original count, perform a helm upgrade, wait for pod stability, and delete the PVC created for the new replica before retrying; see the recovery sketch after this table. |
| 38300170 | OCNADD metric Latency_critical_threshold_crossed alert level of 100% | The latency critical threshold alert triggers at low traffic rates in high-performance profiles due to rebalancing and an excessive number of partitions. | 3 | 24.3.0 | Impact: The critical latency alert may trigger incorrectly. Workaround: Reduce the number of adapter pods by adjusting the HPA minimum and maximum replicas, scale down to zero, and then scale up to the desired number; see the scaling sketch after this table. Ignore or disable the alert if other features increase latency beyond the threshold. |
| 38426124 | Health Profile not found for serviceId | The Kafka broker pod remains pending because it attempts heartbeats with service IDs that are no longer registered in the Health Service. This does not affect data flow. | 3 | 24.3.0 | Impact: Users may be confused by the log messages. Workaround: None. |
| 38289680 | Message sequencing not working for REQUEST_RESPONSE type mode | Message sequencing does not work when set to REQUEST_RESPONSE mode with a specific expiry timer. Sequencing guarantees request and response order separately, but not the full transaction sequence. | 3 | 24.2.1 | Impact: No impact. Workaround: None. |
| 36666809 | DD-GUI: "Done" button does not activate after saving the kafka-template configuration | The "Done" button in the DD-GUI does not activate after saving the kafka-template configuration. Message sequencing order is not as expected due to the correlation ID sequence. | 4 | 24.2.0 | Impact: No impact. Workaround: None. |
| 38023343 | Alarm is not raised, or is raised with an invalid additional detail value, on the secondary site when its admin service is down and a configuration creation or sync action is performed on the primary site | The alarm is not raised, or is raised with invalid details, when the admin service is down on the secondary site, especially after configuration or synchronization actions from the primary site. | 4 | 25.1.200 | Impact: Users may not see alarms or may see incorrect alarm details. Workaround: None. |
| 38055171 | Non-Oracle NF showing as "Not Sending Data" even though the NON_ORACLE topic is updated with traffic | The UI incorrectly shows non-Oracle network functions as not sending data, even when the data is flowing correctly to third-party topics. | 4 | 25.1.200 | Impact: Users receive an incorrect feed status. Workaround: None. |
| 38228101 | The screen jumps to the top when a metadata attribute lower in the list is expanded | The screen jumps to the top when metadata attributes lower in the list are expanded. The view should remain at the expanded position. | 4 | 25.1.200 | Impact: The UI alignment is improper. Workaround: None. |
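The restart workaround for bug 38371911 can be summarized in the following minimal sketch. The namespace and deployment name are hypothetical placeholders, not values from this release; substitute the names from your own OCNADD installation. The same pattern applies when a workaround calls for restarting an NF worker or gateway pod, as in bug 38437217.

```bash
# Restart sketch (bug 38371911). NAMESPACE and DEPLOYMENT are
# hypothetical; use the names from your own deployment.
NAMESPACE=ocnadd-deploy
DEPLOYMENT=ocnaddscpaggregation

# Trigger a rolling restart of the aggregation service.
kubectl rollout restart deployment "$DEPLOYMENT" -n "$NAMESPACE"

# Wait for the restart to complete before rechecking the feed status.
kubectl rollout status deployment "$DEPLOYMENT" -n "$NAMESPACE"
```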
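The recovery steps for bug 38421397 follow the outline sketched below, assuming a Helm-based installation. The release name, chart path, values file, pod label, and PVC name are hypothetical and must be replaced with the values from your deployment; the exact storage format commands for scripts-config.yaml are specified in the bug's resolution steps and are not reproduced here.

```bash
# Recovery sketch (bug 38421397). Release name, chart path, values file,
# pod label, and PVC name are hypothetical; adjust them to your install.
NAMESPACE=ocnadd-deploy
RELEASE=ocnadd

# 1. Revert kafkaReplica in the custom values file to its original
#    count (for example, 3), then upgrade.
helm upgrade "$RELEASE" ./ocnadd -n "$NAMESPACE" -f ocnadd-custom-values.yaml

# 2. Confirm that the existing broker pods are stable.
kubectl get pods -n "$NAMESPACE" -l app=kafka-broker

# 3. Delete the PVC that was created for the failed new replica.
kubectl delete pvc kafka-broker-data-kafka-broker-3 -n "$NAMESPACE"

# 4. Add the storage format commands to scripts-config.yaml as specified
#    for this bug, raise kafkaReplica again, and repeat the helm upgrade.
```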
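The scaling workaround for bug 38300170 follows the pattern sketched below. The HPA name, deployment name, and replica counts are hypothetical examples; choose values that suit your traffic profile. Note that scaling a deployment to zero suspends its HPA until the deployment is scaled up again, which is what allows the intermediate scale-to-zero step.

```bash
# Scaling sketch (bug 38300170). HPA name, deployment name, and replica
# counts are hypothetical; pick values appropriate for your site.
NAMESPACE=ocnadd-deploy
ADAPTER=ocnaddconsumeradapter

# Lower the HPA bounds so fewer adapter pods are scheduled.
kubectl patch hpa "$ADAPTER" -n "$NAMESPACE" --type merge \
  -p '{"spec":{"minReplicas":1,"maxReplicas":2}}'

# Scale the adapter down to zero, then back up to the desired count,
# so partitions rebalance across the smaller pod set.
kubectl scale deployment "$ADAPTER" -n "$NAMESPACE" --replicas=0
kubectl scale deployment "$ADAPTER" -n "$NAMESPACE" --replicas=2
```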