7.1.2.1.1 OccapifNfStatusUnavailable

Table 7-62 OccapifNfStatusUnavailable

Field Details
Description CAPIF services unavailable
Summary "namespace: {{$labels.namespace}}, timestamp: {{ with query \"time()\" }}{{ . | first | value | humanizeTimestamp }}{{ end }} : All OCCAPIF services are unavailable."
Severity Critical
Condition All the CAPIF services are unavailable.
OID 1.3.6.1.4.1.323.5.3.39.1.3.5001
Metric Used 'up'

Note: This is a Prometheus metric used for instance availability monitoring.

If this metric is not available, use a similar metric as exposed by the monitoring system. A sketch of a manual query against this metric is shown after the recommended steps below.

Recommended Actions The alert is cleared automatically when the CAPIF services restart.

Steps:

  1. Check for service-specific alerts that may be causing issues with service exposure.
  2. Run the following command to check the pod status:
    $ kubectl get po -n <namespace>

    If a pod is not in the Running state, run the following command to analyze its error condition:
    $ kubectl describe pod <pod name not in Running state> -n <namespace>

    Where <pod name not in Running state> indicates the pod that is not in the Running state. A scripted version of this check is sketched after these steps.

  3. Refer to the application logs on Kibana and check for database-related failures, such as connectivity issues and invalid secrets. The logs can be filtered by service.
  4. Check the Helm release status to make sure there are no errors:
    $ helm status <helm release name of the desired NF> -n <namespace>

    If the release is not in the "STATUS: DEPLOYED" state, capture the logs and events. A quick status-probe sketch is also shown after these steps.

  5. If the issue persists, capture the outputs of all the above steps and contact My Oracle Support.

    Note: Use the CNC NF Data Collector tool to capture logs. For more information on the Data Collector tool, see Oracle Communications Cloud Native Core, Network Function Data Collector User Guide.
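
The alert condition can be verified manually by querying the 'up' metric through the Prometheus HTTP API. The following is a minimal sketch only; the Prometheus host, port, and label selector are assumptions and must be adjusted to the monitoring deployment and scrape configuration in use.

    # Assumption: Prometheus is reachable at <prometheus-host>:9090 and the CAPIF pods
    # are scraped in the given namespace; adjust the selector to your scrape configuration.
    $ curl -s 'http://<prometheus-host>:9090/api/v1/query' \
        --data-urlencode 'query=up{namespace="<namespace>"}'

    A value of 0 (or no returned series) for every CAPIF target indicates that all OCCAPIF services are unavailable, which corresponds to the condition of this alert.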
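
The pod checks in step 2 can also be scripted. The following is a minimal sketch assuming a standard kubectl installation; note that the field selector filters on the pod phase only, so pods with crash-looping containers may still report the Running phase and should be reviewed in the regular step 2 output.

    # List pods that are not in the Running phase (replace <namespace> as in step 2).
    $ kubectl get pods -n <namespace> --field-selector=status.phase!=Running

    # Describe each such pod to inspect its events and error conditions.
    $ for pod in $(kubectl get pods -n <namespace> \
          --field-selector=status.phase!=Running -o name); do
        kubectl describe -n <namespace> "$pod"
      done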
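
Similarly, the Helm check in step 4 can be reduced to a quick status probe. This is a sketch only; the release name is whatever was used when the NF was installed, and the JSON variant assumes that jq is available on the system.

    # List releases in the namespace; a healthy release reports a deployed status.
    $ helm list -n <namespace>

    # Or query a single release and extract its status from the JSON output.
    $ helm status <helm release name of the desired NF> -n <namespace> -o json | jq -r '.info.status'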