7.1.2.1.2 OccapifPodsRestart

Table 7-63 OccapifPodsRestart

Description: 'Pod <Pod Name> has restarted.'
Summary: "namespace: {{$labels.namespace}}, podname: {{$labels.pod}}, timestamp: {{ with query \"time()\" }}{{ . | first | value | humanizeTimestamp }}{{ end }} : A Pod has restarted"
Severity: Major
Condition: A pod belonging to any of the CAPIF services has restarted.
OID: 1.3.6.1.4.1.323.5.3.39.1.3.5002
Metric Used: kube_pod_container_status_restarts_total
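
For quick reference, the per-container restart counts behind the kube_pod_container_status_restarts_total metric can also be read directly from the cluster. The following command is an illustrative sketch only; <namespace> is a placeholder for the CAPIF namespace:

  # List CAPIF pods with the restart count of each container
  # (the same counter that kube-state-metrics exports as
  # kube_pod_container_status_restarts_total).
  $ kubectl get pods -n <namespace> -o custom-columns='NAME:.metadata.name,RESTARTS:.status.containerStatuses[*].restartCount'
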
Recommended Actions

The alert is cleared automatically once the affected pod is up and running again.

Steps:

  1. Check the application logs on Kibana, filtering by pod name, and look for database-related failures such as connectivity errors or missing Kubernetes secrets (a kubectl-based log and secret check is sketched after these steps).
  2. To check the orchestration logs for liveness or readiness probe failures, do the following (an events-based check is also sketched after these steps):
    1. Run the following command to check the pod status:
      $ kubectl get po -n <namespace>
    2. Run the following command to analyze the error condition of the pod that is not in the Running state:
      $ kubectl describe pod <pod name not in Running state> -n <namespace>

      Where <pod name not in Running state> indicates the pod that is not in the Running state.

  3. Check the database status (a basic pod-status check is sketched after these steps). For more information, see Oracle Communications Cloud Native Core, cnDBTier User Guide.
  4. If the issue persists, capture the output of the preceding steps and contact My Oracle Support.

    Note: Use the CNC NF Data Collector tool to capture logs. For more information about the Data Collector tool, see Oracle Communications Cloud Native Core, Network Function Data Collector User Guide.
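
If direct access to the cluster is available, the log and secret checks from step 1 can also be approximated with kubectl. This is an illustrative sketch only; <pod name> and <namespace> are placeholders, and the --previous flag returns the logs of the container instance that ran before the restart:

  # Logs of the container instance that ran before the restart
  $ kubectl logs <pod name> -n <namespace> --previous
  # Confirm that the Kubernetes secrets referenced by the CAPIF deployment exist
  $ kubectl get secrets -n <namespace>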
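
Liveness and readiness probe failures mentioned in step 2 are also recorded as Kubernetes events. As a sketch, they can be listed for the affected pod as follows (names are placeholders):

  # Events for the affected pod; probe failures typically appear with reason "Unhealthy"
  $ kubectl get events -n <namespace> --field-selector involvedObject.name=<pod name>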
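
For the database check in step 3, the cnDBTier User Guide remains the authoritative reference. As a first-level sketch, assuming cnDBTier runs in its own namespace (placeholder below), verify that its pods are in the Running state:

  # Verify that the cnDBTier (database) pods are up and Running
  $ kubectl get pods -n <cndbtier namespace>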