3.1.2 SCPWorkerPodMemoryUsage

Table 3-3 SCPWorkerPodMemoryUsage

Description:
Notifies that the per-pod memory usage of the SCP worker is above the threshold.

The threshold value is 70% of the allocated memory (8 GB): 5.6 GB.

Summary: instancename: {{$labels.instance}}, namespace: {{$labels.namespace}}, podname: {{$labels.pod}}, scp_fqdn: '{{$labels.scp_fqdn}}', timestamp: {{ with query "time()" }}{{ . | first | value | humanizeTimestamp }}{{ end }}: Memory usage is above 70% (current value is: {{ $value }})
Severity: major
Conditions: sum(container_memory_usage_bytes{image!="",pod=~".*scp-worker.+"}) by (pod, namespace, instance) > 6012954214
OID Used for SNMP Traps: 1.3.6.1.4.1.323.5.3.35.1.2.7004
Metric Used:
  • ocscp_metric_http_rx_req_total
  • ocscp_metric_http_tx_req_total
  • ocscp_metric_http_rx_res_total
  • ocscp_metric_http_tx_res_total
Recommended Actions:

Cause: A high traffic rate, alternate routing, a large number of routing rules or large rule sizes, or network or producer NF latency.

Diagnostic Information: Monitor traffic rate, alerts, and latency on the KPI Dashboard.

Check the traffic rates of the following metrics to determine whether they are too high (see the sample query after this list):
  • ocscp_metric_http_rx_req_total
  • ocscp_metric_http_tx_req_total
  • ocscp_metric_http_rx_res_total
  • ocscp_metric_http_tx_res_total
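
For example, the per-pod ingress request rate over the last five minutes could be inspected with a query along the following lines. This is a minimal sketch, not a validated procedure; it assumes these metrics are Prometheus counters that carry namespace and pod labels, and the 5-minute window is illustrative only:

  sum by (namespace, pod) (rate(ocscp_metric_http_rx_req_total[5m]))

The same pattern applies to the other three counters listed above.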

Check the upstream response time by using the following metric and determine whether the upstream is taking too long to respond: ocscp_metric_upstream_service_time_total.
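
As an illustration, an approximate per-pod upstream service time per request could be derived as shown below. This is a sketch only: it assumes that ocscp_metric_upstream_service_time_total is a counter accumulating upstream service time, that ocscp_metric_http_tx_req_total counts the corresponding outgoing requests, and that both carry namespace and pod labels; the resulting unit follows whatever unit the service-time counter reports:

  sum by (namespace, pod) (rate(ocscp_metric_upstream_service_time_total[5m]))
    /
  sum by (namespace, pod) (rate(ocscp_metric_http_tx_req_total[5m]))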

Check the following platform metric for current memory usage by the scp-worker pod: container_memory_usage_bytes.
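
For instance, current usage can be expressed as a percentage of the allocated memory with a query such as the one below. This sketch mirrors the alert condition above; the 8 GiB figure is the allocation stated in the Description field, and values above 70 correspond to the alert threshold:

  sum by (pod, namespace, instance) (container_memory_usage_bytes{image!="", pod=~".*scp-worker.+"})
    / (8 * 1024 * 1024 * 1024) * 100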

Recovery: The alert is cleared automatically when the scp-worker pod memory usage falls below the defined threshold. To bring usage below the threshold, reduce the traffic rate and improve the latency.

For any assistance, contact My Oracle Support.