Memory Management
Oracle Health Insurance applications consist of a cluster of nodes to which requests are distributed. A single node processes multiple user requests on different threads. These threads share a single memory space.
If a single thread starts using too much memory, the other threads on that same node are impacted. Worse, if a node runs out of memory, it may crash with an out-of-memory error, resulting in loss of service. An out-of-memory error on a single node may impact the stability of the complete cluster.
Even before an actual out-of-memory error occurs, the performance of the node degrades considerably.
Because of the severe impact, a node must always have sufficient free memory.
Oracle Health Insurance applications have built-in guardrails that prevent an out-of-memory situation from occurring.
Memory State
Every node monitors its own memory state. At any point in time, the memory state has one of the following values:
| State | Meaning | Application HealthCheck Status | Restrictions | 
|---|---|---|---|
| Normal | Sufficient free memory is available. | 200 | None | 
| Low | The free memory is below the threshold set by the system property ohi.system.memory.threshold.low. | 429, which informs the load balancer or client programs that no new requests should be sent. | No new background processing tasks are started. | 
| Lower | The free memory is below the threshold set by the system property ohi.system.memory.threshold.lower. | 429, which informs the load balancer or client programs that no new requests should be sent. | Same restrictions as the Low state. In addition, new HTTP IP/API requests are rejected with status 429. | 
| Critical | The free memory is below the threshold set by the system property ohi.system.memory.threshold.critical. The system is now approaching an out-of-memory situation. | 429, which informs the load balancer or client programs that no new requests should be sent. | Same restrictions as the Lower state. In addition, existing requests are terminated. | 
In a correctly configured cluster with proper work distribution, the Critical memory state should never be encountered.
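To make the threshold mechanism concrete, the sketch below shows how a node could derive its memory state from the three threshold properties and map it to a health check status. It is illustrative only: the property names come from the table above, but the use of free heap fraction as the metric, the default values, and the class and method names are assumptions made for the example.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

/**
 * Illustrative sketch: classifies a node's memory state by comparing the
 * fraction of free heap against the three threshold system properties
 * listed in the table above. Defaults and metric are assumptions.
 */
public class MemoryStateSketch {

    enum MemoryState { NORMAL, LOW, LOWER, CRITICAL }

    // Reads a threshold from a system property, falling back to an assumed default.
    static double threshold(String property, double assumedDefault) {
        String value = System.getProperty(property);
        return value != null ? Double.parseDouble(value) : assumedDefault;
    }

    static MemoryState currentState() {
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memoryBean.getHeapMemoryUsage();

        // Fall back to committed memory if no maximum heap size is defined.
        long max = heap.getMax() > 0 ? heap.getMax() : heap.getCommitted();
        double freeFraction = 1.0 - (double) heap.getUsed() / max;

        // Assumed default fractions, purely for illustration; real deployments configure these.
        double low      = threshold("ohi.system.memory.threshold.low", 0.20);
        double lower    = threshold("ohi.system.memory.threshold.lower", 0.10);
        double critical = threshold("ohi.system.memory.threshold.critical", 0.05);

        if (freeFraction < critical) return MemoryState.CRITICAL;
        if (freeFraction < lower)    return MemoryState.LOWER;
        if (freeFraction < low)      return MemoryState.LOW;
        return MemoryState.NORMAL;
    }

    public static void main(String[] args) {
        MemoryState state = currentState();
        // Only the Normal state reports 200; every degraded state reports 429.
        int healthCheckStatus = (state == MemoryState.NORMAL) ? 200 : 429;
        System.out.println("Memory state: " + state + ", health check: " + healthCheckStatus);
    }
}
```

In this sketch, any state other than Normal returns 429, so a load balancer or client program stops sending new work to the node, matching the behavior described in the table.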