About the /tmp/readiness Readiness Probe

This section discusses the definition of ready in which a tt container is considered ready when the TimesTen database it provides is loaded and open for connections.

The TimesTen Operator creates and manages a file called /tmp/readiness in the tt container's file system to determine if the container is ready. If the /tmp/readiness file exists, the tt container is considered ready. If the file does not exist, the tt container is considered not ready.

The TimesTen Operator defines the /tmp/readiness readiness probe for this definition of ready. The probe definition, in YAML format, is as follows:

readinessProbe:
  exec:
    command:
    - cat
    - /tmp/readiness
  failureThreshold: 1
  periodSeconds: 10
  successThreshold: 1

In this example, Kubernetes runs the cat /tmp/readiness command in the tt container every 10 seconds. If the command exits with a return code of 0 (the file exists), the container is ready. If the command exits with any other return code (the file does not exist), the container is not ready.

This readiness probe is useful if you want to replace one or more Nodes in your Kubernetes cluster. In this case, you can cause Kubernetes to drain the workload from a Node. This causes Kubernetes to evict any Pods running on that Node and to create replacement Pods on other Nodes in the cluster. Kubernetes supports Pod disruption budgets, whereby you specify a budget for your application. This budget tells Kubernetes how many evicted Pods in a given Deployment can be tolerated. For example, assume you configure a Deployment with 20 replicas of your application. You could tell Kubernetes to tolerate up to 5 of the replicas being down at a time. When moving the workload from one Node to another, Kubernetes is careful not to delete more than 5 replicas at a time, and it waits for their replacements to become ready before deleting more.
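As a sketch, the 20-replica scenario above could be expressed with a Kubernetes PodDisruptionBudget object. The names myapp-pdb and the app: myapp label are hypothetical; use the labels your Deployment's Pod template actually applies:

```yaml
# Hypothetical PodDisruptionBudget for a 20-replica Deployment.
# During a Node drain, Kubernetes evicts at most 5 of the matching
# Pods at a time and waits for replacements to become ready.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  maxUnavailable: 5
  selector:
    matchLabels:
      app: myapp   # hypothetical label from the Deployment's Pod template
```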

In the case of the TimesTen Operator, you can use the /tmp/readiness readiness probe to prevent Kubernetes from terminating both the active and standby TimesTen Classic databases simultaneously while draining Kubernetes Nodes.

If you use a Pod disruption budget of 1 on TimesTen, you can drain the workload from one or more Nodes without causing a total TimesTen outage. When Kubernetes deletes a Pod that is running TimesTen in TimesTen Classic, Kubernetes does not know whether the Pod contains the active or the standby database. Therefore, it may choose to delete the Pod that contains the active database. This causes a failover to the standby, which disrupts applications if performed during normal business hours. There is no way to prevent this. However, Kubernetes does not proceed to delete the other database until the one that was deleted comes back up and is completely in the Healthy state.
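A Pod disruption budget of 1 for an active standby pair might look like the following sketch. The object name sample-pdb and the selector label are assumptions for illustration; verify the labels the TimesTen Operator actually applies to your TimesTen Pods before using a selector like this:

```yaml
# Hypothetical PodDisruptionBudget for an active standby TimesTen Classic pair.
# At most one of the two Pods may be unavailable at a time, so a Node drain
# never takes down the active and standby databases simultaneously.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: sample-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: sample   # hypothetical; match the labels on your TimesTen Pods
```

Because the /tmp/readiness readiness probe reports a Pod as ready only once its database is loaded and open for connections, Kubernetes waits for the replacement Pod to reach that state before evicting the surviving database.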

For more information on the health of a Pod and the Healthy state, see About the High Level State of TimesTen Pods.

For information on Kubernetes Pod disruption budgets, see https://kubernetes.io/docs/concepts/workloads/pods/disruptions/ and https://kubernetes.io/docs/tasks/run-application/configure-pdb/.