The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.
This section discusses restoring Kubernetes nodes from backups in an Oracle Linux Cloud Native Environment.
These restore steps are intended for use when one or more Kubernetes master nodes need to be reconstructed as part of a planned disaster recovery scenario. Unless there is a total cluster failure, you do not need to manually recover individual master nodes in a highly available cluster, which can self-heal using replication and failover.
To restore a master node, you must have a pre-existing Oracle Linux Cloud Native Environment with the Kubernetes module deployed. You cannot restore to a non-existent environment.
To restore a master node:
Make sure the Platform Agent is running correctly on the replacement master nodes before proceeding:
$ sudo systemctl status olcne-agent
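When preparing several replacement nodes, this check can be scripted. The following is a minimal sketch that tests the service state non-interactively; the olcne-agent unit name comes from the command above, and everything else is illustrative:

```shell
#!/bin/sh
# Run on each replacement master node before the restore.
# systemctl is-active exits 0 only when the unit is running.
if systemctl is-active --quiet olcne-agent; then
    echo "olcne-agent is running"
else
    echo "olcne-agent is not running; start it with: sudo systemctl start olcne-agent"
fi
```

If the service is not running, start it and re-check before continuing with the restore.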
On the operator node, use the olcnectl module restore command to restore the key containers and manifests for the master nodes in your cluster. For example:
$ olcnectl module restore --environment-name myenvironment --name mycluster
The files from the latest timestamped folder under /var/olcne/backups/environment-name/kubernetes/module-name/ are used to restore the cluster to its previous state.
You may be prompted by the Platform CLI to perform additional set up steps on your master nodes to fulfil the prerequisite requirements. If that happens, follow the instructions and run the olcnectl module restore command again.
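To see which backup the restore will use, you can list the timestamped folders yourself. A minimal sketch, assuming the example environment and module names used earlier (myenvironment, mycluster); because timestamped folder names sort lexically, the last entry is the newest:

```shell
#!/bin/sh
# Hypothetical path built from the example environment and module names;
# substitute your own environment-name and module-name.
backup_root=/var/olcne/backups/myenvironment/kubernetes/mycluster
# Timestamped folder names sort lexically, so the newest sorts last.
latest=$(ls -1 "$backup_root" 2>/dev/null | sort | tail -n 1)
echo "Latest backup folder: $backup_root/$latest"
```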
You can verify the restore operation was successful by using the kubectl command on a master node. For example:
$ kubectl get nodes
NAME                  STATUS   ROLES    AGE     VERSION
master1.example.com   Ready    master   9m27s   v1.17.x+x.x.x.el7
worker1.example.com   Ready    <none>   8m53s   v1.17.x+x.x.x.el7
$ kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-5bc65d7f4b-qzfcc                1/1     Running   0          9m
coredns-5bc65d7f4b-z64f2                1/1     Running   0          9m
etcd-master1.example.com                1/1     Running   0          9m
kube-apiserver-master1.example.com      1/1     Running   0          9m
kube-controller-master1.example.com     1/1     Running   0          9m
kube-flannel-ds-2sjbx                   1/1     Running   0          9m
kube-flannel-ds-njg9r                   1/1     Running   0          9m
kube-proxy-m2rt2                        1/1     Running   0          9m
kube-proxy-tbkxd                        1/1     Running   0          9m
kube-scheduler-master1.example.com      1/1     Running   0          9m
kubernetes-dashboard-7646bf6898-d6x2m   1/1     Running   0          9m
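Beyond reading the output by eye, a scripted check can confirm that every node returned to the Ready state. A minimal sketch; the awk filter is illustrative and not part of the product tooling:

```shell
#!/bin/sh
# Count nodes whose STATUS column is not "Ready"; 0 means the restore
# brought every node back. Run on a master node with kubectl configured.
not_ready=$(kubectl get nodes --no-headers 2>/dev/null | awk '$2 != "Ready"' | wc -l)
echo "Nodes not Ready: $not_ready"
```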