6 Backup and Restore
This chapter provides instructions for administrators who work with the integrated backup service. The purpose of this service is to store data that allows a crucial system service or component to be restored to its last known healthy state. It does not back up the environments that users create with the cloud resources in the Compute Enclave.
Implementation details and technical background information for this feature can be found in the Oracle Private Cloud Appliance Concepts Guide. Refer to the section "Backup and Restore" in the chapter Appliance Administration Overview.
Activating Standard Daily Backup
System backups are not enabled by default. To activate them, the administrator must set up a Kubernetes CronJob by running the applicable script from the management node that owns the virtual IP of the cluster.
Caution:
Make sure that daily backups are activated after system initialization. If this procedure is omitted, there will be no backup data from which to restore a component or service to its last known good state.
Execute these steps when the system initialization process has been completed.
-
Log on to one of the management nodes.
# ssh root@pcamn01
-
Retrieve the name of the Kubernetes pod that runs the backup and restore service. Use the following command:
# kubectl get pods -A | grep brs
default        brs-5bdc556546-gxtx9     3/3     Running     0     17d
-
Execute the default-backup script as shown below to set up the Kubernetes CronJob that makes a daily backup.
# kubectl exec brs-5bdc556546-gxtx9 -c brs -- /usr/sbin/default-backup
-
Verify that the CronJob has been added in the default namespace.
# kubectl get cronjobs -A
NAMESPACE      NAME                             SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
default        brs-cronjob-1629969790-backup    0 0 * * *      False     0        <none>          32s
health-check   cert-checker                     */10 * * * *   False     0        4m6s            17d
health-check   etcd-checker                     */10 * * * *   False     0        4m6s            17d
health-check   flannel-checker                  */10 * * * *   False     0        4m6s            17d
health-check   kubernetes-checker               */10 * * * *   False     0        4m6s            17d
health-check   l0-cluster-services-checker      */10 * * * *   False     0        4m6s            17d
health-check   mysql-cluster-checker            */10 * * * *   False     0        4m6s            17d
health-check   network-checker                  */10 * * * *   False     0        4m6s            17d
health-check   registry-checker                 */10 * * * *   False     0        4m6s            17d
health-check   sauron-checker                   */10 * * * *   False     0        4m6s            17d
health-check   vault-checker                    */10 * * * *   False     0        4m6s            17d
sauron         sauron-sauron-prometheus-gw-cj   30 19 * * *    False     0        18h             17d
The schedule 0 0 * * * indicates that the backup CronJob runs daily at midnight.
Backups are created on the ZFS Storage Appliance at this location, as seen from the management node mount point:
/nfs/shared_storage/backups/
Each backup is identified by its unique path containing the job OCID and time stamp:
/nfs/shared_storage/backups/ocid1.backup_cronjob....uniqueID/backup_<timestamp>/
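The pod name used in the commands above changes with each deployment of the backup and restore service. The following sketch shows one way to derive the current brs pod name from the kubectl listing instead of hard-coding it; the captured sample line stands in for live kubectl output so the extraction step can be demonstrated on its own.

```shell
#!/bin/sh
# Sketch: extract the brs pod name from "kubectl get pods -A" output.
# On a live management node this would be:
#   BRS_POD=$(kubectl get pods -A | grep ' brs-' | awk '{print $2}')
# Here the extraction is demonstrated on a captured sample line.
sample='default        brs-5bdc556546-gxtx9     3/3     Running     0     17d'
BRS_POD=$(printf '%s\n' "$sample" | awk '{print $2}')
echo "$BRS_POD"

# The derived name can then be used to set up the daily backup CronJob:
#   kubectl exec "$BRS_POD" -c brs -- /usr/sbin/default-backup
```

Deriving the name this way keeps the activation command valid after the brs pod is rescheduled under a new name.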
Executing a Backup Operation
It is critical that the standard daily backups are activated on your appliance. In addition, you can initiate a system backup manually when necessary.
Execute these steps to manually initiate a system backup.
-
Log on to one of the management nodes.
# ssh root@pcamn01
-
Retrieve the name of the Kubernetes pod that runs the backup and restore service. Use the following command:
# kubectl get pods -A | grep brs
default        brs-5bdc556546-gxtx9     3/3     Running     0     17d
-
Execute the default-backup script with the backup-now option, as shown below.
# kubectl exec brs-5bdc556546-gxtx9 -c brs -- /usr/sbin/default-backup backup-now
-
Verify that the backup job runs and completes successfully.
# kubectl get pods -A | grep brs
default        brs-5bdc556546-gxtx9              3/3     Running     0     17d
default        brs-job-1641877703-backup-jkwx7   0/2     Running     0     8m40s
# kubectl get pods -A | grep brs
default        brs-5bdc556546-gxtx9              3/3     Running     0     17d
default        brs-job-1641877703-backup-jkwx7   0/2     Completed   0     8m40s
Backups are created on the ZFS Storage Appliance at this location, as seen from the management node mount point:
/nfs/shared_storage/backups/
Each backup is identified by its unique path containing the job OCID and time stamp:
/nfs/shared_storage/backups/ocid1.backup_cronjob....uniqueID/backup_<timestamp>/
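After a manual backup completes, it can be useful to confirm which backup directory is the newest. The sketch below finds the most recent backup_<timestamp> directory under the backup root, assuming the timestamps sort lexically in chronological order (as epoch seconds do). The OCID, timestamps, and temporary directory standing in for /nfs/shared_storage/backups/ are illustrative values, not output from a real appliance.

```shell
#!/bin/sh
# Sketch: locate the most recent backup directory under the backup root.
# Documented layout: <backup root>/ocid1.backup_cronjob....uniqueID/backup_<timestamp>/
# A temporary directory with made-up OCID and timestamps stands in for
# /nfs/shared_storage/backups/ so the sketch can run outside the appliance.
BACKUP_ROOT=$(mktemp -d)
mkdir -p "$BACKUP_ROOT/ocid1.backup_cronjob.example/backup_1641877703"
mkdir -p "$BACKUP_ROOT/ocid1.backup_cronjob.example/backup_1641964103"

# backup_<timestamp> names sort lexically in time order, so the last
# entry after sorting is the newest backup.
LATEST=$(ls -1d "$BACKUP_ROOT"/ocid1.backup_cronjob.*/backup_* | sort | tail -n 1)
echo "$LATEST"
```

On the appliance itself, pointing BACKUP_ROOT at /nfs/shared_storage/backups/ would list the real backup directories from the management node mount point.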