This chapter contains procedures that may be generally useful to the Oracle NoSQL Database administrator.
To back up the KVStore, you take snapshots of nodes in the store and optionally copy the resulting snapshots to a safe location. Note that the distributed nature and scale of Oracle NoSQL Database make it unlikely that a single machine can hold the backup for the entire store. These instructions do not address where and how snapshots are stored.
To create a backup, you take a snapshot of the store. A snapshot provides consistency across all records within the same partition, but not across partitions in independent shards. The underlying snapshot operations are performed in parallel to the extent possible in order to minimize any potential inconsistencies.
To take a snapshot from the admin CLI, use the snapshot create command:
kv-> snapshot create -name <snapshot name>
Using this command, you can create or remove a named snapshot; the snapshot name is supplied with the -name parameter. You can also remove all snapshots currently stored in the store.
For example, to create and remove a snapshot:
kv-> snapshot create -name thursday
Created snapshot named 110915-153514-thursday
kv-> snapshot remove -name 110915-153514-thursday
Removed snapshot 110915-153514-thursday
You can also remove all snapshots currently stored in the store:
kv-> snapshot create -name thursday
Created snapshot named 110915-153700-thursday
kv-> snapshot create -name later
Created snapshot named 110915-153710-later
kv-> snapshot remove -all
Removed all snapshots
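Snapshots can also be scripted rather than taken from an interactive CLI session. The following is a minimal sketch, assuming the runadmin utility accepts a single admin command as shown elsewhere in this guide; KVHOME, node01, port 5000, and the backup- naming convention are placeholders, not part of the product:

```shell
#!/bin/sh
# Build a human-readable, date-stamped snapshot name; the store itself
# prepends its own timestamp (e.g. 110915-153514-) to the name you supply.
snapshot_name() {
  printf 'backup-%s' "$(date +%Y%m%d)"
}

# Only attempt the admin call when a KVHOME with kvstore.jar is present
# (host and port are placeholders for your deployment).
if [ -n "${KVHOME:-}" ] && [ -f "$KVHOME/lib/kvstore.jar" ]; then
  java -jar "$KVHOME/lib/kvstore.jar" runadmin -host node01 -port 5000 \
      snapshot create -name "$(snapshot_name)"
fi
```

A wrapper like this is convenient for running nightly snapshots from cron.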
Snapshots should not be taken while any configuration (topological) changes are being made, because the snapshot might be inconsistent and not usable.
When you run a snapshot, data is collected from every Replication Node in the system, including both masters and replicas. If the snapshot does not succeed on at least one node in each shard, the operation fails.
If you decide to create an off-store copy of the snapshot, you should copy the snapshot data for only one of the nodes in each shard. If possible, copy the snapshot data taken from the node that was serving as the master at the time the snapshot was taken.
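One hedged sketch of such an off-store copy: archive a single node's snapshot directory, then transfer the archive with whatever tool your site uses. The paths below are illustrative; where possible, archive the snapshot taken from the shard's master.

```shell
#!/bin/sh
# Archive one node's snapshot directory for off-store storage.
# $1 = path to the snapshot directory (.../snapshots/<snapshot_name>)
# $2 = output archive file
archive_snapshot() {
  tar -czf "$2" -C "$(dirname "$1")" "$(basename "$1")"
}

# Example (illustrative paths; afterwards copy the archive off-store,
# e.g. with scp or your site's backup tooling):
# archive_snapshot /var/kvroot/mystore/sn1/rg1-rn1/snapshots/110915-153514-thursday \
#     thursday-rg1.tar.gz
```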
You can identify which nodes are currently running as masters using the ping command. There is one master for each shard in the store, identified by the keyword MASTER. In the following output, replication node rg1-rn1, running on Storage Node sn1, is the current master:
> java -jar KVHOME/lib/kvstore.jar ping -port 5000 -host node01
Pinging components of store mystore based upon topology sequence #107
mystore comprises 300 partitions on 3 Storage Nodes
Datacenter: Boston [dc1]
Storage Node [sn1] on node01:5000    Datacenter: Boston [dc1]
    Status: RUNNING   Ver: 11gR2.1.0.28
    Rep Node [rg1-rn1]   Status: RUNNING,MASTER at sequence number: 31 haPort: 5011
Storage Node [sn2] on node02:5000    Datacenter: Boston [dc1]
    Status: RUNNING   Ver: 11gR2.1.0.28
    Rep Node [rg1-rn2]   Status: RUNNING,REPLICA at sequence number: 31 haPort: 5011
Storage Node [sn3] on node03:5000    Datacenter: Boston [dc1]
    Status: RUNNING   Ver: 11gR2.1.0.28
    Rep Node [rg1-rn3]   Status: RUNNING,REPLICA at sequence number: 31 haPort: 5011
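If you want to script the master lookup rather than read the ping output by eye, you can filter for the MASTER keyword. This is a sketch that assumes the output format shown above; the field layout can vary between releases:

```shell
#!/bin/sh
# Read ping output on stdin and print the replication node name
# (e.g. "rg1-rn1") of each line flagged as MASTER.
masters_from_ping() {
  grep 'MASTER' | sed -n 's/.*Rep Node \[\([^]]*\)\].*/\1/p'
}

# Typical use (KVHOME, host, and port are placeholders):
# java -jar KVHOME/lib/kvstore.jar ping -port 5000 -host node01 | masters_from_ping
```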
Snapshots include the admin database. Depending on how the store might need to be restored, the admin database may or may not be useful.
Snapshot data for the local Storage Node is stored in a directory inside the KVROOT directory. For each Storage Node in the store, you have a directory named:
KVROOT/<store>/<SN>/<resource>/snapshots/<snapshot_name>/files
where:
<store> is the name of the store.
<SN> is the name of the Storage Node.
<resource> is the name of the resource running on the storage node. Typically this is the name of a replication node.
<snapshot_name> is the name of the snapshot.
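Given that layout, a small sketch like the following enumerates every snapshot directory under a KVROOT; the find pattern simply mirrors the path template above:

```shell
#!/bin/sh
# Print each .../snapshots/<snapshot_name>/files directory under the
# given KVROOT, following the documented path layout.
list_snapshots() {
  find "$1" -type d -path '*/snapshots/*/files'
}

# Example: list_snapshots /var/kvroot
```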
Snapshot data consists of a number of files, all of which are important. For example:
> ls /var/kvroot/mystore/sn1/rg1-rn1/snapshots/110915-153828-later
00000000.jdb  00000001.jdb  00000002.jdb  00000003.jdb
00000004.jdb  00000005.jdb  00000006.jdb  00000007.jdb
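Because every file matters, it is worth recording checksums before copying snapshot data off-store so the copy can be verified after the transfer. A sketch, assuming sha256sum is available (use shasum -a 256 on platforms without it):

```shell
#!/bin/sh
# Emit a checksum line for every .jdb file in the given snapshot
# directory; save the output alongside the off-store copy and run
# sha256sum -c against it after the transfer to verify the files.
checksum_snapshot() {
  ( cd "$1" && find . -type f -name '*.jdb' -exec sha256sum {} + )
}

# Example: checksum_snapshot \
#   /var/kvroot/mystore/sn1/rg1-rn1/snapshots/110915-153828-later > later.sha256
```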