This chapter contains procedures that may be generally useful to the Oracle NoSQL Database administrator.
Oracle NoSQL Database Storage Nodes and Admins make use of an embedded database (Oracle Berkeley DB, Java Edition). You should never directly manipulate the files maintained by this database. In general, it is a bad idea to move, delete, or modify the files and directories located under KVROOT unless you are asked to do so by Oracle Customer Support. In particular, never move or delete any file ending with a .jdb suffix. These files are all found in an env directory somewhere under KVROOT.
To back up the KVStore, you take snapshots of nodes in the store and copy the resulting snapshots to a safe location. Note that the distributed nature and scale of Oracle NoSQL Database makes it unlikely that a single machine can hold the backup for the entire store. These instructions do not address where and how snapshots are stored.
A snapshot provides consistency across all records within the same shard, but not across partitions in independent shards. The underlying snapshot operations are performed in parallel to the extent possible in order to minimize any potential inconsistencies.
To take a snapshot from the admin CLI, use the snapshot create command:
kv-> snapshot create -name <snapshot name>
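The kv-> prompt shown here belongs to the admin CLI. If you are not already at that prompt, one typical way to start the CLI is with the runadmin command; the host and port below are assumed to match the ping example later in this section:

java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar runadmin -port 5000 -host node01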
Using this command, you can create or remove a named snapshot. (The name of the snapshot is provided using the -name parameter.) You can also remove all snapshots currently stored in the store.
For example, to create and remove a snapshot:
kv-> snapshot create -name Thursday
Created snapshot named 110915-153514-Thursday on all 3 nodes
kv-> snapshot remove -name 110915-153514-Thursday
Removed snapshot 110915-153514-Thursday
You can also remove all snapshots currently stored in the store:
kv-> snapshot create -name Thursday
Created snapshot named 110915-153700-Thursday on all 3 nodes
kv-> snapshot create -name later
Created snapshot named 110915-153710-later on all 3 nodes
kv-> snapshot remove -all
Removed all snapshots
Snapshots should not be taken while any configuration (topological) changes are being made, because the snapshot might be inconsistent and not usable. At the time of the snapshot, use ping and then save the information that identifies masters for later use during a load or restore. For more information, see Snapshot Management.
When you run a snapshot, data is collected from every Replication Node in the system, including both masters and replicas. If the operation does not succeed for at least one of the nodes in a shard, it fails.
If you decide to create an off-store copy of the snapshot, you should copy the snapshot data for only one of the nodes in each shard. If possible, copy the snapshot data taken from the node that was serving as the master at the time the snapshot was taken.
At the time of the snapshot, you can identify which nodes are currently running as the master using the ping command. There is one master for each shard in the store, identified by the keyword MASTER. In the following example, replication node rg1-rn1, running on Storage Node sn1, is the current master:
java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar ping -port 5000 -host node01
Pinging components of store mystore based upon topology sequence #107
300 partitions and 3 storage nodes
Time: 2015-03-04 21:07:44 UTC   Version: 12.1.3.2.15
Shard Status: total:1 healthy:1 degraded:0 noQuorum:0 offline:0
Zone [name=Boston id=zn1 type=PRIMARY]
  RN Status: total:3 online:3 maxDelayMillis:0 maxCatchupTimeSecs:0
Storage Node [sn1] on node01:5000
  Zone: [name=Boston id=zn1 type=PRIMARY]
  Status: RUNNING   Ver: 12cR1.3.2.15 2015-03-04 06:35:02 UTC
  Build id: 8e70b50c0b0e
  Admin [admin1]        Status: RUNNING,MASTER
  Rep Node [rg1-rn1]    Status: RUNNING,MASTER
    sequenceNumber:31 haPort:5011
Storage Node [sn2] on node02:5000
  Zone: [name=Boston id=zn1 type=PRIMARY]
  Status: RUNNING   Ver: 12cR1.3.2.15 2015-03-04 06:35:02 UTC
  Build id: 8e70b50c0b0e
  Rep Node [rg1-rn2]    Status: RUNNING,REPLICA
    sequenceNumber:31 haPort:5011 delayMillis:0 catchupTimeSecs:0
Storage Node [sn3] on node03:5000
  Zone: [name=Boston id=zn1 type=PRIMARY]
  Status: RUNNING   Ver: 12cR1.3.2.15 2015-03-04 06:35:02 UTC
  Build id: 8e70b50c0b0e
  Rep Node [rg1-rn3]    Status: RUNNING,REPLICA
    sequenceNumber:31 haPort:5011 delayMillis:0 catchupTimeSecs:0
You should save the above information and associate it with the respective snapshot, for later use during a load or restore.
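One simple way to do this is to redirect the ping output to a file that you keep alongside the snapshot data. The output file name below is only an example; any naming convention that ties the output to the snapshot will do:

java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar ping -port 5000 -host node01 \
> 110915-153514-Thursday-ping.txt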
Snapshots include the admin database. Depending on how the store might need to be restored, the admin database may or may not be useful.
Snapshot data for the local Storage Node is stored in a directory inside the KVROOT directory. For each Storage Node in the store, there is a directory named:
KVROOT/<store>/<SN>/<resource>/snapshots/<snapshot_name>/files
where:
<store> is the name of the store.
<SN> is the name of the Storage Node.
<resource> is the name of the resource running on the Storage Node. Typically this is the name of a replication node.
<snapshot_name> is the name of the snapshot.
Snapshot data consists of a number of files, all of which are important. For example:
> ls /var/kvroot/mystore/sn1/rg1-rn1/snapshots/110915-153828-later
00000000.jdb  00000002.jdb  00000004.jdb  00000006.jdb
00000001.jdb  00000003.jdb  00000005.jdb  00000007.jdb
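As a sketch of the off-store copy described earlier in this section, you might copy the entire snapshot directory for the master node of each shard to a backup host. The backup host name and destination path below are hypothetical, and scp is only one option; use whatever transfer mechanism your site prefers (rsync, tar over ssh, and so on):

# Copy the snapshot taken on the shard's master node (rg1-rn1 on sn1
# in the ping example above) to a hypothetical backup host.
scp -r /var/kvroot/mystore/sn1/rg1-rn1/snapshots/110915-153828-later \
backup-host:/backups/mystore/rg1/110915-153828-later

Copying the directory recursively preserves all of the .jdb files which, as noted above, are all required to restore from the snapshot.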