There are two ways to recover your store from a previously created snapshot. The first mechanism allows you to use a backup to create a store with any desired topology. The second method requires you to restore the store using the exact same topology as was in use when the snapshot was taken.
If you had to replace a failed Storage Node, that qualifies as a topology change. In that case, you must use the Load program to restore your store. For information on how to replace a failed Storage Node, see Replacing a Failed Storage Node.
You can use the oracle.kv.util.Load program to restore a store from a previously created snapshot. You can run this program directly, or you can access it using kvstore.jar, as shown in the examples in this section.
By using this tool, you can restore the store to any topology, not just the one that was in use when the snapshot was created.
This mechanism works by iterating through all records in a snapshot, putting each record into the target store as it proceeds through the snapshot. It should be used only to restore to a new, empty store. Do not use this with an existing store because it only writes records if they do not already exist.
Note that to recover the store, you must load records from snapshot data captured for each shard in the store. For best results, load records using snapshot data captured from the replication nodes that were running as Master at the time the snapshot was taken. (If you have three shards in your store, then there are three Masters at any given time, so you need to load data from three sets of snapshot data.) To identify the Masters, run the ping command at the time you take the snapshot, as illustrated following this note.
You should use snapshot data taken at the same point in time; do not, for example, use snapshot data for shard 1 that was taken on Monday and snapshot data for shard 2 that was taken on Wednesday, because doing so can restore your store in an inconsistent state.
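For example, a minimal sketch of using ping against the store used in the examples later in this section (host NewHost and registry port 12345 are those example values; substitute your own). The output reports the status of each Replication Node, including which node in each shard is currently the Master:

java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar ping -host NewHost -port 12345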
This mechanism can proceed only as fast as records can be inserted into the new store. Because your store probably contains multiple shards, you should restore it from data captured from each shard. To do this, run the Load program in parallel, with each instance operating on data captured from different replication nodes, as sketched below.
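For example, a minimal sketch of this pattern, with each instance started in the background and pointed at snapshot data from a different shard. The /backups/shard1-snapshot and /backups/shard2-snapshot directory names are placeholders for wherever you copied each shard's snapshot data; full option descriptions follow below:

java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar load \
-source /backups/shard1-snapshot -store <storeName> -host <hostname> -port <port> &
java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar load \
-source /backups/shard2-snapshot -store <storeName> -host <hostname> -port <port> &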
The program's usage is:
java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar load [-verbose]
-source <backupDir> -host <hostname> -port <port>
-store <storeName> -username <user>
-security <security-file-path>
[-load-admin] [-force] [-status <pathToFile>]
where:
-load-admin
Loads the store metadata from the snapshot to the new store. In this case, the -source directory must point to the environment directory of the admin node from the snapshot. The store must not be available for use by users at the time of this operation.
This option should not be used on a store unless that store is being restored from scratch. If -force is specified in conjunction with -load-admin, any existing metadata in the store, including tables and security metadata, will be overwritten. For more information, see Load Program and Metadata.
-host <hostname>
Identifies the host name of a node in your store.
-port <port>
Identifies the registry port in use by the store's node.
-status <pathToFile>
An optional parameter that causes the status of the load operation to be saved in the named location on the local machine.
-security <security-file-path>
Identifies the security file used to specify properties for login.
-source <backupDir>
Identifies the on-disk location where the snapshot data is stored.
-store <storeName>
Identifies the name of the store.
-username <user>
Identifies the name of the user used to log in to the secured store.
For example, suppose there is a snapshot in /var/backups/snapshots/110915-153828-later, and there is a new store named "NewStore" on host "NewHost" using registry port 12345. Run the Load program on the host that has the /var/backups/snapshots directory:
java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar load \
-source /var/backups/snapshots/110915-153828-later -store NewStore \
-host NewHost -port 12345
If the load fails part way through the restore, it can start where it left off by using the status file. The granularity of the status file is per-partition in this Oracle NoSQL Database release. If a status file is not used and there is a failure, the load needs to start over from the beginning. If this happens, the target store does not need to be re-created; existing records are skipped.
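For example, to make the load above resumable, add the optional -status flag. If the command fails, rerunning the identical command continues from the progress recorded in the status file; the status file path shown here is only an illustrative choice:

java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar load \
-source /var/backups/snapshots/110915-153828-later -store NewStore \
-host NewHost -port 12345 -status /var/backups/load.status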
You can use the Load program to restore a store with metadata (tables, security) from a previously created snapshot.
The following steps describe how to load from a snapshot with metadata to a newly created store:
Create, start and configure the new store (target). Do not configure security yet, even though the target store will eventually have security information. Also, do not make the store accessible to applications yet.
Create the new store:
java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar makebootconfig \
-root KVROOT \
-host NewHost -port 8000 -admin 8001 \
-harange 8010,8020 \
-capacity 1 \
-store-security none
Start the new store:
java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar start \
-root KVROOT &
Configure the new store:
java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar runadmin \
-port 8000 -host NewHost

kv-> configure -name NewStore
Store configured: NewStore
Loading security metadata requires the names of the source store and the target store to be the same; otherwise, the security metadata cannot be used later.
Locate the snapshot directories for the source store. There should be one for the admin nodes plus one for each shard. For example, in a 3x3 store there should be four snapshot directories used for the load. The Load program must have direct file-based access to each snapshot directory being loaded.
In this case, the snapshot source directory is in datacenter1/kvroot/newstore/sn1/admin1/env.
Load the store metadata using the -load-admin option. Host, port, and store refer to the target store. In this case, the -source directory must point to the environment directory of the admin node from the snapshot.
java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar load \
-source datacenter1/kvroot/newstore/sn1/admin1/env/ \
-store NewStore -host NewHost -port 8000 -load-admin
This command can be run more than once if something goes wrong, as long as the store is not accessible to applications.
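Optionally, before moving on, you can connect to the target store with runadmin and confirm that the loaded metadata looks as expected. The show topology command is a standard Admin CLI command; the host and port here are the target store values used above:

java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar runadmin \
-port 8000 -host NewHost

kv-> show topology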
Once the topology is deployed, load the shard data for each shard. To do this, run the Load program in parallel, with each instance operating on data captured from different replication nodes. For example, suppose there is a snapshot of OldStore in var/backups/snapshots/140827-144141-back.
java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar load \
-source var/backups/snapshots/140827-144141-back -store NewStore \
-host NewHost -port 8000
This step may take a long time or might need to be restarted. To significantly reduce retry time, using a status file is recommended, as in the sketch below.
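For example, a sketch of running one Load instance per shard in the background, each with its own status file. The per-shard source directories (ending in -rg1, -rg2, -rg3) and the status file paths are hypothetical names for wherever you copied each shard's snapshot data:

java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar load \
-source var/backups/snapshots/140827-144141-back-rg1 -store NewStore \
-host NewHost -port 8000 -status /tmp/load-rg1.status &
java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar load \
-source var/backups/snapshots/140827-144141-back-rg2 -store NewStore \
-host NewHost -port 8000 -status /tmp/load-rg2.status &
java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar load \
-source var/backups/snapshots/140827-144141-back-rg3 -store NewStore \
-host NewHost -port 8000 -status /tmp/load-rg3.status &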
Configure security if the store is to be secure. For more information on configuring Oracle NoSQL Database securely, see the Oracle NoSQL Database Security Guide.
The store is now ready for applications.
You can restore a store directly from a snapshot.
This mechanism is faster than using the Load program described in the previous section, but it can be used only to restore to the exact same topology as was used when the snapshot was taken. This means that all ports and host names or IP addresses (depending on your configuration) must be exactly the same as when the snapshot was taken.
You must perform this procedure for each Storage Node in your store, and for each service running on each Storage Node.
Put the to-be-recovered snapshot data in the recovery directory for the service corresponding to the snapshot data. For example, if you are recovering Storage Node sn1, service rg1-rn1 in store mystore, then log in to the node where that service is running and:
> mkdir KVROOT/mystore/sn1/rg1-rn1/recovery
> mv /var/kvroot/mystore/sn1/rg1-rn1/snapshots/110915-153828-later \
KVROOT/mystore/sn1/rg1-rn1/recovery/110915-153828-later
Do this for each service running on the Storage Node. Production systems should have only one resource running on a given Storage Node, but it is possible to deploy, for example, multiple replication nodes on a single Storage Node. A Storage Node can also have an administration process running on it, and this also needs to be restored.
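For example, if an Admin service named admin1 also runs on sn1, its snapshot data is placed in a recovery directory in the same way. The paths below simply follow the pattern used above and will differ in your deployment:

> mkdir KVROOT/mystore/sn1/admin1/recovery
> mv /var/kvroot/mystore/sn1/admin1/snapshots/110915-153828-later \
KVROOT/mystore/sn1/admin1/recovery/110915-153828-later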
Having done this, restart the Storage Node:
> java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar stop -root /var/kvroot
> nohup java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar start -root /var/kvroot &
On startup, the Storage Node notices the recovery directory, moves that directory to the resource's environment directory, and uses it.
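To confirm that the restored services are running again, you can ping the store from any node; the host name and registry port below are placeholders for one of your Storage Nodes:

java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar ping -host <hostname> -port <port>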
Remember that this procedure recovers the store to the time of the snapshot. If your store was active since the time of the snapshot, then all data modifications made since the time of the last snapshot are lost.