This section contains upgrade information that is generally true for all versions of Oracle NoSQL Database. Upgrade instructions and notes for specific releases are given in sections following this one.
When Oracle NoSQL Database is first installed, it is placed in a KVHOME directory, which may be per-machine or may optionally be shared by multiple Storage Nodes (for example, using NFS). Here, we refer to this existing KVHOME location as OLD_KVHOME.
It is useful for installations to adopt a convention for KVHOME that includes the release number. That is, always use a KVHOME location such as /var/kv/kv-M.N.O, where M.N.O are the release.major.minor numbers. This can be achieved simply by unzipping or untarring the distribution into a common directory (/var/kv in this example).
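As an illustrative sketch only, the following assumes the distribution archive is named kv-M.N.O.zip (or kv-M.N.O.tar.gz) and was downloaded to /tmp; substitute the actual archive name and paths for your release:

mkdir -p /var/kv
cd /var/kv
unzip /tmp/kv-M.N.O.zip        # creates /var/kv/kv-M.N.O
# or, for a gzipped tar distribution:
# tar xvzf /tmp/kv-M.N.O.tar.gz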
Installing new software requires that each node be restarted. Oracle NoSQL Database is a replicated system, so to avoid excessive failover events it is recommended that any node that is running as a MASTER be restarted after all those marked REPLICA. This command tells you which nodes are MASTER and REPLICA:
java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar ping -host <hostname> -port <port>
To make the upgrade process easier to debug, while a Storage Node is stopped you should move its existing log files under KVROOT and KVROOT/<storename>/log to another directory.
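A minimal sketch of this step, assuming KVROOT is /var/kv/kvroot and the store is named mystore (both are placeholders for your own configuration):

mkdir -p /var/kv/upgrade-logs
mv /var/kv/kvroot/*.log /var/kv/upgrade-logs/
mv /var/kv/kvroot/mystore/log/* /var/kv/upgrade-logs/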
When running the ping command, use the host and registry port for any active node in the store. In the following example, rg1-rn1 and rg2-rn1 are running as MASTER and should be restarted last. (Note that only part of the ping output is shown here so that it fits in the available space.)
java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar ping -port 5000 -host node01
Pinging components of store mystore based upon topology sequence #315
300 partitions and 6 storage nodes
Time: 2015-03-04 08:37:22 UTC   Ver: 12.1.3.2.15
Storage Node [sn1] on node01:5000
Zone: [name=Boston id=zn1 type=PRIMARY]
Status: RUNNING   Ver: 12cR1.3.2.15 2015-03-04 06:35:02 UTC ...
        Admin [admin1]          Status: RUNNING,MASTER
        Rep Node [rg1-rn1]      Status: RUNNING,MASTER ...
Storage Node [sn2] on node02:5000
Zone: [name=Boston id=zn1 type=PRIMARY]
Status: RUNNING   Ver: 12cR1.3.2.15 2015-03-04 06:35:02 UTC ...
        Rep Node [rg1-rn2]      Status: RUNNING,REPLICA ...
Storage Node [sn3] on node03:5000
Zone: [name=Boston id=zn1 type=PRIMARY]
Status: RUNNING   Ver: 12cR1.3.2.15 2015-03-04 06:35:02 UTC ...
        Rep Node [rg1-rn3]      Status: RUNNING,REPLICA ...
Storage Node [sn4] on node04:5000
Zone: [name=Boston id=zn1 type=PRIMARY]
Status: RUNNING   Ver: 12cR1.3.2.15 2015-03-04 06:35:02 UTC ...
        Rep Node [rg2-rn1]      Status: RUNNING,MASTER ...
Storage Node [sn5] on node05:5000
Zone: [name=Boston id=zn1 type=PRIMARY]
Status: RUNNING   Ver: 12cR1.3.2.15 2015-03-04 06:35:02 UTC ...
        Rep Node [rg2-rn2]      Status: RUNNING,REPLICA ...
Storage Node [sn6] on node06:5000
Zone: [name=Boston id=zn1 type=PRIMARY]
Status: RUNNING   Ver: 12cR1.3.2.15 2015-03-04 06:35:02 UTC ...
        Rep Node [rg2-rn3]      Status: RUNNING,REPLICA ...
When upgrading your store, place the updated software in a new KVHOME directory on a Storage Node running the admin service. The new KVHOME directory is referred to here as NEW_KVHOME. If the KVHOME and NEW_KVHOME directories are shared by multiple Storage Nodes (for example, using NFS), both locations must be maintained while the upgrade is in progress. In that case, the start-up procedure on each node needs to be modified to refer to the value of NEW_KVHOME so that the node uses the new software. Once the upgrade is complete, the original KVHOME directory is no longer needed.
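As a sketch of restarting a stopped Storage Node against the new software, assuming OLD_KVHOME and NEW_KVHOME are the old and new installation directories and KVROOT is the node's root directory (substitute your own locations):

java -Xmx256m -Xms256m -jar OLD_KVHOME/lib/kvstore.jar stop -root KVROOT
# (move the log files under KVROOT aside here, as described above, if desired)
java -Xmx256m -Xms256m -jar NEW_KVHOME/lib/kvstore.jar start -root KVROOT &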
If each Storage Node has its own copy of the software installation, it is also possible to replace the installation in place and leave the value of KVHOME unchanged.
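A minimal sketch of such an in-place replacement on a stopped node, assuming a per-node installation at /var/kv/kv and a distribution archive in /tmp (both paths are placeholders):

mv /var/kv/kv /var/kv/kv.old    # preserve the old installation
cd /var/kv
unzip /tmp/kv-M.N.O.zip         # unpack the new release
mv kv-M.N.O kv                  # restore the path that KVHOME already points to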