General Upgrade Notes
This section contains upgrade information that is generally true for all versions of Oracle NoSQL Database. Upgrade instructions and notes for specific releases are given in sections following this one.
When Oracle NoSQL Database is first installed, it is placed in a KVHOME directory. Such a directory can exist on each machine, or be shared by multiple Storage Nodes (for example, using NFS). Here, we refer to this existing KVHOME location as OLD_KVHOME.
Note:
We recommend that installations adopt a naming convention for KVHOME that includes the release number. If you always use a KVHOME location such as /var/kv/kv-M.N.O, where M.N.O represents the release.major.minor numbers, the version is easily visible. You can achieve this naming by unzipping or untarring the distribution into a common directory, /var/kv in this example.
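As an illustration of this convention, unpacking each distribution under a common directory keeps versions visible side by side. The release number and demo base path below are hypothetical placeholders, not part of the product:

```shell
# Hypothetical layout under a common base directory; the release number
# 20.3.1 and the /tmp demo path are assumptions for illustration only.
BASE=/tmp/kvhome-demo/var/kv
mkdir -p "$BASE"
cd "$BASE"
# Normally you would run something like: tar xzf kv-ce-20.3.1.tar.gz
# which creates a versioned directory; here we simulate the result.
mkdir -p kv-20.3.1/lib
ls -d kv-*        # the version is visible in the directory name itself
```

With this layout, each upgrade simply adds a new kv-M.N.O directory alongside the old one.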
Installing new software requires that each node be restarted. Oracle NoSQL Database is a replicated system, so to avoid excessive failover events we recommend restarting the nodes marked REPLICA first, and any node running as MASTER last. This command lists which nodes are MASTER and REPLICA:
java -Xmx64m -Xms64m \
-jar KVHOME/lib/kvstore.jar ping -host <hostname> -port <port> \
-security USER/security/admin.security
Note:
Listing this information assumes that you followed the steps in Configuring Security with Remote Access.
To make the upgrade process easier to debug, while the Storage Node being upgraded is stopped, move the existing log files under KVROOT and KVROOT/<storename>/log to another directory.
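A minimal sketch of that step, assuming a hypothetical KVROOT and a store named mystore (the demo paths below are placeholders; substitute your real locations):

```shell
# Demo paths; substitute your actual KVROOT and store name.
KVROOT=/tmp/kvroot-demo
STORE=mystore
mkdir -p "$KVROOT/$STORE/log"
touch "$KVROOT/$STORE/log/sn1_0.log"   # stand-in for an existing log file
# Set aside the old logs so any problems during the upgrade appear
# in fresh, uncluttered log files.
mkdir -p "$KVROOT/logs-pre-upgrade"
mv "$KVROOT/$STORE/log"/* "$KVROOT/logs-pre-upgrade"/
ls "$KVROOT/logs-pre-upgrade"
```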
Use the host and registry port for any active node in the store. In the following example, rg1-rn1 and rg2-rn1 are running as MASTER, so restart those last:
java -Xmx64m -Xms64m \
-jar KVHOME/lib/kvstore.jar ping -port 5100 -host node01 \
-security USER/security/admin.security
Pinging components of store mystore based upon topology sequence #315
300 partitions and 6 storage nodes
Time: 2020-07-30 15:13:23 UTC Version: 18.1.20
Shard Status: healthy:2 writable-degraded:0 read-only:0 offline:0 total:2
Admin Status: healthy
Zone [name=Boston id=zn1 type=PRIMARY allowArbiters=false
masterAffinity=false] RN Status: online:6 offline:0
maxDelayMillis:0 maxCatchupTimeSecs:0
Storage Node [sn1] on node01:5100
Zone: [name=Boston id=zn1 type=PRIMARY allowArbiters=false
masterAffinity=false] Status: RUNNING Ver: 18.1.20 2018-09-19 06:43:20 UTC
Build id: 9f5c79a9f7e8 Edition: Enterprise
Admin [admin1] Status: RUNNING,MASTER
Rep Node [rg1-rn1] Status: RUNNING,MASTER
sequenceNumber:338 haPort:5111
Storage Node [sn2] on node02:5200
Zone: [name=Boston id=zn1 type=PRIMARY allowArbiters=false
masterAffinity=false] Status: RUNNING Ver: 18.1.20 2018-09-19 06:43:20 UTC
Build id: 9f5c79a9f7e8 Edition: Enterprise
Admin [admin2] Status: RUNNING,REPLICA
Rep Node [rg1-rn2] Status: RUNNING,REPLICA
sequenceNumber:338 haPort:5211 delayMillis:0 catchupTimeSecs:0
Storage Node [sn3] on node03:5300
Zone: [name=Boston id=zn1 type=PRIMARY allowArbiters=false
masterAffinity=false] Status: RUNNING Ver: 18.1.20 2018-09-19 06:43:20 UTC
Build id: 9f5c79a9f7e8 Edition: Enterprise
Rep Node [rg1-rn3] Status: RUNNING,REPLICA
sequenceNumber:338 haPort:5310 delayMillis:0 catchupTimeSecs:0
Storage Node [sn4] on node04:5400
Zone: [name=Boston id=zn1 type=PRIMARY allowArbiters=false
masterAffinity=false] Status: RUNNING Ver: 18.1.20 2018-09-19 06:43:20 UTC
Build id: 9f5c79a9f7e8 Edition: Enterprise
Rep Node [rg2-rn1] Status: RUNNING,MASTER
sequenceNumber:338 haPort:5410
Storage Node [sn5] on node05:5500
Zone: [name=Boston id=zn1 type=PRIMARY allowArbiters=false
masterAffinity=false] Status: RUNNING Ver: 18.1.20 2018-09-19 06:43:20 UTC
Build id: 9f5c79a9f7e8 Edition: Enterprise
Rep Node [rg2-rn2] Status: RUNNING,REPLICA
sequenceNumber:338 haPort:5510 delayMillis:0 catchupTimeSecs:0
Storage Node [sn6] on node06:5600
Zone: [name=Boston id=zn1 type=PRIMARY allowArbiters=false
masterAffinity=false] Status: RUNNING Ver: 18.1.20 2018-09-19 06:43:20 UTC
Build id: 9f5c79a9f7e8 Edition: Enterprise
Rep Node [rg2-rn3] Status: RUNNING,REPLICA
sequenceNumber:338 haPort:5610 delayMillis:0 catchupTimeSecs:0
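One way to turn saved ping output into a restart order is a quick text filter that lists REPLICA nodes (restart first) separately from MASTER nodes (restart last). This is a sketch against a captured fragment of output like the above, not a supported tool:

```shell
# Sketch: extract Rep Node IDs by role from captured ping output.
PING_OUT=/tmp/ping-demo.txt
cat > "$PING_OUT" <<'EOF'
Rep Node [rg1-rn1] Status: RUNNING,MASTER
Rep Node [rg1-rn2] Status: RUNNING,REPLICA
Rep Node [rg2-rn1] Status: RUNNING,MASTER
Rep Node [rg2-rn2] Status: RUNNING,REPLICA
EOF
# Restart these first:
grep 'Rep Node.*RUNNING,REPLICA' "$PING_OUT" | sed 's/.*\[\([^]]*\)\].*/\1/'
# Restart these last:
grep 'Rep Node.*RUNNING,MASTER' "$PING_OUT" | sed 's/.*\[\([^]]*\)\].*/\1/'
```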
When upgrading your store, place the updated software in a new KVHOME directory on a Storage Node running the admin service. This section refers to the new KVHOME directory as NEW_KVHOME. If the KVHOME and NEW_KVHOME directories are shared by multiple Storage Nodes (for example, using NFS), maintain both locations while the upgrade is in progress. After the upgrade is complete, you no longer need the original KVHOME directory. In this case, you must modify the start-up procedure on each node to refer to the NEW_KVHOME directory so that it uses the new software.
Note:
In cases where each node has its own copy of the software installation, it is possible to replace the installation in place without modifying the value of KVHOME.
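For shared installations, one common arrangement is to point the start-up scripts at a symlink and retarget it during the upgrade, so switching from OLD_KVHOME to NEW_KVHOME is a single atomic change. The "current" symlink, paths, and version numbers below are assumptions for illustration, not part of the product:

```shell
# Demo base directory; in practice this would be the shared /var/kv.
BASE=/tmp/kvswitch-demo/var/kv
mkdir -p "$BASE/kv-20.3.1" "$BASE/kv-21.2.19"
# Start-up scripts refer to $BASE/current as KVHOME.
ln -sfn kv-20.3.1 "$BASE/current"    # OLD_KVHOME in service
# During the upgrade, retarget the symlink to NEW_KVHOME, then
# restart each node so it picks up the new software.
ln -sfn kv-21.2.19 "$BASE/current"
readlink "$BASE/current"
```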
To add security after upgrading from a non-secure store, see Adding Security to a New Installation in the Security Guide.
Upgrading the XRegion Service Agent
Upgrade your store before upgrading the XRegion Service agent. If the agent is upgraded first, it may block when accessing the new system tables, waiting for the store to be upgraded. To configure the XRegion Service agent, see Configure XRegion Service.