6 Upgrading Oracle Key Vault from Release 18.x in a Multi-Master Cluster Environment

Similar to a standalone or primary-standby upgrade, this type of upgrade includes the Oracle Key Vault server software and endpoint software-related utilities.

6.1 About Upgrading Oracle Key Vault from Release 18.x in a Multi-Master Cluster Environment

To perform this upgrade, you must upgrade each multi-master cluster node.

There are different steps for upgrading the multi-master cluster depending on your deployment. A two-node cluster running Oracle Key Vault release 18.5 or earlier and configured as a single read-write pair requires running a pre-upgrade script; no other deployment requires this script. Multi-master cluster nodes deployed in a read-write configuration must follow different upgrade steps than those deployed as read-only nodes.

Oracle does not support direct upgrades from Oracle Key Vault release 18.1 or earlier. You must upgrade to Oracle Key Vault release 18.2 or later before upgrading to release 21.3.

The upgrade process involves performing the upgrade on each multi-master cluster node. After you have begun a cluster upgrade, upgrade all of the nodes in the cluster one after the other, minimizing the time between the upgrades of any two nodes.

Upgrading an Oracle Key Vault multi-master cluster includes upgrading each cluster node to the later version. You must upgrade all nodes to the same Oracle Key Vault version. You should first upgrade the read-only nodes of the cluster, and then upgrade the read-write pairs. As each cluster node is upgraded, its node version is updated to the new version of the Oracle Key Vault software. After you complete the upgrade of all cluster nodes, the cluster version is updated to the new version. You can check the node version or the cluster version by selecting the Cluster tab, then in the left navigation bar, selecting Management. The multi-master cluster upgrade is considered complete when the node version and the cluster version at each cluster node are updated to the latest version of Oracle Key Vault.

Before you perform the upgrade, note the following:

  • Perform the entire upgrade process on all multi-master cluster nodes, without interruption. That is, after you have started the cluster upgrade process, upgrade all of the nodes, individually one after the other or in read-write pairs. Do not perform any critical operations or make configuration changes to Oracle Key Vault until you have completed upgrading all the nodes in your environment.
  • Be aware that you cannot use any new features that were introduced in this release until you have completed upgrading all of the multi-master cluster nodes. An error is returned when such features are used from a node that has already been upgraded. Oracle recommends that you plan the upgrades of all cluster nodes close together to ensure availability of the new features sooner.
  • Starting in Oracle Key Vault release 21.2, expiration alerts for deactivated or destroyed objects are not generated. If you are upgrading from Oracle Key Vault release 21.1 or earlier, then the following behavior is expected:
    • As each cluster node is upgraded, Oracle Key Vault deletes all expiration alerts for any certificate and secret objects, as well as for key objects that have been revoked or destroyed.
    • Cluster nodes that have not been upgraded yet will continue to generate alerts for these same objects, and also send email notifications for these alerts. This behavior that results in deletion and recreation of alerts may repeat until the last cluster node is upgraded.
    • After the upgrade is complete, expiration alerts for the certificate and secret objects will have the alert type of Certificate Object Expiration and Secret Object Expiration, respectively.

6.2 Step 1: Perform Pre-Upgrade Tasks for the Upgrade from Release 18.x

Similar to a standalone or primary-standby environment, you must perform pre-upgrade tasks such as backing up the Oracle Key Vault server.

  1. In the server where Oracle Key Vault is installed, log in as user support, and then switch to the root user.
  2. Back up the server so that you can recover data in case the upgrade fails.
  3. Ensure that no full or incremental backup jobs are running. Delete all scheduled full or incremental backup jobs before the upgrade.
  4. Plan for downtime according to the following specifications:
    Oracle Key Vault Usage                                  Downtime Required
    ------------------------------------------------------  ------------------------------
    Wallet upload or download                               No
    Java keystore upload or download                        No
    Transparent Data Encryption (TDE) direct connect        Yes (No with persistent cache)
    Primary server upgrade in a primary-standby deployment  Yes (No with persistent cache)

    If Oracle Key Vault uses an online master key, then plan for a downtime of 15 minutes during the Oracle Database endpoint software upgrades. Database endpoints can be upgraded in parallel to reduce total downtime.

  5. Set the $OKV_HOME environment variable to the location where the endpoint software is installed so that the upgrade process for the endpoint software can complete successfully.
  6. If the Oracle Key Vault system has a syslog destination configured, ensure that the remote syslog destination is reachable from the Oracle Key Vault system, and that logs are being correctly forwarded. If the remote syslog destination is not reachable from the Oracle Key Vault system, then the upgrade process can become much slower than normal.
  7. Check the disk size before you begin the upgrade. If any of the nodes in question have a disk size that is greater than 2 TB, then you cannot upgrade that system to the new release. Oracle recommends that you remove the node from the cluster and, if possible, replace it with a node whose disk is less than 2 TB in size.
  8. Check the boot partition size. If any of the nodes in question have a boot partition that is less than 500 MB, then you cannot upgrade that system to the new release. You can check this size as follows:
    1. Mount the /boot partition.
      /bin/mount /boot
    2. Check the Size column given by the following command:
      /bin/df -h /boot
    3. Unmount the /boot partition:
      /bin/umount /boot
    If the boot partition given by this command shows less than 488 MB, then you cannot upgrade to the current release. Oracle recommends that you remove the node from the cluster and, if possible, replace it with a node that has been freshly installed with the same Oracle Key Vault version as the rest of the cluster nodes.
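The boot-partition check above can be scripted as a minimal sketch. The 488 MB floor comes from the step above; the check_boot_size helper name is illustrative, and in a real run the size would come from the df Size column after mounting /boot:

```shell
# Minimal sketch: decide whether /boot is large enough to upgrade.
# Obtain the size with: /bin/mount /boot; /bin/df -m /boot; /bin/umount /boot
check_boot_size() {
    size_mb="$1"    # /boot size in MB, from the df Size column
    if [ "$size_mb" -lt 488 ]; then
        echo "boot partition too small to upgrade"
    else
        echo "boot partition size check passed"
    fi
}

check_boot_size 488   # prints "boot partition size check passed"
```

A node reporting less than 488 MB should be removed from the cluster as described above rather than upgraded.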
  9. Increase the Maximum Disable Node Duration setting as appropriate so that any disabled cluster nodes have sufficient time to be upgraded and then enabled back into the cluster. Note that increasing the Maximum Disable Node Duration setting also increases disk space usage.
  10. For each node that had been integrated with the Oracle Audit Vault in Oracle Key Vault release 21.2 or earlier, do the following to disable and remove the Oracle Audit Vault integration:
    1. Disable the Oracle Audit Vault integration: Log into the Oracle Key Vault management console as a System Administrator, select the System tab and then System Settings from the left navigation bar. In the Audit Vault integration pane that appears, disable Oracle Audit Vault. Click Save.
    2. Log in to the Oracle Key Vault server through SSH as user support, switch user su to root and then switch user su to oracle.
    3. Stop the agent by executing the following command:
      agent_installation_directory/bin/agentctl stop
    4. Log in to the Oracle Audit Vault Server console as an Oracle Audit Vault administrator.
    5. Delete the corresponding agent and target.
    6. Log in to the Oracle Key Vault server through SSH as user support, then switch user su to root.
    7. Delete the installation directory for the Oracle Audit Vault agent.
  11. Ensure that the Oracle Key Vault server certificate has not expired, nor is close to expiry, before you begin the upgrade.
    You can find how much time the Oracle Key Vault server certificate has before it expires by checking the OKV Server Certificate Expiration setting on the Configure Alerts page in the Oracle Key Vault management console.

6.3 Step 2: Add Disk Space to Extend the vg_root for Upgrade to Oracle Key Vault Release 21.3

Before upgrading to Oracle Key Vault release 21.3 from release 18.x, you must extend the vg_root volume group to increase disk space.

When you add disk space for upgrades to Oracle Key Vault 21.3 in a multi-master cluster configuration, you must perform this step after the node has been disabled. Before you start this procedure, ensure that all endpoints have persistent cache enabled and in use.
  1. Log in to the server for which you will perform the upgrade and switch to the root user.
  2. Ensure that the persistent cache settings for Oracle Key Vault have been set.
    You will need to ensure that the persistent cache has been enabled because in a later step in this procedure, you must shut down the server. Shutting down the Oracle Key Vault server will incur downtime. To avoid any downtime, Oracle recommends that you turn on persistent cache.
  3. Run the vgs command to determine the free space:
    vgs

    The VFree column shows how much free space you have (for example, 21 GB).

  4. Disable the node.
    Select the Cluster tab, and then Management in the left navigation bar. Under Cluster Details, select the check box for the node that you want to disable, and then click Disable. The node's status will change from DISABLING to DISABLED.
  5. Power off the server in order to add a new disk.
    /sbin/shutdown -h now
  6. Add a new disk to the server with a capacity of 100 GB or greater.
  7. Start the server.
  8. Log in to the Oracle Key Vault server through SSH as user support, then switch user su to root.
    ssh support@okv_server_IP_address
    su - root
    
  9. Stop the Oracle Key Vault services.
    service tomcat stop;
    service httpd stop;
    service kmipus stop;
    service kmip stop;
    service okvogg stop;
    service javafwk stop;
    service monitor stop;
    service controller stop;
    service dbfwlistener stop;
    service dbfwdb stop;
    service rsyslog stop;
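The eleven service stop commands above can be generated from a single list, which makes the order easy to review before executing. A sketch (the okv_stop_commands helper name is illustrative):

```shell
# Sketch: emit the documented service stop commands, in the order shown above.
okv_stop_commands() {
    for svc in tomcat httpd kmipus kmip okvogg javafwk monitor \
               controller dbfwlistener dbfwdb rsyslog; do
        echo "service $svc stop"
    done
}

# Review the list, then execute it as root:
#   okv_stop_commands | sh
okv_stop_commands
```

Keeping the service names in one list avoids accidentally skipping a service or stopping them out of order.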
    
  10. Run the fdisk -l command to find if there are any available partitions on the new disk.
    fdisk -l
    At this stage, there should be no available partitions.
  11. Run the fdisk disk_device_to_be_added command to create the new partition.
    For example, to create a disk device named /dev/sdb:
    fdisk /dev/sdb

    In the prompts that appear, enter the following commands in sequence:

    • n for new partition
    • p for primary
    • 1 for partition number
    • Accept the default values for cylinder (press Enter twice).
    • w to write and exit
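The interactive answers above can be captured as a keystroke sequence and piped to fdisk non-interactively. This sketch assumes the new disk is /dev/sdb and only prints the sequence; applying it to a disk is destructive, so verify the device name first:

```shell
# Sketch: the fdisk answers from the list above, one per line:
# n (new), p (primary), 1 (partition number), two empty lines
# (default cylinder values), w (write and exit).
FDISK_KEYS='n
p
1


w'

# To apply (destructive; double-check the device name first):
#   printf '%s\n' "$FDISK_KEYS" | fdisk /dev/sdb
printf '%s\n' "$FDISK_KEYS"
```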
  12. Use the pvcreate disk_device_partition command to initialize the newly created disk partition as a physical volume.
    For example, for the partition /dev/sdb1, which you created on the disk device that you added, you would run:
    pvcreate /dev/sdb1

    Output similar to the following appears:

    Physical volume "/dev/sdb1" successfully created
  13. Use the vgextend vg_root disk_device_partition command to extend the vg_root volume group with the disk space that you just added.
    For example, for the partition /dev/sdb1, you would run:
    vgextend vg_root /dev/sdb1

    Output similar to the following appears:

    Volume group "vg_root" successfully extended
  14. Run the vgs command again to ensure that VFree shows an increase of 100 GB.
    vgs

    Output similar to the following appears:

    VG      #PV #LV #SN Attr   VSize   VFree
    vg_root   2  12   0 wz--n- 598.75g <121.41g
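To confirm the increase without reading the table by eye, the VFree value can be extracted from the vgs output. A sketch (the vg_free helper name is illustrative, and the field positions assume the default vgs layout shown above):

```shell
# Sketch: print the VFree column for vg_root from vgs output read on stdin.
vg_free() {
    awk '$1 == "vg_root" {print $NF}'
}

# Example using the sample output above; a real run would be: vgs | vg_free
printf 'VG      #PV #LV #SN Attr   VSize   VFree\nvg_root   2  12   0 wz--n- 598.75g <121.41g\n' | vg_free
```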
    
  15. Restart the Oracle Key Vault server.
    /sbin/reboot

    After you have successfully added the disk, re-enable the node. After you re-enable the disabled multi-master cluster node, its status changes from DISABLED to ENABLING, then to ACTIVE. The status of the node will remain at ENABLING and will not change to ACTIVE unless bidirectional replication between it and all other nodes is occurring successfully.

6.4 Step 3: Upgrade Multi-Master Clusters

Depending on your multi-master cluster configuration, you must follow the steps that are specific to your deployment.

6.4.1 About Upgrading Multi-Master Clusters

When upgrading a multi-master cluster, you may upgrade the read-only nodes one after the other; in the case of read-write pairs, you must upgrade both nodes of the pair together, as a unit.

To perform the upgrade, you must upgrade each multi-master cluster node. There are different steps for upgrading the multi-master cluster depending on your deployment. A two-node cluster running Oracle Key Vault release 18.5 or earlier and configured as a single read-write pair requires running a pre-upgrade script; no other deployment requires this script. Multi-master cluster nodes that were deployed in a read-write configuration must follow different upgrade steps than those that were deployed as read-only nodes.

This section describes the upgrade methods for the various deployments. Choose the method that is appropriate for your configuration. When upgrading read-write pairs, after disabling both nodes, you can upgrade the nodes at the same time. However, Oracle recommends that you upgrade the cluster nodes one at a time. If you have a multi-master cluster with three or more nodes, then you can upgrade two nodes at the same time with no downtime.

When upgrading read-write pairs, it is critically important that you perform the steps in the proper order on the two nodes.

If your cluster consists of only two nodes in a read-write configuration and if you are upgrading from Oracle Key Vault release 18.2 through 18.5, then you must execute a pre-upgrade script before performing the upgrade. The pre-upgrade script is not to be executed in any other multi-master cluster configuration.

6.4.2 Upgrading Multi-Master Cluster Read-Only Nodes

Before upgrading multi-master cluster read-only nodes, ensure that you understand the requirements for performing this kind of upgrade.

Oracle recommends that you upgrade the read-only nodes one at a time. You must upgrade read-only nodes ahead of any read-write pairs. Direct upgrades from release 18.1 or earlier are not supported. You must upgrade to Oracle Key Vault release 18.2 or later before upgrading to release 21.3. Do not perform any critical operations or make configurational changes to Oracle Key Vault until you have completed upgrading all multi-master cluster nodes.
You must perform these steps on each read-only node of the cluster, one after the other.
  1. Ensure that you have performed the pre-upgrade steps.
  2. Log in to the management console as a user with the System Administrator role.
  3. Disable the multi-master cluster node.
    Select the Cluster tab, and then Management in the left navigation bar. Under Cluster Details, select the check box for the node that you want to disable, and then click Disable. The node's status will change from DISABLING to DISABLED.
  4. Ensure that you have added disk space to extend the vg_root for the upgrade to release 21.3.
  5. Perform the upgrade as you would upgrade a standalone Oracle Key Vault server (not a primary-standby pair) by performing steps 2 through 10.
    When you run the /usr/bin/ruby /images/upgrade.rb --confirm step during the upgrade, you may be asked to confirm that you completed the pre-upgrade steps.
  6. After the node has been successfully upgraded, re-enable it.
    To enable it, select the Cluster tab, and then Management in the left navigation bar. Under Cluster Details, select the check box for the node that you want to enable, and then click Enable.
    After you re-enable the disabled multi-master cluster node, its status changes from DISABLED to ENABLING, then to ACTIVE. The status of the node will remain at ENABLING and will not change to ACTIVE unless bidirectional replication between it and all other nodes is occurring successfully.
  7. As necessary, disable SSH access on this node.

    Log in to the Oracle Key Vault management console as a user who has the System Administrator role. Select the System tab, then Settings. In the Network Details area, click SSH Access. Select Disabled. Click Save.

  8. After you have successfully completed this procedure, repeat these upgrade steps on all remaining read-only multi-master cluster nodes.

6.4.3 Upgrading Multi-Master Cluster Read-Write Pairs

Before upgrading multi-master cluster read-write pairs, ensure that you understand the requirements for performing this kind of upgrade.

Do not perform any critical operations or make configuration changes to Oracle Key Vault until you have completed upgrading all multi-master cluster nodes.

You must perform these steps on both nodes of the cluster read-write pair, in the order specified, for all read-write pairs of your cluster. To perform the upgrade using this method, arbitrarily decide which of the read-write nodes of your pair will be Node A and which will be Node B. The steps below refer to the nodes as Node A and Node B accordingly.

Direct upgrades to Oracle Key Vault 21.3 from releases 18.1 or earlier are not supported. You must upgrade to Oracle Key Vault release 18.2 or later before upgrading to release 21.3. If you are upgrading a two-node cluster that runs Oracle Key Vault release 18.5 or earlier and is configured as a single read-write pair, then you must run the pre-upgrade script on each multi-master cluster node after mounting the ISO, but before performing the full upgrade.

Generally, once your cluster nodes are disabled, they become unavailable for use. Therefore, to allow operational continuity when you upgrade a two-node cluster that is configured as a read-write pair, applying the pre-upgrade script on both nodes allows the nodes to remain available in a read-only mode, even while they are disabled. After both nodes are disabled, you can upgrade the nodes one at a time; the order is at your discretion. However, when you enable the nodes after they have been upgraded, you must enable them in the reverse of the order in which they were disabled.

If your deployment required running the pre-upgrade script, then after you run the pre-upgrade script, proceed with the standard upgrade process as follows. Disable both nodes of the read-write pair (the order of disabling matters), add the extra disk space as necessary, and then perform the upgrade and reboot. When you run the upgrade and reboot commands, Oracle recommends running them on one node of the pair before running them on the other node, to avoid downtime.

  1. Log into the Node A Oracle Key Vault management console as a user who has the System Administrator role.
  2. Ensure that SSH access is enabled.

    Log in to the Oracle Key Vault management console as a user who has the System Administrator role. Select the System tab, then Settings. In the Network Details area, click SSH Access. Select IP address(es) and then enter only the IP addresses that you need, or select All. Click Save.

  3. Ensure you have enough space in the destination directory for the upgrade ISO files.
  4. Log in to the Oracle Key Vault server through SSH as user support, then switch user su to root.
    If the SSH connection times out while you are executing any step of the upgrade, then the operation will not complete successfully. Oracle recommends that you ensure that you use the appropriate values for the ServerAliveInterval and ServerAliveCountMax options for your SSH sessions to avoid upgrade failures.
    Using the screen command prevents network disconnections from interrupting the upgrade. If the session terminates, resume it as follows:
    screen -r
  5. Copy the upgrade ISO file to the destination directory using Secure Copy Protocol or other secure transmission method.
    scp remote_host:remote_path/okv-upgrade-disc-new_software_release.iso /var/lib/oracle
    • remote_host is the IP address of the computer containing the ISO upgrade file.
    • remote_path is the directory of the ISO upgrade file. Do not copy this file to any location other than the /var/lib/oracle directory.
  6. Make the upgrade accessible by using the mount command:
    /bin/mount -o loop,ro /var/lib/oracle/okv-upgrade-disc-new_software_release.iso /images
  7. If your Oracle Key Vault deployment consists solely of two cluster nodes in a read-write configuration running Oracle Key Vault 18.5 or earlier, then you must prepare the cluster for the upgrade process by executing a pre-upgrade script on both nodes.
    You must run this pre-upgrade step only when you are upgrading a two-node cluster, running Oracle Key Vault 18.5 or earlier, configured as a single read-write pair. Do not run this pre-upgrade script on a cluster of any other configuration.
    1. Unzip the pre-upgrade script and save it in /tmp. Perform this step on both nodes.
      /usr/bin/unzip -d /tmp/ /images/preupgrade/cluster_preupgrade_211.zip
    2. Execute the pre-upgrade script on each node to be upgraded.
      /tmp/cluster_preupgrade_211.sh
    3. Check for errors on each node by executing the following command:
      echo $?
    4. Check the pre-upgrade script log /tmp/cluster_preupgrade_211.log on each node for any errors.
  8. Disable Node A.
    Select the Cluster tab, and then Management in the left navigation bar. Under Cluster Details, select the check box for the node that you want to disable, and then click Disable. The node's status will change from DISABLING to DISABLED.
  9. Wait until Node A is disabled before proceeding.
  10. Log into the Node B management console as a user with the System Administrator role.
  11. Disable Node B.
    Select the Cluster tab, and then Management in the left navigation bar. Under Cluster Details, select the check box for the node that you want to disable, and then click Disable. The node's status will change from DISABLING to DISABLED.
  12. Wait until Node B is disabled before proceeding.
  13. Ensure that you have added disk space to extend the vg_root for both nodes.
  14. For each node of the read-write pair, do the following:
    Complete these steps on each node of a read-write pair in turn. In other words, perform the steps on the disabled Node A, allow the upgrade to complete, and then perform the steps on the disabled Node B.
    1. Log in to the Oracle Key Vault server through SSH as user support, then switch user su to root.
      ssh support@okv_server_IP_address
      su - root
    2. Clear the cache using the clean all command:
      root# yum -c /images/upgrade.repo clean all
    3. Apply the upgrade with upgrade.rb command:
      root# /usr/bin/ruby /images/upgrade.rb --confirm

      When you run the /usr/bin/ruby /images/upgrade.rb --confirm step during the upgrade, you may be asked to confirm that you completed the pre-upgrade steps.

      When you run the upgrade.rb command, Oracle recommends that you execute this step and the next step (reboot) on one node first, and then on the second node after the first has completed. If your multi-master cluster deployment consists of three or more nodes, then you can upgrade both nodes of the read-write pair at the same time and avoid any downtime.

      If the system is successfully upgraded, then the command will display the following message:

      Remove media and reboot now to fully apply changes.

      If you see an error message, then check the log file /var/log/messages for additional information.

    4. Restart the Oracle Key Vault server by running reboot command:
      root# /sbin/reboot

      On the first restart of the computer after the upgrade, the system will apply the necessary changes. This can take a few hours. Do not shut down the system during this time.

      The upgrade of the cluster node is complete when a screen with the heading Oracle Key Vault Server new_software_release appears, where new_software_release reflects the release number of the upgraded version. Below the heading is the menu item Display Appliance Info. Select Display Appliance Info and press the Enter key to see the IP address settings for the appliance.

  15. After each node has been upgraded, clear your browser's cache before attempting to log in.
  16. After both nodes have been successfully upgraded, re-enable Node B first (nodes must be enabled in the reverse order that they were disabled).
    After you re-enable the disabled multi-master cluster node, its status changes from DISABLED to ENABLING, then to ACTIVE. The status of the node will remain at ENABLING and will not change to ACTIVE unless bidirectional replication between it and all other nodes is occurring successfully.
  17. Re-enable Node A.
    After you re-enable the disabled multi-master cluster node, its status changes from DISABLED to ENABLING, then to ACTIVE. The status of the node will remain at ENABLING and will not change to ACTIVE unless bidirectional replication between it and all other nodes is occurring successfully.
  18. As necessary, disable SSH access on each node.

    Log in to the Oracle Key Vault management console as a user who has the System Administrator role. Select the System tab, then Settings. In the Network Details area, click SSH Access. Select Disabled. Click Save.

  19. After you have successfully completed this procedure, repeat these upgrade steps on all remaining multi-master cluster read-write pairs.

6.5 Step 4: Check the Node Version and the Cluster Version

After you complete the upgrade of at least one node, you can log into any of the upgraded nodes to check the node and cluster versions.

Oracle Key Vault tracks the version information of each cluster node as well as the version of the cluster as a whole. The node version represents the version of the Oracle Key Vault software on a given node. When a node is upgraded, its node version is updated to the new version of the Oracle Key Vault software. The cluster version is derived from the version information of the cluster nodes and is set to the minimum version of any cluster node. During cluster upgrade, node version is updated as each cluster node is upgraded to the later version. When all of the cluster nodes have been upgraded, the cluster version is then updated to the new version. (The Cluster Version and Node Version fields are available in Oracle Key Vault release 18.2 or later.)
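Because the cluster version is the minimum node version, a mixed-version cluster reports the older release until the last node is upgraded. This can be illustrated with a version-aware sort (the version strings below are examples, not values read from a live cluster):

```shell
# Sketch: the cluster version is the lowest node version among all nodes.
# sort -V orders version strings numerically; head -1 picks the minimum.
printf '21.3.0.0.0\n18.5.0.0.0\n21.3.0.0.0\n' | sort -V | head -1
```

Here two nodes are already at 21.3.0.0.0, but the cluster version remains 18.5.0.0.0 until the remaining node is upgraded.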
  1. Log in to the Oracle Key Vault management console as a user who has the System Administrator role.
  2. Select the Cluster tab.
  3. In the left navigation bar, select Management.
  4. Check the following areas:
    • To find the node version, check the Cluster Details area.
    • To find the cluster version, check the Cluster Information area.

6.6 Step 5: If Necessary, Change the Network Interface for Upgraded Nodes

Nodes that were created in Oracle Key Vault releases earlier than release 21.1 use Classic mode, in which only one network interface was used.

If you prefer to use dual NIC network mode, which supports the use of two network interfaces, then you can switch the node to this mode from the command line.

6.7 Step 6: Upgrade the Endpoint Software

After you have upgraded all the nodes in the cluster, you must reenroll endpoints that were created in earlier releases of Oracle Key Vault, or update the endpoint software.

If you are upgrading from an earlier release to the latest release of Oracle Key Vault, then you must reenroll the endpoint instead of upgrading the endpoint software. Reenrolling the endpoint automatically updates the endpoint software.
  1. Ensure that you have upgraded the Oracle Key Vault servers. If you are upgrading the endpoint software for an Oracle database configured for online TDE master encryption key management, then shut down the database.
  2. Download the endpoint software (okvclient.jar) for your platform from the Oracle Key Vault server as follows:
    1. Go to the Oracle Key Vault management console login screen.
    2. Click the Endpoint Enrollment and Software Download link.
    3. In the Download Endpoint Software Only section, select the appropriate platform from the drop-down list.
    4. Click the Download button.
  3. Identify the path to your existing endpoint installation that you are about to upgrade (for example, /home/oracle/okvutil).
  4. Install the endpoint software by executing the following command:
    java -jar okvclient.jar -d existing_endpoint_directory_path

    For example:

    java -jar okvclient.jar -d /home/oracle/okvutil

    If you are installing the okvclient.jar file on a Windows endpoint system that has Oracle Database release 11.2.0.4 only, then include the -db112 option. (This option is not necessary for any other combination of endpoint platform or Oracle Database version.) For example:

    java -jar okvclient.jar -d /home/oracle/okvutil -v -db112
  5. Install the updated PKCS#11 library file.
    This step is needed only for online TDE master encryption key management by Oracle Key Vault. If an endpoint uses online TDE master encryption key management by Oracle Key Vault, then you must upgrade the PKCS#11 library while upgrading the endpoint software.
    • On UNIX/Linux platforms: Run root.sh from the bin directory of the endpoint installation directory to copy the latest liborapkcs.so file for Oracle Database endpoints.
      $ sudo $OKV_HOME/bin/root.sh

      Or

      $ su - root
      # bin/root.sh
    • On Windows platforms: Run root.bat from the bin directory of the endpoint installation directory to copy the latest liborapkcs.dll file for Oracle Database endpoints. You will be prompted for the version of the database in use.
      bin\root.bat
  6. Update the SDK software.
    Oracle recommends that you redeploy the SDK software in the same location after you complete the upgrade to Oracle Key Vault release 21.3. This enables you to have access to the new SDK APIs that were introduced since the Oracle Key Vault version that you are upgrading from.
    1. Go to the Oracle Key Vault management console login screen.
    2. Click the Endpoint Enrollment and Software Download link.
    3. In the Download Software Development Kit section, select the appropriate language and platform for your site.
    4. Click the Download button to get the SDK zip file.
    5. Identify the existing location where SDK software was already deployed.
    6. Navigate to the directory in which you saved the SDK zip file.
    7. Unzip the SDK zip file.

      For example, on Linux, to unzip the Java SDK zip file, use the following command:

      unzip -o okv_jsdk.zip -d existing_endpoint_sdk_directory_path

      For the C SDK zip file, use this command:

      unzip -o okv_csdk.zip -d existing_endpoint_sdk_directory_path
    8. Do not exit this page.
  7. If you had deployed the RESTful services utility in the previous release, then re-deploy the latest okvrestclipackage.zip file.
    The latest okvrestclipackage.zip file enables you to have access to the new RESTful services utility commands that were introduced since the Oracle Key Vault version that you are upgrading from.
    You can use wget or curl to download okvrestclipackage.zip.
    wget --no-check-certificate https://Oracle_Key_Vault_IP_address:5695/okvrestclipackage.zip
    
    curl -O -k https://Oracle_Key_Vault_IP_address:5695/okvrestclipackage.zip
  8. Restart the endpoint if it was shut down.

6.8 Step 7: If Necessary, Add Disk Space to Extend Swap Space

If necessary, extend the swap space on each node. Oracle Key Vault release 21.3 requires a hard disk of 1 TB or greater with approximately 64 GB of swap space.

If your system does not meet this requirement, follow these instructions to extend the swap space. You can check how much swap space you have by running the swapon -s command. By default, Oracle Key Vault releases earlier than release 18.1 were installed with approximately 4 GB of swap space. After you complete the upgrade to release 18.1 or later, Oracle recommends that you increase the swap space allocation for the server on which you upgraded Oracle Key Vault. A new Oracle Key Vault installation is automatically configured with sufficient swap space. However, if you upgraded from a previous release, and your system does not have the desired amount of swap space configured, then you must manually add disk space to extend the swap space, particularly if the intention is to convert the upgraded server into the first node of a multi-master cluster.
  1. Log in to the server in which you upgraded Oracle Key Vault and connect as root.
  2. Check the current amount of swap space.
    [root@my_okv_server support]# swapon -s

    Output similar to the following appears. This example shows that the system has 4 GB of swap space.

    Filename Type Size Used Priority
    /dev/dm-0 partition 4194300 3368 -1
    

    There must be 64 GB of swap space if the disk is 1 TB or greater in size.

  3. Run the vgs command to determine how much free space is available.
    vgs

    The VFree column shows how much free space you have (for example, 21 GB).

  4. Power off the server in order to add a new disk.
    /sbin/shutdown -h now
  5. Add a new disk to the server, sized so that the VFree value will exceed 64 GB.
  6. Start the server.
  7. Log in to the Oracle Key Vault server through SSH as user support, then switch user su to root.
    ssh support@okv_server_IP_address
    su - root
    
  8. Run the fdisk -l command to find if there are any available partitions on the new disk.
    fdisk -l

    At this stage, there should be no available partitions.

  9. Run the fdisk disk_device_to_be_added command to create the new partition.
    For example, to partition the new disk device /dev/sdc:
    fdisk /dev/sdc

    In the prompts that appear, enter the following commands in sequence:

    • n for new partition
    • p for primary
    • 1 for partition number
    • Accept the default values for cylinder (press Enter twice).
    • w to write and exit
  10. Use the pvcreate disk_device_partition command to add the newly added disk to the physical volume.
    For example, for /dev/sdc1, the partition that you created on the newly added disk device:
    pvcreate /dev/sdc1

    Output similar to the following appears:

    Physical volume "/dev/sdc1" successfully created
  11. Use the vgextend vg_root disk_device_partition command to extend the volume group with the disk space that you just added.
    For example, for the partition /dev/sdc1, you would run:
    vgextend vg_root /dev/sdc1

    Output similar to the following appears:

    Volume group "vg_root" successfully extended
  12. Run the vgs command again to ensure that VFree now shows more than 64 GB of free space.
    vgs
  13. Disable swapping.
    [root@my_okv_server support]# swapoff -v /dev/vg_root/lv_swap
  14. To extend the swap space, run the lvresize command.
    [root@my_okv_server support]# lvresize -L +60G /dev/vg_root/lv_swap

    Output similar to the following appears:

    Size of logical volume vg_root/lv_swap changed from 4.00 GiB (128 extents) to 64.00 GiB (2048 extents)
    Logical volume lv_swap successfully resized.
    
  15. Format the newly added swap space.
    [root@my_okv_server support]# mkswap /dev/vg_root/lv_swap

    Output similar to the following appears:

    mkswap: /dev/vg_root/lv_swap: warning: don't erase bootbits sectors
    on whole disk. Use -f to force.
    Setting up swapspace version 1, size = 67108860 KiB
    no label, UUID=fea7fc72-0fea-43a3-8e5d-e29955d46891
    
  16. Enable swapping again.
    [root@my_okv_server support]# swapon -v /dev/vg_root/lv_swap
  17. Verify the amount of swap space that is available.
    [root@my_okv_server support]# swapon -s

    Output similar to the following appears:

    Filename Type Size Used Priority 
    /dev/dm-0 partition 67108860 0 -1
  18. Restart the Oracle Key Vault server.
    /sbin/reboot

    For primary-standby deployments, ensure that the primary and standby servers are synchronized before proceeding with the next steps.
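The verification in step 17 can be scripted. The sketch below is illustrative only: the helper name total_swap_kib and the threshold variable are assumptions, and the threshold is set slightly below 64 GiB because mkswap reserves a small amount of space.

```shell
REQUIRED_KIB=67000000   # approximately 64 GB, expressed in KiB

# Sum the Size column (KiB) of `swapon -s`-style output read from stdin,
# skipping the header line.
total_swap_kib() {
    awk 'NR > 1 { total += $3 } END { print total + 0 }'
}

# Example with the output shown in step 17:
sample='Filename Type Size Used Priority
/dev/dm-0 partition 67108860 0 -1'

total=$(printf '%s\n' "$sample" | total_swap_kib)
if [ "$total" -ge "$REQUIRED_KIB" ]; then
    echo "swap OK: ${total} KiB"
else
    echo "swap too small: ${total} KiB"
fi
```

On a live server you would pipe the real output through the helper instead of the captured sample, for example swapon -s | total_swap_kib.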

6.9 Step 8: If Necessary, Remove Old Kernels

For each multi-master cluster node, Oracle recommends that you clean up the older kernels that were left behind after the upgrade.

Although the older kernels are no longer in use, they may be flagged as an issue by some code analysis tools.
  1. Log in to the Oracle Key Vault server as the support user.
  2. Switch to the root user.
    su - root
  3. Mount /boot if it was not mounted on the system.
    1. Check whether /boot is mounted. The following command displays /boot information if it is mounted.
      df -h /boot;
    2. If /boot is not mounted, then mount it.
      /bin/mount /boot;
  4. Check the installed kernels and the running kernel.
    1. Search for any kernels that are installed.
      rpm -q kernel-uek | sort;

      The following example output shows that two kernels are installed:

      kernel-uek-4.1.12-103.9.4.el6uek.x86_64
      kernel-uek-4.1.12-112.16.7.el6uek.x86_64
    2. Check the kernel that is currently running.
      uname -r;

      The following output shows an example of a kernel version that was installed at the time:

      4.1.12-112.16.7.el6uek.x86_64

      This example assumes that 4.1.12-112.16.7.el6uek.x86_64 is the latest version, but newer versions may be available by now. Based on this output, you will need to remove the kernel-uek-4.1.12-103.9.4.el6uek.x86_64 kernel. You should remove all kernels that are older than the latest kernel.

  5. Remove the older kernel and its associated RPMs.

    For example, to remove the kernel-uek-4.1.12-103.9.4.el6uek.x86_64 kernel:

    yum --disablerepo=* remove `rpm -qa | grep 4.1.12-103.9.4.el6uek`;

    Output similar to the following appears:

    Loaded plugins: security
    Setting up Remove Process
    Resolving Dependencies
    --> Running transaction check
    ---> Package kernel-uek.x86_64 0:4.1.12-103.9.4.el6uek will be erased
    ---> Package kernel-uek-devel.x86_64 0:4.1.12-103.9.4.el6uek will be erased
    ---> Package kernel-uek-firmware.noarch 0:4.1.12-103.9.4.el6uek will be erased
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    =================================================================================================================
     Package               Arch    Version                Repository                                            Size
    =================================================================================================================
    Removing:
     kernel-uek            x86_64  4.1.12-103.9.4.el6uek  @anaconda-OracleLinuxServer-201410181705.x86_64/6.6  241 M
     kernel-uek-devel      x86_64  4.1.12-103.9.4.el6uek  @anaconda-OracleLinuxServer-201410181705.x86_64/6.6   38 M
     kernel-uek-firmware   noarch  4.1.12-103.9.4.el6uek  @anaconda-OracleLinuxServer-201410181705.x86_64/6.6  2.9 M
    
    Transaction Summary
    =================================================================================================================
    Remove        3 Package(s)
    
    Installed size: 282 M
    Is this ok [y/N]:
  6. Enter y to confirm the deletion.
  7. Repeat these steps starting with Step 4 for all kernels that are older than the latest kernel.
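The selection logic in steps 4 and 5 can be scripted. This sketch, with the illustrative helper older_than_running, filters a list of installed kernel versions down to those that sort before the running kernel using sort -V; on a live server the list would come from rpm -q kernel-uek (with the kernel-uek- prefix stripped) and the running version from uname -r.

```shell
# Args: $1 = running kernel version (from `uname -r`).
# Stdin: one installed kernel version per line.
# Prints only versions that sort strictly before the running one.
older_than_running() {
    running="$1"
    while IFS= read -r ver; do
        if [ "$ver" != "$running" ] && \
           [ "$(printf '%s\n%s\n' "$ver" "$running" | sort -V | head -n1)" = "$ver" ]; then
            printf '%s\n' "$ver"
        fi
    done
}

# Example with the versions shown in step 4; only the older kernel is printed.
printf '4.1.12-103.9.4.el6uek.x86_64\n4.1.12-112.16.7.el6uek.x86_64\n' \
    | older_than_running 4.1.12-112.16.7.el6uek.x86_64
```

The printed versions could then be fed to the yum remove command shown in step 5, one version at a time.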

6.10 Step 9: If Necessary, Remove SSH-Related DSA Keys

For each multi-master cluster node, you should remove SSH-related DSA keys left behind after the upgrade, because they can cause problems with some code analysis tools.

  1. Log in to the Oracle Key Vault management console as a user with the System Administrator role.
  2. Enable SSH.

    Select the System tab, then Settings. In the Network Details area, click SSH Access. Select IP address(es) and then enter only the IP addresses that you need, or select All. Click Save.

  3. Log in to the Oracle Key Vault support account using SSH.
    ssh support@OracleKeyVault_serverIPaddress
  4. Switch to the root user.
    su - root
  5. Change directory to /etc/ssh.
    cd /etc/ssh
  6. Rename the following keys.
    mv ssh_host_dsa_key.pub ssh_host_dsa_key.pub.retire
    mv ssh_host_dsa_key ssh_host_dsa_key.retire
  7. Disable SSH access.

    Log in to the Oracle Key Vault management console as a user who has the System Administrator role. Select the System tab, then Settings. In the Network Details area, click SSH Access. Select Disabled. Click Save.
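The renames in step 6 can be rehearsed safely against a scratch directory before touching the live /etc/ssh. The helper retire_dsa_keys below is an illustrative sketch; it appends .retire to each DSA host key file it finds in the given directory.

```shell
# Rename the DSA host keys in a directory by appending ".retire".
# The directory is a parameter so the sketch can be tried against a
# scratch copy instead of the live /etc/ssh.
retire_dsa_keys() {
    dir="$1"
    for f in "$dir"/ssh_host_dsa_key "$dir"/ssh_host_dsa_key.pub; do
        # Skip names that do not exist rather than failing.
        if [ -e "$f" ]; then
            mv "$f" "$f.retire"
        fi
    done
}

# Example against a scratch directory (not /etc/ssh):
scratch=$(mktemp -d)
touch "$scratch/ssh_host_dsa_key" "$scratch/ssh_host_dsa_key.pub"
retire_dsa_keys "$scratch"
ls "$scratch"
```

Once the behavior is confirmed, the same function could be pointed at /etc/ssh as root, matching the two mv commands in step 6.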