Updating an Exadata Cloud Service Instance Manually

This topic covers how to update the operating system and the tooling on the compute server nodes (for cloud VM clusters, these are called virtual machines) of an Exadata Cloud Service instance. Review all of the information carefully before you begin the updates.

For information on diagnosing issues with the cloud tooling for Exadata Cloud Service and Exadata Cloud@Customer systems, see DBAAS Tooling: Using dbaascli to Collect Cloud Tooling Logs and Perform a Cloud Tooling Health Check.

Updating the Exadata Cloud VM Cluster OS Manually

You update the operating systems of Exadata compute nodes by using the patchmgr tool. This utility manages the entire update of one or more compute nodes remotely, including running pre-reboot, reboot, and post-reboot steps. You can run the utility from either an Exadata compute node or a non-Exadata server running Oracle Linux. The server on which you run the utility is known as the "driving system." You cannot use the driving system to update itself. Therefore, if the driving system is one of the Exadata compute nodes on a system you are updating, you must run a separate operation on a different driving system to update that server.

The following two scenarios describe typical ways of performing the updates:

Scenario 1: Non-Exadata Driving System

The simplest way to update the Exadata system is to use a separate Oracle Linux server to update all Exadata compute nodes in the system.

Scenario 2: Exadata Node Driving System

You can use one Exadata compute node to drive the updates for the rest of the compute nodes in the system, and then use one of the updated nodes to drive the update on the original Exadata driver node.

For example: You are updating a half-rack Exadata system, which has four compute nodes: node1, node2, node3, and node4. First, use node1 to drive the updates of node2, node3, and node4. Then, use node2 to drive the update of node1.

The driving system requires root user SSH access to each compute node the utility will update.
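
For example, you can confirm this access from the driving system by running a remote command as root on each target node. The following is a minimal sketch, assuming the node names from the half-rack scenario above and that root SSH equivalence has already been set up (the procedure below shows how to generate and distribute the root key):

    # From the driving system, confirm passwordless root SSH to each target node
    for node in node2 node3 node4; do
      ssh -o BatchMode=yes root@${node} hostname
    done

If the command fails for any node, set up the root SSH keys as shown in the procedure that follows.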

Preparing for the OS Updates

Caution

Do not install NetworkManager on the Exadata Cloud Service instance. Installing this package and rebooting the system results in severe loss of access to the system.

  • Before you begin your updates, review Exadata Cloud Service Software Versions (Doc ID 2333222.1) to determine the latest software version and target version to use.
  • Some steps in the update process require you to specify a YUM repository. The YUM repository URL is:

    http://yum-<region_identifier>.oracle.com/repo/EngineeredSystems/exadata/dbserver/<latest_version>/base/x86_64

    Region identifiers are text strings used to identify Oracle Cloud Infrastructure regions (for example, us-phoenix-1). You can find a complete list of region identifiers in Regions.

    You can run the following curl command to determine the latest version of the YUM repository for your Exadata Cloud Service instance region:

    curl -s -X GET http://yum-<region_identifier>.oracle.com/repo/EngineeredSystems/exadata/dbserver/index.html |egrep "18.1."

    This example returns the most current version of the YUM repository for the US West (Phoenix) region:

    curl -s -X GET http://yum-us-phoenix-1.oracle.com/repo/EngineeredSystems/exadata/dbserver/index.html |egrep "18.1."
    <a href="18.1.4.0.0/">18.1.4.0.0/</a> 01-Mar-2018 03:36 -
  • To apply OS updates, the system's VCN must be configured to allow access to the YUM repository (a connectivity check sketch follows this list). For more information, see Option 2: Service Gateway Access to Both Object Storage and YUM Repos.
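
As a quick connectivity check before you start, you can confirm from a compute node that the YUM repository is reachable. This is a sketch only, using the US West (Phoenix) repository URL from the example above; substitute your own region identifier:

    # An HTTP status code (for example, 200) indicates the repository is reachable;
    # a timeout suggests the VCN is not yet configured for YUM repository access.
    curl -s -o /dev/null -w "%{http_code}\n" \
      http://yum-us-phoenix-1.oracle.com/repo/EngineeredSystems/exadata/dbserver/index.html
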
To update the OS on all compute nodes of an Exadata Cloud Service instance

This example procedure assumes the following:

  • The system has two compute nodes, node1 and node2.
  • The target version is 18.1.4.0.0.180125.3.
  • Each of the two nodes is used as the driving system for the update on the other one.
  1. Gather the environment details.

    1. SSH to node1 as root and run the following command to determine the version of Exadata:

      [root@node1]# imageinfo -ver
      12.2.1.1.4.171128
    2. Switch to the grid user, and identify all compute nodes in the cluster.

      [root@node1]# su - grid
      [grid@node1]$ olsnodes
      node1
      node2
  2. Configure the driving system.

    1. Switch back to the root user on node1 and check whether a root SSH key pair (id_rsa and id_rsa.pub) already exists. If not, generate one.

      [root@node1 .ssh]#  ls /root/.ssh/id_rsa*
      ls: cannot access /root/.ssh/id_rsa*: No such file or directory
      [root@node1 .ssh]# ssh-keygen -t rsa
      Generating public/private rsa key pair.
      Enter file in which to save the key (/root/.ssh/id_rsa):
      Enter passphrase (empty for no passphrase):
      Enter same passphrase again:
      Your identification has been saved in /root/.ssh/id_rsa.
      Your public key has been saved in /root/.ssh/id_rsa.pub.
      The key fingerprint is:
      93:47:b0:83:75:f2:3e:e6:23:b3:0a:06:ed:00:20:a5 root@node1.fraad1client.exadataclientne.oraclevcn.com
      The key's randomart image is:
      +--[ RSA 2048]----+
      |o..     + .      |
      |o.     o *       |
      |E     . o o      |
      | . .     =       |
      |  o .   S =      |
      |   +     = .     |
      |    +   o o      |
      |   . .   + .     |
      |      ...        |
      +-----------------+
    2. Distribute the public key to the target nodes, and verify that it was copied. In this example, the only target node is node2.

      [root@node1 .ssh]# scp -i ~opc/.ssh/id_rsa ~root/.ssh/id_rsa.pub opc@node2:/tmp/id_rsa.node1.pub
      id_rsa.pub
      
      [root@node2 ~]# ls -al /tmp/id_rsa.node1.pub
      -rw-r--r-- 1 opc opc 442 Feb 28 03:33 /tmp/id_rsa.node1.pub
      [root@node2 ~]# date
      Wed Feb 28 03:33:45 UTC 2018
      
    3. On the target node (node2, in this example), add the root public key of node1 to the root authorized_keys file.

      [root@node2 ~]# cat /tmp/id_rsa.node1.pub >> ~root/.ssh/authorized_keys
      
    4. Download dbserver.patch.zip as p21634633_12*_Linux-x86-64.zip onto the driving system (node1, in this example), and unzip it. See dbnodeupdate.sh and dbserver.patch.zip: Updating Exadata Database Server Software using the DBNodeUpdate Utility and patchmgr (Doc ID 1553103.1) for information about the files in this .zip.

      [root@node1 ~]# mkdir /root/patch
      [root@node1 ~]# cd /root/patch
      [root@node1 patch]# unzip p21634633_181400_Linux-x86-64.zip
      Archive:  p21634633_181400_Linux-x86-64.zip
         creating: dbserver_patch_5.180228.2/
         creating: dbserver_patch_5.180228.2/ibdiagtools/
        inflating: dbserver_patch_5.180228.2/ibdiagtools/cable_check.pl
        inflating: dbserver_patch_5.180228.2/ibdiagtools/setup-ssh
        inflating: dbserver_patch_5.180228.2/ibdiagtools/VERSION_FILE
       extracting: dbserver_patch_5.180228.2/ibdiagtools/xmonib.sh
        inflating: dbserver_patch_5.180228.2/ibdiagtools/monitord
        inflating: dbserver_patch_5.180228.2/ibdiagtools/checkbadlinks.pl
         creating: dbserver_patch_5.180228.2/ibdiagtools/topologies/
        inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/VerifyTopologyUtility.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/verifylib.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Node.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Rack.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Group.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Switch.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/topology-zfs
        inflating: dbserver_patch_5.180228.2/ibdiagtools/dcli
         creating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/
        inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/remoteScriptGenerator.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/CommonUtils.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/SolarisAdapter.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/LinuxAdapter.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/remoteLauncher.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/remoteConfig.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/spawnProc.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/runDiagnostics.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/OSAdapter.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/SampleOutputs.txt
        inflating: dbserver_patch_5.180228.2/ibdiagtools/infinicheck
        inflating: dbserver_patch_5.180228.2/ibdiagtools/ibping_test
        inflating: dbserver_patch_5.180228.2/ibdiagtools/tar_ibdiagtools
        inflating: dbserver_patch_5.180228.2/ibdiagtools/verify-topology
        inflating: dbserver_patch_5.180228.2/installfw_exadata_ssh
         creating: dbserver_patch_5.180228.2/linux.db.rpms/
        inflating: dbserver_patch_5.180228.2/md5sum_files.lst
        inflating: dbserver_patch_5.180228.2/patchmgr
        inflating: dbserver_patch_5.180228.2/xcp
        inflating: dbserver_patch_5.180228.2/ExadataSendNotification.pm
        inflating: dbserver_patch_5.180228.2/ExadataImageNotification.pl
        inflating: dbserver_patch_5.180228.2/kernelupgrade_oldbios.sh
        inflating: dbserver_patch_5.180228.2/cellboot_usb_pci_path
        inflating: dbserver_patch_5.180228.2/exadata.img.env
        inflating: dbserver_patch_5.180228.2/README.txt
        inflating: dbserver_patch_5.180228.2/exadataLogger.pm
        inflating: dbserver_patch_5.180228.2/patch_bug_26678971
        inflating: dbserver_patch_5.180228.2/dcli
        inflating: dbserver_patch_5.180228.2/patchReport.py
       extracting: dbserver_patch_5.180228.2/dbnodeupdate.zip
         creating: dbserver_patch_5.180228.2/plugins/
        inflating: dbserver_patch_5.180228.2/plugins/010-check_17854520.sh
        inflating: dbserver_patch_5.180228.2/plugins/020-check_22468216.sh
        inflating: dbserver_patch_5.180228.2/plugins/040-check_22896791.sh
        inflating: dbserver_patch_5.180228.2/plugins/000-check_dummy_bash
        inflating: dbserver_patch_5.180228.2/plugins/050-check_22651315.sh
        inflating: dbserver_patch_5.180228.2/plugins/005-check_22909764.sh
        inflating: dbserver_patch_5.180228.2/plugins/000-check_dummy_perl
        inflating: dbserver_patch_5.180228.2/plugins/030-check_24625612.sh
        inflating: dbserver_patch_5.180228.2/patchmgr_functions
        inflating: dbserver_patch_5.180228.2/exadata.img.hw
        inflating: dbserver_patch_5.180228.2/libxcp.so.1
        inflating: dbserver_patch_5.180228.2/imageLogger
        inflating: dbserver_patch_5.180228.2/ExaXMLNode.pm
        inflating: dbserver_patch_5.180228.2/fwverify
      
    5. Create the dbs_group file that contains the list of compute nodes to update. Include the nodes listed by the olsnodes command in step 1, except for the driving system node. In this example, dbs_group should include only node2.

      [root@node1 patch]# cd /root/patch/dbserver_patch_5.180228
      [root@node1 dbserver_patch_5.180228]# cat dbs_group
      node2
      
  3. Run a patching precheck operation.

    patchmgr -dbnodes dbs_group -precheck -yum_repo <yum_repository> -target_version <target_version> -nomodify_at_prereq
    Important

    You must run the precheck operation with the -nomodify_at_prereq option to prevent any changes to the system that could impact the backup you take in the next step. Otherwise, the backup might not be able to roll back the system to its original state, should that be necessary.

    The output should look like the following example:

    [root@node1 dbserver_patch_5.180228]# ./patchmgr -dbnodes dbs_group -precheck -yum_repo  http://yum-phx.oracle.com/repo/EngineeredSystems/exadata/dbserver/18.1.4.0.0/base/x86_64 -target_version 18.1.4.0.0.180125.3  -nomodify_at_prereq
    
    ************************************************************************************************************
    NOTE    patchmgr release: 5.180228 (always check MOS 1553103.1 for the latest release of dbserver.patch.zip)
    NOTE
    WARNING Do not interrupt the patchmgr session.
    WARNING Do not resize the screen. It may disturb the screen layout.
    WARNING Do not reboot database nodes during update or rollback.
    WARNING Do not open logfiles in write mode and do not try to alter them.
    ************************************************************************************************************
    2018-02-28 21:22:45 +0000        :Working: DO: Initiate precheck on 1 node(s)
    2018-02-28 21:24:57 +0000        :Working: DO: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:26:15 +0000        :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:26:47 +0000        :Working: DO: dbnodeupdate.sh running a precheck on node(s).
    2018-02-28 21:28:23 +0000        :SUCCESS: DONE: Initiate precheck on node(s). 
  4. Back up the current system.

    patchmgr -dbnodes dbs_group -backup -yum_repo <yum_repository> -target_version <target_version>  -allow_active_network_mounts
    Important

    This is the proper stage to take the backup, before any modifications are made to the system.

    The output should look like the following example:

    [root@node1 dbserver_patch_5.180228]#  ./patchmgr -dbnodes dbs_group -backup  -yum_repo  http://yum-phx.oracle.com/repo/EngineeredSystems/exadata/dbserver/18.1.4.0.0/base/x86_64 -target_version 18.1.4.0.0.180125.3 -allow_active_network_mounts
    
    ************************************************************************************************************
    NOTE    patchmgr release: 5.180228 (always check MOS 1553103.1 for the latest release of dbserver.patch.zip)
    NOTE
    WARNING Do not interrupt the patchmgr session.
    WARNING Do not resize the screen. It may disturb the screen layout.
    WARNING Do not reboot database nodes during update or rollback.
    WARNING Do not open logfiles in write mode and do not try to alter them.
    ************************************************************************************************************
    2018-02-28 21:29:00 +0000        :Working: DO: Initiate backup on 1 node(s).
    2018-02-28 21:29:00 +0000        :Working: DO: Initiate backup on node(s)
    2018-02-28 21:29:01 +0000        :Working: DO: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:30:18 +0000        :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:30:51 +0000        :Working: DO: dbnodeupdate.sh running a backup on node(s).
    2018-02-28 21:35:50 +0000        :SUCCESS: DONE: Initiate backup on node(s).
    2018-02-28 21:35:50 +0000        :SUCCESS: DONE: Initiate backup on 1 node(s).
    
  5. Remove all custom RPMs from the target compute nodes that will be updated. Custom RPMs are reported in precheck results. They include RPMs that were manually installed after the system was provisioned.

    Note

    • If you are updating the system from version 12.1.2.3.4.170111, and the precheck results include krb5-workstation-1.10.3-57.el6.x86_64, remove it. (This item is considered a custom RPM for this version.)
    • Do not remove exadata-sun-vm-computenode-exact or oracle-ofed-release-guest. These two RPMs are handled automatically during the update process.
  6. Use nohup to run the patchmgr update so that the operation continues if your terminal session is disconnected.

    nohup patchmgr -dbnodes dbs_group -upgrade -nobackup -yum_repo <yum_repository> -target_version <target_version> -allow_active_network_mounts &

    The output should look like the following example:

    [root@node1 dbserver_patch_5.180228]# nohup ./patchmgr -dbnodes dbs_group -upgrade -nobackup  -yum_repo  http://yum-phx.oracle.com/repo/EngineeredSystems/exadata/dbserver/18.1.4.0.0/base/x86_64 -target_version 18.1.4.0.0.180125.3  -allow_active_network_mounts &
    
    ************************************************************************************************************
    NOTE    patchmgr release: 5.180228 (always check MOS 1553103.1 for the latest release of dbserver.patch.zip)
    NOTE
    NOTE    Database nodes will reboot during the update process.
    NOTE
    WARNING Do not interrupt the patchmgr session.
    WARNING Do not resize the screen. It may disturb the screen layout.
    WARNING Do not reboot database nodes during update or rollback.
    WARNING Do not open logfiles in write mode and do not try to alter them.
    *********************************************************************************************************
    
    2018-02-28 21:36:26 +0000        :Working: DO: Initiate prepare steps on node(s).
    2018-02-28 21:36:26 +0000        :Working: DO: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:37:44 +0000        :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:38:43 +0000        :SUCCESS: DONE: Initiate prepare steps on node(s).
    2018-02-28 21:38:43 +0000        :Working: DO: Initiate update on 1 node(s).
    2018-02-28 21:38:43 +0000        :Working: DO: Initiate update on node(s)
    2018-02-28 21:38:49 +0000        :Working: DO: Get information about any required OS upgrades from node(s).
    2018-02-28 21:38:59 +0000        :SUCCESS: DONE: Get information about any required OS upgrades from node(s).
    2018-02-28 21:38:59 +0000        :Working: DO: dbnodeupdate.sh running an update step on all nodes.
    2018-02-28 21:48:41 +0000        :INFO   : node2 is ready to reboot.
    2018-02-28 21:48:41 +0000        :SUCCESS: DONE: dbnodeupdate.sh running an update step on all nodes.
    2018-02-28 21:48:41 +0000        :Working: DO: Initiate reboot on node(s)
    2018-02-28 21:48:57 +0000        :SUCCESS: DONE: Initiate reboot on node(s)
    2018-02-28 21:48:57 +0000        :Working: DO: Waiting to ensure node2 is down before reboot.
    2018-02-28 21:56:18 +0000        :Working: DO: Initiate prepare steps on node(s).
    2018-02-28 21:56:19 +0000        :Working: DO: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:57:37 +0000        :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:57:42 +0000        :SEEMS ALREADY UP TO DATE: node2
    2018-02-28 21:57:43 +0000        :SUCCESS: DONE: Initiate update on node(s)
  7. After the update operation completes, verify the Exadata image version on the compute node that was updated.

    [root@node2 ~]# imageinfo -ver
    18.1.4.0.0.180125.3
    
  8. If the driving system is a compute node that needs to be updated (as in this example), repeat steps 2 through 7 of this procedure using an updated compute node as the driving system to update the remaining compute node. In this example update, you would use node2 to update node1.
  9. On each compute node, run the uptrack-install command as root to install the available Ksplice updates. A sketch for verifying the resulting image and kernel versions across all nodes follows this procedure.

    uptrack-install --all -y
    
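After you update all of the compute nodes (including installing the Ksplice updates), you can optionally confirm that every node reports the target image version and the new kernel. The following is a minimal sketch that assumes the two-node example used in this procedure and the root SSH equivalence configured in step 2; uptrack-uname is part of the Ksplice Uptrack tools and reports the effective kernel version after Ksplice updates are applied:

    # On the local node
    imageinfo -ver; uname -r; uptrack-uname -r

    # On remote node(s) reachable with root SSH equivalence from this node
    for node in node2; do
      echo "=== ${node} ==="
      ssh root@${node} 'imageinfo -ver; uname -r; uptrack-uname -r'
    done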

Updating Tooling on an Exadata Cloud Service Instance

You can update the cloud-specific tooling included on an Exadata Cloud Service compute node by downloading and applying an RPM file containing the latest version of the tools.

Note

Oracle highly recommends that you maintain the same version of cloud tooling across your Exadata Cloud Service environment. Perform the following procedure on every compute node in the Exadata Cloud Service instance.

Prerequisite

The compute nodes in the Exadata Cloud Service instance must be configured to access the Oracle Cloud Infrastructure Object Storage service. For more information, see Node Access to Object Storage: Static Route.
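
As a rough reachability check, you can confirm from a compute node that the regional Object Storage endpoint responds. This is a sketch only; it assumes the endpoint format objectstorage.<region_identifier>.oraclecloud.com, shown here for US West (Phoenix):

    # Any HTTP response code indicates the node can reach Object Storage;
    # a timeout suggests the static route is not configured.
    curl -s -o /dev/null -w "%{http_code}\n" --connect-timeout 10 \
      https://objectstorage.us-phoenix-1.oraclecloud.com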

Updating the Cloud Tooling on Each Compute Node Manually

The method for updating the tooling depends on the tooling release that is currently installed on the compute node.

To check the installed tooling release
  1. Connect to the compute node as the opc user.
  2. Start a root-user command shell.

    $ sudo -s
    #
  3. Use the following command to display information about the installed cloud tooling, and note the release label in the output.

    # rpm -qa|grep -i dbaastools_exa
    
    dbaastools_exa-1.0-1+18.1.2.1.0_180511.0801.x86_64

    In this example, the release label is 18.1.2.1.0_180511.0801. (A sketch for extracting this label programmatically follows this procedure.)
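
If you want to work with just the release label (for example, to compare it across nodes per the recommendation above), the following sketch uses standard rpm query formatting; stripping the leading "1+" prefix with sed is an assumption based on the package naming shown in the example:

    # Print only the cloud tooling release label, for example 18.1.2.1.0_180511.0801
    rpm -q dbaastools_exa --queryformat '%{RELEASE}\n' | sed 's/^[^+]*+//'

    # Optionally compare the label across compute nodes (assumes root SSH equivalence)
    for node in node1 node2; do
      echo -n "${node}: "
      ssh root@${node} "rpm -q dbaastools_exa --queryformat '%{RELEASE}\n' | sed 's/^[^+]*+//'"
    done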

To integrate customer-managed key management into Exadata Cloud Service

If you choose to encrypt databases in an Exadata Cloud Service instance using encryption keys that you manage, then you may update the following two packages (using Red Hat Package Manager) to enable DBAASTOOLS to interact with the APIs that customer-managed key management uses.

KMS TDE CLI

To update the KMS TDE CLI package, you must complete the following task on all nodes in the Exadata Cloud Service instance:

  1. Deinstall the current KMS TDE CLI package, as follows:
    rpm -ev kmstdecli
  2. Install the updated KMS TDE CLI package, as follows (a quick verification sketch follows this list):
    rpm -ivh kms_tde_cli
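
As a quick verification, you can query the installed package before you remove it and again after you install the new one. This is a sketch only; the package name pattern is based on the placeholder names used in the steps above:

    # Confirm which KMS TDE CLI package is installed (run before and after the update)
    rpm -qa | grep -i kmstde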

LIBKMS

LIBKMS is a library package necessary to synchronize a database with customer-managed key management through PKCS#11. When a new version of LIBKMS is installed, any databases converted to customer-managed key management continue to use the previous LIBKMS version until each database is stopped and restarted.

To update the LIBKMS package, you must complete the following task on all nodes in the Exadata Cloud Service instance:

  1. Confirm that the LIBKMS package is already installed, as follows:
    rpm -qa --last | grep libkmstdepkcs11
  2. Install a new version of LIBKMS, as follows:
    rpm -ivh libkms
  3. Use SQL*Plus to stop and restart all databases converted to customer-managed key management, as follows:
    shutdown immediate;
    startup;
  4. Ensure that all converted databases are using the new LIBKMS version. In the following command, replace <dbname> with the name of each converted database:
    for pid in $(ps aux | grep "<dbname>" | awk '{print $2;}'); do
      echo $pid
      sudo lsof -p $pid | grep kms | grep "pkcs11_[0-9A-Za-z.]*" | sort -u
    done | grep pkcs11
  5. Deinstall LIBKMS packages that are no longer being used by any database, as follows (see the combined sketch after this list):
    rpm -ev libkms
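
The following sketch combines steps 4 and 5: it lists the installed LIBKMS packages and the PKCS#11 library versions that running database background processes currently have open, so you can see which package versions are no longer in use. It is an illustration only; identifying database processes with pgrep -f ora_pmon is an assumption, and you can instead filter by database name as in step 4:

    # Installed LIBKMS packages, newest first
    rpm -qa --last | grep libkmstdepkcs11

    # PKCS#11 library versions currently loaded by database background processes
    for pid in $(pgrep -f ora_pmon); do
      sudo lsof -p "$pid" 2>/dev/null | grep -o 'pkcs11_[0-9A-Za-z.]*' | sort -u
    done | sort -u

    # Any libkmstdepkcs11 package whose version does not appear above is unused
    # and can be removed with rpm -ev.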