8.12 Running Manual Pre- and Post-Upgrade Checks in Combination with Oracle PCA Upgrader

As of Release 2.3.4, controller software updates must be installed using the Oracle PCA Upgrader. While the Upgrader tool automates a large number of prerequisite checks, there are still some tasks that must be performed manually before and after the upgrade process. The manual tasks are listed in this section. For more detailed information, please refer to the support note with Doc ID 2442664.1.

Start by running the Oracle PCA Upgrader in verify-only mode. The steps are described in Section 3.3.3, “Verifying Upgrade Readiness”. Fix any issues reported by the Upgrader and repeat the verification procedure until all checks complete without errors. Then, proceed to the manual pre-upgrade checks.

Performing Manual Pre-Upgrade Checks

  1. Check for the presence of multiple tenant groups.

    On the master management node, run the command pca-admin list tenant-group. If the output indicates there are multiple tenant groups, upgrade the compute nodes in one tenant group at a time.
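
    For example, running the command as root on the master management node produces output similar to the following. The tenant group names and states shown here are illustrative; your output will differ:

     # pca-admin list tenant-group

     Name                    Default      State
     ----                    -------      -----
     Rack1_ServerPool        True         ready
     myTenantGroup           False        ready
     ----------------
     2 rows displayed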

  2. Check that no external storage LUNs are connected to the management nodes.

    Verify that none of your external storage LUNs are visible from either management node. If there are no Fibre Channel cards installed in your Fabric Interconnects, you can skip this check. For more details, refer to the support note with Doc ID 2148589.1.
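
    As a quick spot check, assuming a standard multipath configuration, you can list the multipath devices on each management node; only LUNs belonging to the internal ZFS Storage Appliance should appear in the output. The support note contains the authoritative procedure.

     # multipath -ll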

  3. Check for multipath.conf customizations in the compute nodes.

    In Release 2.3.x, the ZFS stanza in the multipath.conf file is overwritten. However, any other customizations will be left unchanged. A backup of the file is saved as /etc/multipath.conf.pca.<timestamp>.
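
    After the upgrade, you can compare the active file with the saved backup to confirm that your customizations were preserved. The backup file name below is illustrative; use the actual timestamped file present on your system:

     # diff /etc/multipath.conf.pca.<timestamp> /etc/multipath.conf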

  4. Check for customized inet settings on the management nodes.

    Depending on the exact upgrade path you are following, xinetd may be upgraded. In this case, modified settings are automatically reset to default. Make a note of your custom inet settings and verify them after the upgrade process has completed. These setting changes are stored in the file /etc/postfix/main.cf.
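
    As a precaution, you can copy the relevant configuration files to a backup location before the upgrade, so they can be compared afterwards. The target directory below is arbitrary:

     # mkdir -p /root/preupgrade-config
     # cp -a /etc/xinetd.conf /etc/xinetd.d /etc/postfix/main.cf /root/preupgrade-config/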

  5. Check the health status of each server pool / o2cb cluster.

    In the Oracle VM Manager web UI, verify that none of the compute nodes or server pools – Rack1_ServerPool and any custom tenant groups – report a warning (yellow icon) or critical error (red icon).
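
    In addition to the web UI check, you can spot-check the o2cb cluster service on an individual compute node from the command line; the exact output depends on your Oracle VM Server version:

     # service o2cb status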

  6. Register the number of objects in the MySQL database.

    As the root user on the master management node, download and run the script number_of_jobs_and_objects.sh, which is attached to the support note with Doc ID 2442664.1. The script returns the number of objects and the number of jobs in the database. Make a note of these numbers.
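
    Assuming the script has been downloaded to the current working directory on the master management node, run it as follows:

     # sh number_of_jobs_and_objects.sh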

  7. Verify management node failover.

    Reboot the master management node to ensure that the standby management node is capable of taking over the master role.
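
    A minimal way to perform this check, assuming the pca-check-master utility is available in your release: confirm the current master role, reboot the node, and then verify from the other management node that it reports MASTER: True. The IP address shown is illustrative:

     # pca-check-master
     NODE: 192.168.4.3 MASTER: True
     # reboot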

  8. Check the NFS protocol used for the internal ZFS Storage Appliance.

    On both management nodes, run the command nfsstat -m. Each mounted share should use the NFSv4 protocol.
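
    The NFS version appears in the mount flags of each share in the command output. The share name and options below are illustrative; the key detail is vers=4:

     # nfsstat -m
     /nfs/shared_storage from 192.168.4.100:/export/MGMT_ROOT
      Flags: rw,relatime,vers=4,proto=tcp,...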

When you have completed all pre-upgrade checks and verified that the system is ready for upgrade, execute the controller software update. The steps are described in Section 3.3.4, “Executing a Controller Software Update”. After successfully upgrading the controller software, proceed to the manual post-upgrade checks for management nodes and compute nodes.

Performing Manual Post-Upgrade Checks on the Management Nodes

  1. Check the names of the Unmanaged Storage Arrays.

    If the names of the Unmanaged Storage Arrays are no longer displayed correctly after the upgrade, follow the workaround documented in the support note with Doc ID 2244130.1.

  2. Check for errors and warnings in Oracle VM.

    In the Oracle VM Manager web UI, verify that none of these occur:

    • Padlock icons against compute nodes or storage servers

    • Red error icons against compute nodes, repositories or storage servers

    • Yellow warning icons against compute nodes, repositories or storage servers

  3. Check the status of all components in the Oracle PCA Dashboard.

    Verify that a green check mark appears to the right of each hardware component in the Hardware View, and that no red error icons are present.

  4. Check networks.

    Verify that all networks – factory default and custom – are present and correctly configured.
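
    You can cross-check the network configuration from the command line on the master management node as well; the resulting list should include the factory default networks and any custom networks you created:

     # pca-admin list network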

Performing Manual Post-Upgrade Checks on the Compute Nodes

  1. Change the min_free_kbytes setting on all compute nodes.

    Refer to the support note with Doc ID 2314504.1. Follow the steps it describes, and reboot the compute node after the change has been made permanent.
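
    As a sketch of the general mechanism only (the placeholder below must be replaced with the value given in the support note), the setting is made permanent in /etc/sysctl.conf and can be verified after the reboot:

     # echo "vm.min_free_kbytes = <value from Doc ID 2314504.1>" >> /etc/sysctl.conf
     # sysctl -p
     # cat /proc/sys/vm/min_free_kbytes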

  2. Check that the fm package is installed on all compute nodes.

    Run the command rpm -q fm. If the package is not installed, run the following command:

     # chkconfig ipmi on
     # service ipmi start
     # LFMA_UPDATE=1 /usr/bin/yum install fm -q -y --nogpgcheck

  3. Perform a virtual machine test.

    Start a test virtual machine and verify that networks are functioning. Migrate the virtual machine to a compatible compute node to make sure that live migration works correctly.
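
    If you prefer to script this check, the migration can also be triggered through the Oracle VM Manager command line interface; the virtual machine and compute node names below are hypothetical:

     # ssh admin@localhost -p 10000
     OVM> migrate Vm name=testvm destServer=ovcacn08r1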