7.5 Running Manual Pre- and Post-Upgrade Checks in Combination with Oracle Private Cloud Appliance Upgrader

Controller software updates must be installed using the Oracle Private Cloud Appliance Upgrader. While the Upgrader tool automates a large number of prerequisite checks, some tasks must still be performed manually before and after the upgrade process. The manual tasks are listed in this section. For more detailed information, refer to the support note with Doc ID 2442664.1 for Controller Software release 2.3.4, or the support note with Doc ID 2605884.1 for Controller Software release 2.4.2.

Start by running the Oracle Private Cloud Appliance Upgrader in verify-only mode. The steps are described in Section 3.2.3, “Verifying Upgrade Readiness”. Fix any issues reported by the Upgrader and repeat the verification procedure until all checks complete without errors. Then, proceed to the manual pre-upgrade checks.

Performing Manual Pre-Upgrade Checks

  1. Verify the WebLogic password.

    On the master management node, run the following commands:

    # cd /u01/app/oracle/ovm-manager-3/bin
    # ./ovm_admin --listusers

    Enter the WebLogic password when prompted. If the password is incorrect, the ovm_admin command fails and exits with return code 1. If the password is correct, the command lists the users and exits with return code 0. In the event of an incorrect password, log in to the Oracle Private Cloud Appliance web interface and change the wls-weblogic password to the expected password.
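
    As a quick confirmation, you can inspect the shell exit status immediately after the command returns; 0 indicates a correct password and 1 an incorrect one, as described above:

    # ./ovm_admin --listusers
    # echo $?
    0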

  2. Check that no external storage LUNs are connected to the management nodes.

    Verify that none of your external storage LUNs are visible from either management node. For more details, refer to the support note with Doc ID 2148589.1.

    If your system is InfiniBand-based and there are no Fibre Channel cards installed in the Fabric Interconnects, you can skip this check.
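
    One way to verify that no external LUNs are visible is to inspect the device listings on each management node, for example with the following commands (a sketch, assuming the multipath and lsscsi utilities are installed; any device shown should belong to the appliance's internal storage, not to your external arrays):

    # multipath -ll
    # lsscsi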

  3. Check for customized inet settings on the management nodes.

    Depending on the exact upgrade path you are following, xinetd may be upgraded. In this case, modified settings are automatically reset to the defaults. Make a note of your custom inet settings and verify them after the upgrade process has completed. These custom settings are stored in the file /etc/postfix/main.cf.
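
    To make the post-upgrade comparison easier, you can take a copy of the file before the upgrade, for example (the backup file name is arbitrary):

    # cp /etc/postfix/main.cf /etc/postfix/main.cf.pre-upgrade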

  4. Register the number of objects in the MySQL database.

    As the root user on the master management node, download and run the script number_of_jobs_and_objects.sh. It is attached to the support note with Doc ID 2442664.1 for Controller Software release 2.3.4, or support note Doc ID 2605884.1 for Controller Software release 2.4.2. It returns the number of objects and the number of jobs in the database. Make a note of these numbers.
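
    Assuming the script has been downloaded to the current working directory on the master management node, it can be made executable and run as follows; the numbers it reports are the ones to record:

    # chmod +x number_of_jobs_and_objects.sh
    # ./number_of_jobs_and_objects.sh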

  5. Verify management node failover.

    Reboot the master management node to ensure that the standby management node is capable of taking over the master role.
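
    After the reboot, you can confirm that the roles have switched, for example with the pca-check-master utility (assuming it is available in your Controller Software release), which reports whether the node it runs on currently holds the master role:

    # pca-check-master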

  6. Check the NFS protocol used for the internal ZFS Storage Appliance.

    On both management nodes, run the command nfsstat -m. Each mounted share should use the NFSv4 protocol.
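
    The protocol version appears in the vers= mount option of the output, so a quick filter can be used; each share should report vers=4:

    # nfsstat -m | grep vers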

  7. Check the file /etc/yum.conf on both management nodes.

    If a proxy is configured for YUM, comment out or remove that line from the file.
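
    A proxy entry can be located and commented out in place, for example (a minimal sketch using sed; back up the file first if you want to restore the proxy setting later):

    # grep -n '^proxy' /etc/yum.conf
    # sed -i 's/^proxy/#proxy/' /etc/yum.conf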

When your system has passed all pre-upgrade checks and you have verified that it is ready for upgrade, execute the controller software update. The steps are described in Section 3.2.4, “Executing a Controller Software Update”. After successfully upgrading the controller software, proceed to the manual post-upgrade checks for management nodes and compute nodes.

Performing Manual Post-Upgrade Checks on the Management Nodes

  1. Check the names of the Unmanaged Storage Arrays.

    If the names of the Unmanaged Storage Arrays are no longer displayed correctly after the upgrade, follow the workaround documented in the support note with Doc ID 2244130.1.

  2. Check for errors and warnings in Oracle VM.

    In the Oracle VM Manager web UI, verify that none of the following are present:

    • Padlock icons against compute nodes or storage servers

    • Red error icons against compute nodes, repositories or storage servers

    • Yellow warning icons against compute nodes, repositories or storage servers

  3. Check the status of all components in the Oracle Private Cloud Appliance Dashboard.

    Verify that a green check mark appears to the right of each hardware component in the Hardware View, and that no red error icons are present.

  4. Check networks.

    Verify that all networks – factory default and custom – are present and correctly configured.
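
    The network configuration can also be reviewed from the command line, for example with the Oracle Private Cloud Appliance CLI on the master management node (assuming the pca-admin shell is available in your release):

    # pca-admin
    PCA> list network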

Performing Manual Post-Upgrade Checks on the Compute Nodes

  1. Change the min_free_kbytes setting on all compute nodes.

    Refer to the support note with Doc ID 2314504.1. Apply the steps it describes and reboot the compute node after the change has been made permanent.
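
    The currently active value can be inspected before and after applying the change, for example with sysctl (the required target value itself is specified in the support note):

    # sysctl vm.min_free_kbytes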

  2. Check that the fm package is installed on all compute nodes.

    Run the command rpm -q fm. If the package is not installed, run the following command:

    # chkconfig ipmi on; service ipmi start; LFMA_UPDATE=1 /usr/bin/yum install fm -q -y --nogpgcheck
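
    Afterwards, you can confirm that the package is now present by repeating the query:

    # rpm -q fm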

  3. Perform a virtual machine test.

    Start a test virtual machine and verify that networks are functioning. Migrate the virtual machine to a compatible compute node to make sure that live migration works correctly.