3.4 Updating the Oracle PCA Controller Software Through the Oracle PCA CLI


If you are updating the Oracle PCA Controller Software to Release 2.3.4 or newer, you must use the Oracle PCA Upgrader. The CLI instructions in this section do not apply to Release 2.3.4 and newer. Please refer to Section 3.3, “Oracle PCA 2.3 – Using the Oracle PCA Upgrader”.


On Oracle PCA management nodes the YUM repositories have been intentionally disabled and should not be enabled by the customer. Updates and upgrades of the management node operating system and software components must only be applied through the update mechanism described in this section.

Updates of the Oracle PCA controller software are performed from the Command Line Interface of the master management node. Software updates are a three-phase process. First, a zipped ISO containing the updated software must be downloaded from My Oracle Support and made available on an HTTP or FTP server. From there, the ISO is downloaded to the Oracle PCA internal storage appliance. When the download is complete and the software is unpacked in the appropriate directories, the update is activated and applied to each affected component.


If direct public access is not available within your data center and you make use of proxy servers to facilitate HTTP, HTTPS and FTP traffic, it may be necessary to edit the Oracle PCA system properties, using the CLI on each management node, to ensure that the correct proxy settings are specified for a download to succeed from the Internet. This depends on the network location from where the download is served. See Section 8.1, “Adding Proxy Settings for Oracle Private Cloud Appliance Updates” for more information.
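As an illustration, proxy settings are stored as Oracle PCA system properties and can be set from the CLI on each management node. The proxy host and port below are placeholders for your site's values; verify the exact property names against Section 8.1 before use.

```
# pca-admin
PCA> set system-property http_proxy http://proxy.example.com:8080
Status: Success
PCA> set system-property https_proxy http://proxy.example.com:8080
Status: Success
PCA> set system-property ftp_proxy http://proxy.example.com:8080
Status: Success
```

Because the properties are configured per management node, repeat the commands on the other management node as well.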


If the internal ZFS Storage Appliance contains customer-created LUNs, make sure they are not mapped to the default initiator group. See Customer Created LUNs Are Mapped to the Wrong Initiator Group in the Oracle Private Cloud Appliance Release Notes.

3.4.1 Optional: Rebooting the Management Node Cluster

Although not strictly necessary, it may be useful to reboot both management nodes before starting the appliance software update. This leaves the management node cluster in the cleanest possible state, ensures that no system resources are occupied unnecessarily, and eliminates potential interference from processes that have not completed properly.

Rebooting the Management Node Cluster

  1. Using SSH and an account with superuser privileges, log into both management nodes using the IP addresses you configured in the Network Setup tab of the Oracle PCA Dashboard. If you use two separate consoles you can view both side by side.


    The default root password is Welcome1. For security reasons, you must set a new password at your earliest convenience.

  2. Run the command pca-check-master on both management nodes to verify which node owns the master role.

  3. Reboot the management node that is NOT currently the master. Enter init 6 at the prompt.

  4. Ping the machine you rebooted. When it comes back online, reconnect using SSH and monitor system activity to determine when the secondary management node takes over the master role. Enter this command at the prompt: tail -f /var/log/messages. New system activity notifications will be output to the screen as they are logged.

  5. In the other SSH console, which is connected to the current active management node, enter init 6 to reboot the machine and initiate management node failover.

    The log messages in the other SSH console should now indicate when the secondary management node takes over the master role.

  6. Verify that both management nodes have come back online after reboot and that the master role has been transferred to the other manager. Run the command pca-check-master on both management nodes.

    If this is the case, proceed with the software update steps below.
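To double-check the role verification in steps 2 and 6, the output of pca-check-master can be parsed in a small helper. This is a hedged sketch: the output format "NODE: <ip> MASTER: True" is an assumption and should be verified against your appliance.

```shell
# Hedged sketch: decide from the output of `pca-check-master` whether
# the node that produced it currently holds the master role.
# The "MASTER: True" marker in the output is an assumption.
is_master() {
  case "$1" in
    *"MASTER: True"*)  echo "master"  ;;
    *)                 echo "standby" ;;
  esac
}

# Illustrative usage on a management node:
#   role=$(is_master "$(pca-check-master)")
```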

3.4.2 Prerequisites for Software Update to Release 2.3.x


You must NEVER attempt to run the update to Release 2.3.x if the currently installed controller software is Release 2.0.5 or earlier.

These earlier releases do not have the necessary mechanisms to verify that the update path is valid. Consequently, the update process will start and render both management nodes inaccessible, which may result in significant downtime as well as data loss or corruption.

Release 2.3.x of the Oracle Private Cloud Appliance Controller Software adds specific requirements to the update procedure because it involves upgrading Oracle VM from Release 3.2.x to Release 3.4.x. When you start the software update, a script validates the current appliance configuration and status. If the prerequisites are not met, the software update will not be executed.

To assist you in preparing for the software update and working through potential issues, a pre-upgrade script is shipped as part of the Oracle Private Cloud Appliance Release 2.3.x *.iso file. The checks performed by this script are in addition to the checks built into the software update code itself. You must run this script first, and make sure it completes without any failures. Only then should you proceed with the update procedure. For detailed instructions, refer to Section 8.8, “Environment Pre-Upgrade Validation and Software Update to Release 2.3.1-2.3.3”.

A critical requirement is that all compute nodes listed in the node database must be running Oracle VM Server 3.2.10 or 3.2.11. This requirement applies not only to the active compute nodes but also to those that are powered off, non-functional, or even no longer installed in the rack. Any compute node registered in the node database that cannot be brought online and upgraded to the correct version of Oracle VM Server must be decommissioned first. See Section 8.9, “Upgrading to Oracle Private Cloud Appliance Release 2.3.x with Non-Functional Compute Nodes” for additional information.
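As a quick pre-check, the version requirement above can be tested against a list of compute nodes and their Oracle VM Server versions. This is a hedged sketch: the two-column "name version" input layout is an assumption about how you collect the data (for example, by reducing the PCA CLI `list compute-node` output to those two columns).

```shell
# Hedged sketch: read "name version" lines on stdin and print the names
# of compute nodes NOT running Oracle VM Server 3.2.10 or 3.2.11.
# Any node printed here must be upgraded or decommissioned first.
flag_unsupported_nodes() {
  awk '$2 !~ /^3\.2\.1[01]$/ { print $1 }'
}

# Example:
#   printf 'ovcacn07r1 3.2.10\novcacn08r1 3.2.9\n' | flag_unsupported_nodes
#   # prints: ovcacn08r1
```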


If you execute the software update with one or more compute nodes still running Oracle VM Server 3.2.9 or older, these can only be recovered afterwards through a manual upgrade or by means of reprovisioning. Guest VMs must be backed up in advance.

Before starting the Oracle PCA Release 2.3.x Controller Software update, also make sure that the ILOM firmware on all Oracle Server X5-2 compute nodes has been upgraded to the minimum required version or a newer supported version. If compute nodes with older ILOM firmware are provisioned after the controller software update, provisioning failures will occur. For more information, see Compute Node ILOM Firmware Causes Provisioning Failure and OSA Image Corruption with Oracle PCA Release 2.3.x Controller Software in the Oracle Private Cloud Appliance Release Notes.

3.4.3 Monitoring the Update Process

Actively monitoring the appliance update process is not strictly necessary. However, it is useful for estimating the time remaining and for detecting delays and potential problems. Apart from the command line method described in the update procedure, you can use additional terminal windows to monitor the progress of the software update. Specifically, if you have an active SSH connection to management node 1, you can watch the update run on the ILOM of management node 2, and vice versa.

Monitoring the Update Process

  1. Open a terminal session and SSH into the active management node. From there, open another terminal session and connect to the secondary management node ILOM, which is updated first when you start the update. You can access the ILOMs over the internal appliance Ethernet network.


    The internal IP addresses are assigned as follows:

    • The internal host IP and ILOM IP of management node 1 are: and

    • The internal host IP and ILOM IP of management node 2 are: and

    ssh root@
    root@'s password:
    [root@ovcamn05r1 ~]# ssh
    Oracle(R) Integrated Lights Out Manager
    Version r94217
    Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved.
    Hostname: ilom-ovcamn06r1
  2. Start the ILOM console.

    -> start /SP/console
    Are you sure you want to start /SP/console (y/n)? y
    Serial console started.  To stop, type ESC (

    Messages from the BIOS and from the Oracle Linux and Oracle PCA installations appear in the console. Several reboots occur during the update process. Toward the end of the process a message appears that indicates the system is ready for customer use on the next reboot. At this point your terminal sessions are disconnected. You can log on to the other management node, which has taken over the master role, and follow the second management node update by connecting to its ILOM.

    If you were connected to the first management node and its ILOM initially, then connect to the other management node and its ILOM. Depending on which management node held the master role before the update, you may need to connect in reverse order.
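Since the console output signals progress through log messages, a small polling helper can flag milestones such as the "ready for customer use" message without requiring you to watch the console continuously. This is a hedged sketch; the file, pattern, and one-second poll interval are illustrative choices, not part of the documented procedure.

```shell
# Hedged sketch: poll a log file until a pattern appears, with a
# timeout in seconds. Can be pointed at /var/log/messages or an
# install log to detect progress markers during the update.
wait_for_pattern() {
  file=$1; pattern=$2; timeout=$3
  elapsed=0
  while ! grep -q "$pattern" "$file" 2>/dev/null; do
    [ "$elapsed" -ge "$timeout" ] && return 1
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 0
}

# Illustrative usage:
#   wait_for_pattern /var/log/messages 'ready for customer use' 7200
```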

3.4.4 Executing the Software Update

After you have carefully reviewed all the guidelines preceding this section, and you have confirmed that your environment is in the appropriate condition for the software update, you may begin updating the controller software. Follow the steps in this procedure to update the Oracle PCA Controller Software.

Updating the Controller Software

  1. Log into My Oracle Support and download the required Oracle PCA software update.

    You can find the update by searching for the product name Oracle Private Cloud Appliance, or for the Patch or Bug Number associated with the update you need.


    Read the information and follow the instructions in the readme file very carefully. It is crucial for a successful Oracle PCA Controller Software update and Oracle VM upgrade.

  2. Make the update, a zipped ISO, available on an HTTP or FTP server that is reachable from your Oracle PCA.

  3. Using SSH and an account with superuser privileges, log into the master management node.


    The default root password is Welcome1. For security reasons, you must set a new password at your earliest convenience.

  4. Connect to the management node using its IP address in the data center network, as you configured it in the Network Setup tab of the Oracle PCA Dashboard. For details, see Section 2.5, “Network Settings”.


    The data center IP address used in this procedure is an example.

    # ssh root@
    root@'s password:
    [root@ovcamn05r1 ~]#
  5. Launch the Oracle PCA command line interface.

    # pca-admin
    Welcome to PCA! Release: 2.2.1
  6. Download the ISO to your Oracle PCA. Confirm that you want to start the download.

    PCA> update appliance get_image http://myserver.org/images/pca-2.3.3-b999.iso.zip
    Are you sure [y/N]:y
    The update job has been submitted. Use "show task <task id>" to monitor the progress.
    Task_ID         Status  Progress Start_Time           Task_Name
    -------         ------  -------- ----------           ---------
    333dcc8b617f74  RUNNING None     01-17-2018 09:06:29  update_download_image
    1 row displayed
    Status: Success
  7. Check the progress of the ISO download. When the download is finished, proceed with the next step.

    PCA> show task 333dcc8b617f74
    Task_Name            update_download_image
    Status               SUCCESS
    Progress             100
    Start_Time           01-17-2018 09:06:29
    End_Time             01-17-2018 09:13:11
    Pid                  459257
    Result               None
    Status: Success

    After download, the image is unpacked and the files are copied to the /nfs/shared_storage directory, which is an NFS mount from the appliance internal storage on both management nodes.

  8. When the download has completed successfully, activate it by launching the update process.

    If your environment is very large or has a complex setup with many configuration objects stored in the management database, make sure that the terminal session can retain its connection for many hours or is protected against inadvertent interruptions. Ask Oracle for guidance, and refer to the readme file and Oracle Private Cloud Appliance Release Notes for timing details.

    PCA> update appliance install_image
    Are you sure [y/N]:y
    Status: Success

    When updating the Oracle PCA Controller Software to Release 2.3.x, Oracle VM Manager is unavailable for the entire duration of the update. The virtualized environment remains functional, but configuration changes and other management operations are not possible.

    Depending on the size of the Oracle VM installation and the number of installed compute nodes, it can take up to several hours for the Status: Success message to appear. Particularly in the case of the Release 2.3.x update, prerequisite checks must be performed, and the Oracle VM database must be exported in preparation for the upgrade.

    Once you issue this command, the update process is initiated as described in Section 1.7, “Oracle Private Cloud Appliance Software Update”.

  9. Check the progress of the software update.

    PCA> list update-task
    Mgmt_Node_IP    Update_Started       Update_Ended         Elapsed    Update status
    ------------    --------------       ------------         -------    -------------
                    01-17-2018 13:08:09  01-17-2018 13:49:04  0:40:55    Succeeded
                    01-17-2018 13:55:41  01-17-2018 14:37:10  0:41:29    Succeeded
    2 rows displayed
    Status: Success

    Due to the database transformation for the Oracle VM upgrade, the first management node can take several hours to complete the process. During this time, you can monitor progress by entering this command at the prompt: tail -f /tmp/install.log.


    At a certain point during the update process, the active management node is rebooted. As a result, the SSH connection is lost. In addition, this may cause the Dashboard to become unresponsive temporarily, and you may be required to log back in.

    When the master management node reboots, the secondary (updated) management node assumes the master role. The original master management node is then also updated and becomes the backup management node.

The software update process is automated to a certain degree, triggering a number of configuration tasks as it progresses. If any of these tasks should fail, the system writes entries in the error database and attempts to restart them every five minutes over a period of 30 minutes. At any given time the administrator can use the CLI to check for configuration task errors and rerun them if necessary. For details about these particular commands, see Section 4.2.24, “rerun”.
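The retry behavior described above can be sketched as a generic shell pattern. This is illustrative only, not the appliance's actual implementation: with interval=300 and attempts=6, it mirrors "every five minutes over a period of 30 minutes".

```shell
# Hedged sketch of a fixed-interval retry: run a command, retrying
# up to a maximum number of attempts, sleeping between attempts.
retry() {
  attempts=$1; interval=$2; shift 2
  n=1
  until "$@"; do
    [ "$n" -ge "$attempts" ] && return 1
    sleep "$interval"
    n=$((n + 1))
  done
  return 0
}

# Illustrative usage:
#   retry 6 300 some_configuration_task
```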


Once you have confirmed that the update process has completed, it is advised that you wait a further 30 minutes before starting another compute node or management node software update. This allows the necessary synchronization tasks to complete.

If you ignore the recommended delay between these update procedures, interference between existing and new tasks can cause problems with subsequent updates.