3 Upgrading Oracle Private Cloud Appliance

Due to the nature of the Private Cloud Appliance – where the term appliance is key – an upgrade is a delicate and complicated procedure that deals with different hardware and software components at the same time. It would be undesirable to take the appliance and the virtual environment it hosts out of service entirely for upgrading. Instead, upgrades can be executed in phases and scheduled for minimal downtime. The following table describes the sequence to perform Private Cloud Appliance upgrades.

For additional information about using the Upgrader Tool with software release 2.4.4, see support note Upgrader Tool - Prechecks and Postchecks (Doc ID 2785963.1).

Table 3-1 Sequential Break-Down of an Appliance Upgrade

1. Storage network upgrade

  You must perform this upgrade before you move to controller software version 2.4.4. You can skip this step if you have already performed this upgrade on your 2.4.3 system, or if your system was shipped from the factory with 2.4.3 software pre-installed. See Upgrading the Storage Network.

2. Management node software

  Install upgraded management software on both management nodes (mn05 and mn06). See Upgrading the Management Node Controller Software.

3. All firmware, in this order:

  1. management node firmware

  2. compute node firmware

  3. internal ZFS storage firmware

  4. switch firmware

    • Firmware for Cisco leaf, spine, and management switches

    • EPLD image for Cisco leaf, spine, and management switches

  See Upgrading Component Firmware.

4. Compute node software

  See Upgrading the Compute Node Software.

Before You Start Upgrading

Read and observe the critical information in this section before you begin any procedure to upgrade your Private Cloud Appliance.

All the software included in a given Private Cloud Appliance release is tested to work together and should be treated as one package. Consequently, no appliance component should be upgraded individually, unless Oracle provides specific instructions to do so. All Private Cloud Appliance software releases are downloaded as a single large .iso file, which includes all of the appliance software components listed in the table above.

Do not install any additional packages on your system besides those included in the Private Cloud Appliance .iso file, or packages recommended by your Oracle representative.

Note:

The appliance upgrade process must always be initiated from the active management node.

To view supported firmware versions for all releases of Private Cloud Appliance, see support note Firmware Policy for Private Cloud Appliance (PCA) (Doc ID 1610373.1).

Warnings and Cautions

Read and understand these warnings and cautions before you start the appliance update procedure. They help you avoid operational issues including data loss and significant downtime.

Attention:

Minimum Release

In this version of the Oracle Private Cloud Appliance Administration Guide, it is assumed that your system is currently running Controller Software release 2.4.3 prior to this software update.

If your system is currently running an earlier version, refer to the Updating Oracle Private Cloud Appliance chapter of the Oracle Private Cloud Appliance Administrator's Guide for Release 2.4.3. Follow the appropriate procedures and make sure that your appliance configuration is valid for the release 2.4.3 update before you continue.

Attention:

No Critical Operations

When updating the Private Cloud Appliance software, make sure that no provisioning operations occur and that any externally scheduled backups are suspended. Such operations could cause a software update or component firmware upgrade to fail and lead to system downtime.

Attention:

YUM Disabled

On Private Cloud Appliance management nodes, the YUM repositories have been intentionally disabled and should not be enabled by the customer. Updates and upgrades of the management node operating system and software components must be applied only through the update mechanism described in the documentation.

Caution:

Firmware Policy

To ensure that your Private Cloud Appliance configuration remains in a qualified state, take the required firmware upgrades into account when planning the controller software update. For more information, refer to Firmware Policy.

Caution:

No Backup

During controller software updates, backup operations must be prevented. The Oracle Private Cloud Appliance Upgrader disables crond and blocks backups.

Caution:

CA Certificate and Keystore

If you have generated custom keys using ovmkeytool.sh in a previous version of the Private Cloud Appliance software, you must regenerate the keys prior to updating the Controller Software. For instructions, refer to the support note PCA 2.3.x/2.4.x Upgrade not allowed if Certificates have been regenerated using ovmkeytool.sh. (Doc ID 2597439.1). See also Creating a Keystore.

Caution:

Proxy Settings

If direct public access is not available within your data center and you make use of proxy servers to facilitate HTTP, HTTPS and FTP traffic, it may be necessary to edit the Private Cloud Appliance system properties, using the CLI on each management node, to ensure that the correct proxy settings are specified for a download to succeed from the Internet. This depends on the network location from where the download is served. See Adding Proxy Settings for Oracle Private Cloud Appliance Updates for more information.
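
For reference, the following is a minimal sketch of setting proxy properties through the Private Cloud Appliance CLI on a management node. The proxy URL is a placeholder; verify the exact property names and syntax against Adding Proxy Settings for Oracle Private Cloud Appliance Updates before applying them.

  # pca-admin
  PCA> set system-property http_proxy http://proxy.example.com:8080
  PCA> set system-property https_proxy http://proxy.example.com:8080
  PCA> set system-property ftp_proxy http://proxy.example.com:8080
  PCA> exit

Repeat the same commands on the other management node, because the proxy settings are configured on each management node separately.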

Caution:

Custom LUNs on Internal Storage

If the internal ZFS Storage Appliance contains customer-created LUNs, make sure they are not mapped to the default initiator group.

See "Customer Created LUNs Are Mapped to the Wrong Initiator Group" in Known Limitations and Workarounds in the Oracle Private Cloud Appliance Release Notes.

Caution:

Oracle VM Availability During Update to Release 2.4.x

When updating the Oracle Private Cloud Appliance Controller Software to Release 2.4.x, Oracle VM Manager is unavailable for the entire duration of the update. The virtualized environment remains functional, but configuration changes and other management operations are not possible.

Attention:

Compute Node Upgrade ONLY Through Oracle Private Cloud Appliance

Compute nodes cannot be upgraded to the appropriate Oracle VM Server Release 3.4.x with the Oracle VM Manager web UI. You must upgrade them using the pca_upgrader tool within the Private Cloud Appliance.

To perform this upgrade procedure, follow the specific instructions in Upgrading the Compute Node Software.

Attention:

Do Not Override Oracle VM Global Update Settings

As stated in Guidelines, at the start of Managing the Oracle VM Virtual Infrastructure, the settings of the default server pool and custom tenant groups must not be modified through Oracle VM Manager. For compute node upgrade specifically, it is critical that the server pool option "Override Global Server Update Group" remains deselected. The compute node update process must use the repository defined globally. Overriding this will cause the update to fail.

Caution:

Post-Update Synchronization

Once you have confirmed that the update process has completed, it is advised that you wait a further 30 minutes before starting another compute node or management node software update. This allows the necessary synchronization tasks to complete.

If you ignore the recommended delay between these update procedures, there could be issues with further updating as a result of interference between existing and new tasks.

Backup Local Customizations

An update of the Private Cloud Appliance software stack may involve a complete re-imaging of the management nodes. Any customer-installed agents or customizations are overwritten in the process. Before applying new appliance software, back up all local customizations and prepare to re-apply them after the update has completed successfully.

Oracle Enterprise Manager Plug-in Users

If you use Oracle Enterprise Manager and the Oracle Enterprise Manager Plug-in to monitor your Private Cloud Appliance environment, always back up the oraInventory Agent data to /nfs/shared_storage before updating the controller software.

For detailed instructions, refer to Enterprise Manager Agent Recovery After Oracle PCA Upgrade in Oracle Enterprise Manager: Monitoring an Oracle Private Cloud Appliance.

Oracle Auto Service Request (ASR) Users

If you use Oracle Auto Service Request (ASR) in the Private Cloud Appliance environment, back up your ASR configuration according to ASR Backup and Restore in Oracle Auto Service Request: ASR Manager User's Guide for Linux and Solaris.

You can restore the data after the Private Cloud Appliance software update is complete.

Determine Firmware Versions

Use the following commands to determine the current version of firmware installed on a component.

  1. Using an account with superuser privileges, log in to the component.

    For Cisco switches you must log in as admin.

  2. Use the appropriate command to find the current firmware version of each component.

    • compute/management nodes

      > fwupdate list all

      To find the CX5 card firmware version, query the PCI bus:

      [root@ovcamn05r1 ~]# lspci | grep Mell
      3b:00.0 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
      3b:00.1 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]

      Use the mstflint command on the appropriate PCI bus to determine the card firmware version (a loop that automates this across all Mellanox devices is sketched after this list):

      [root@ovcamn05r1 ~]# mstflint -d /proc/bus/pci/3b/00.0 q
      Image type: FS4
      FW Version: 16.28.1002
      FW Release Date: 23.7.2020
      Product Version: 16.28.1002
    • ZFS Storage Appliances

      ovcasn02r1:> configuration version show
    • Cisco switches

      ovcasw21r1# show version 
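
The per-device commands above can be combined into a small helper loop. The following is an illustrative sketch, run as root on a compute or management node, that queries every Mellanox adapter reported on the PCI bus; the loop and output formatting are not part of the standard tooling.

  # Illustrative sketch: report the firmware version of every Mellanox adapter
  for dev in $(lspci | awk '/Mell/ {print $1}'); do
      bus="${dev%%:*}"    # e.g. 3b from 3b:00.0
      fn="${dev##*:}"     # e.g. 00.0
      echo "=== PCI ${dev} ==="
      mstflint -d "/proc/bus/pci/${bus}/${fn}" q | grep -E 'FW Version|Product Version'
  done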

Upgrading the Storage Network

The Oracle Private Cloud Appliance Storage Network has been available since Controller Software release 2.4.3. This feature enables virtual machines to access the internal ZFS Storage Appliance, and requires 60 TB of space on the ZFS Storage Appliance.

Functionality is built in to the storage network upgrade to ensure the upgrade process works properly. This includes three lock files that are set during the storage network upgrade and are designed to prevent specific behaviors that can interrupt the upgrade:

  • all_provisioning.LOCK: prevents provisioning actions on compute nodes during the upgrade.

  • fw_upgrade.LOCK: placed immediately following the successful completion of the first management node upgrade; prevents the use of CLI commands before the storage network upgrade is complete.

  • storage_network_upgrade.LOCK: prevents any customer-initiated changes to the spine or leaf switches while the upgrade is taking place.

The locks are removed at the completion of the storage network upgrade, regardless of success or failure.
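
If you need to confirm whether any of these locks are currently present, a file search such as the one below can help. The search roots are an assumption, because the exact lock file location is not documented here; adjust them for your environment.

  # Assumption: the lock files reside under the shared storage or /tmp on the management nodes
  find /nfs/shared_storage /tmp -maxdepth 3 -name '*.LOCK' 2>/dev/null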

Caution:

This is a disruptive upgrade during which the Cisco spine switches and the network interfaces on the ZFS Storage Appliance are reconfigured.

Performing Pre-checks and Upgrading the Oracle Private Cloud Appliance Storage Network

Before you begin, review all related My Oracle Support documents. See [PCA 2.4.4] Upgrader Tool - Prechecks and Postchecks (Doc ID 2785963.1).

  1. Check whether storage-network has already been upgraded.

    Caution:

    Do not perform the storage-network upgrade more than once. If the storage-network upgrade was already completed on Private Cloud Appliance 2.4.3, do not upgrade the storage-network again now.

    See [PCA 2.4.4] Check Whether the Storage Network Upgrade Has Already Been Completed (Doc ID 2848319.1). Failure to follow these instructions could result in data loss.

    1. On the master management node, run the pca_upgrader check.

      # pca_upgrader -V -t management -c ovcamnXXr1 -g 2.4.4 -l http://path-to-iso/ovca-2.4.4-b000.iso.zip
    2. Check the pca_upgrader log file.

      Search for "Storage Network Upgrade" in /nfs/shared_storage/pca_upgrader/log/pca_upgrader_yyyy_mm_dd-hh.mm.ss.log.

      If you see a line like the following, indicating that the Storage Network Upgrade Check passed, stop: do not perform the remainder of this procedure. (A grep command that locates this entry is sketched after this procedure.)

      [2022-01-22 14:31:09 1963446] INFO (precheck:116) [Storage Network Upgrade Check 
      (Check that the storage network upgrade has been completed.)] Passed

      Caution:

      Continue with this procedure only if the result of the Storage Network Upgrade Check is Failed.

  2. Perform the procedure given in [PCA 2.x] Upgrading Storage Network Umount Failure (Doc ID 2878677.1).

    The storage network upgrade can fail, and data can be lost, when the upgrade attempts to umount /nfs/shared_storage from the management IP address. To avoid this failure, apply the patch provided in the above-referenced My Oracle Support document. This patch enables the upgrade to succeed even if it encounters umount failures.

    Caution:

    Failure to perform this procedure can result in rollback of configurations of spine switches and ZFSSA, causing failure of storage network upgrade and possible data loss.

  3. Log in to the management node and run the verify command.

    pca_upgrader -V -t storage-network
  4. If you use the optional ASRM and OEM agents, stop them.

    service asrm stop
    service gcstartup stop
  5. Upgrade the storage network.

    pca_upgrader -U -t storage-network
    PCA Rack Type: PCA X8_BASE.
      Please refer to log file /var/log/pca_upgrader_<date>-<time>.log for more details.
    
    Beginning PCA Storage Network Pre-Upgrade Checks...
    
    Rack Type Check                                                        1/14
    PCA Version Check                                                      2/14
    Upgrade Locks Check                                                    3/14
    Backup Tasks Check                                                     4/14
    Storage Port Channel Status Check                                      5/14
    Spine Switch Firmware Check                                            6/14
    Check ZFSSA MGMT Network                                               7/14
    Check Cluster Status                                                   8/14
    AK Firmware Version Check                                              9/14
    ZFSSA Resilvering Jobs Check                                          10/14
    iSCSI Target Check                                                    11/14
    ZFSSA Hardware Error Check                                            12/14
    Check ZFSSA Default Shares                                            13/14
    ZFSSA Network Configuration Check                                     14/14
    
    PCA Storage Network Pre-Upgrade Checks completed after 0 minutes
    
    Beginning PCA Storage Network Upgrade
    Disable PCA Backups                                                     1/6
    Take Spine Switch Backup                                                2/6
    Take ZFSSA Configuration Backup                                         3/6
    Place Storage Network Upgrade Locks                                     4/6
    Perform Storage Network Upgrade                                         5/6
    Remove Firmware Upgrade Lock                                            6/6
    Remove PCA Upgrade Locks                                                  1
    Re-enable PCA Backups                                                     2
    PCA Storage Network Upgrade completed after 6 minutes
    
    Beginning PCA Storage Network Post-Upgrade Checks...
    
    PCA Storage Network Post-Upgrade Checks completed after 0 minutes
    
    ---------------------------------------------------------------------------
    PCA Storage Network Pre-Upgrade Checks                               Passed
    ---------------------------------------------------------------------------
    Rack Type Check                                                      Passed
    PCA Version Check                                                    Passed
    Upgrade Locks Check                                                  Passed
    Backup Tasks Check                                                   Passed
    Storage Port Channel Status Check                                    Passed
    Spine Switch Firmware Check                                          Passed
    Check ZFSSA MGMT Network                                             Passed
    Check Cluster Status                                                 Passed
    AK Firmware Version Check                                            Passed
    ZFSSA Resilvering Jobs Check                                         Passed
    iSCSI Target Check                                                   Passed
    ZFSSA Hardware Error Check                                           Passed
    Check ZFSSA Default Shares                                           Passed
    ZFSSA Network Configuration Check                                    Passed
    ---------------------------------------------------------------------------
    PCA Storage Network Upgrade                                          Passed
    ---------------------------------------------------------------------------
    Disable PCA Backups                                                  Passed
    Take Spine Switch Backup                                             Passed
    Take ZFSSA Configuration Backup                                      Passed
    Place Storage Network Upgrade Locks                                  Passed
    Perform Storage Network Upgrade                                      Passed
    Remove Firmware Upgrade Lock                                         Passed
    ---------------------------------------------------------------------------
    PCA Storage Network Post-Upgrade Checks                              Passed
    ---------------------------------------------------------------------------
    ---------------------------------------------------------------------------
    Overall Status                                                       Passed
    ---------------------------------------------------------------------------
    PCA Storage Network Pre-Upgrade Checks                               Passed
    PCA Storage Network Upgrade                                          Passed
    PCA Storage Network Post-Upgrade Checks                              Passed
      Please refer to log file /var/log/pca_upgrader_<date>-<time>.log for more details.

    Note:

    After the successful upgrade of management nodes, an upgrade lock is left in place. This lock is intentional to ensure that the storage network upgrade is performed before attempting to upgrade the compute nodes.

  6. If you use the optional ASRM and OEM agents, restart them.

    service asrm start
    service gcstartup start
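
As referenced in step 1, you can pull the result of the Storage Network Upgrade Check directly from the Upgrader log once a verification run has completed. The command below is a minimal sketch; the wildcard matches the time-stamped log file names used by the Upgrader.

  # Show the most recent Storage Network Upgrade Check results from the Upgrader logs
  grep "Storage Network Upgrade" /nfs/shared_storage/pca_upgrader/log/pca_upgrader_*.log | tail -5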

Upgrading the Management Node Controller Software

Attention:

UPGRADE BOTH MANAGEMENT NODES CONSECUTIVELY

With the Oracle Private Cloud Appliance Upgrader, the two management node upgrade processes are theoretically separated. Each management node upgrade is initiated by a single command and managed through the Upgrader, which invokes the native Oracle VM Manager upgrade mechanisms. However, you must treat the upgrade of the two management nodes as a single operation.

During the management node upgrade, the high-availability (HA) configuration of the management node cluster is temporarily broken. To restore HA management functionality and mitigate the risk of data corruption, it is critical that you start the upgrade of the second management node immediately after a successful upgrade of the first management node.

Attention:

NO MANAGEMENT OPERATIONS DURING UPGRADE

The Oracle Private Cloud Appliance Upgrader manages the entire process to upgrade both management nodes in the appliance. Under no circumstances should you perform any management operations – through the Oracle Private Cloud Appliance Dashboard or CLI, or Oracle VM Manager – while the Upgrader process is running, and until both management nodes have been successfully upgraded through the Upgrader. Although certain management functions cannot be programmatically locked during the upgrade, they are not supported, and are likely to cause configuration inconsistencies and considerable repair downtime.

Once the upgrade has been successfully completed on both management nodes, you can safely execute appliance management tasks and configuration of the virtualized environment.

As of Release 2.3.4, a separate command line tool is provided to manage the Controller Software upgrade process. The Oracle Private Cloud Appliance Upgrader requires only a couple of commands to execute several sets of tasks, which were scripted or manual steps in previous releases. The Upgrader is more robust and easily extensible, and provides a much better overall upgrade experience.

A more detailed description of the Oracle Private Cloud Appliance Upgrader is included in the introductory chapter of this book. Refer to Oracle Private Cloud Appliance Upgrader.

Rebooting the Management Node Cluster

It is advised to reboot both management nodes before starting the appliance software upgrade. This leaves the management node cluster in the cleanest possible state, ensures that no system resources are occupied unnecessarily, and eliminates potential interference from processes that have not completed properly.

Rebooting the Management Node Cluster

  1. Using SSH and an account with superuser privileges, log into both management nodes using the IP addresses you configured in the Network Setup tab of the Oracle Private Cloud Appliance Dashboard. If you use two separate consoles you can view both side by side.

  2. Run the command pca-check-master on both management nodes to verify which node owns the active role.

  3. Reboot the management node that is NOT currently the active node. Enter init 6 at the prompt.

  4. Ping the machine you rebooted. When it comes back online, reconnect using SSH and monitor system activity to determine when the secondary management node takes over the active role. Enter this command at the prompt: tail -f /var/log/messages. New system activity notifications will be output to the screen as they are logged.

  5. In the other SSH console, which is connected to the current active management node, enter init 6 to reboot the machine and initiate management node failover.

    The log messages in the other SSH console should now indicate when the secondary management node takes over the active role.

  6. Verify that both management nodes have come back online after reboot and that the active role has been transferred to the other manager. Run the command pca-check-master on both management nodes (a simple polling loop is sketched after this procedure).

    If this is the case, proceed with the software upgrade steps below.
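
As mentioned in step 6, you can also watch the active role move between the nodes from a separate terminal session. The loop below is only a convenience sketch built around the pca-check-master command used in this procedure.

  # Poll the cluster role every 10 seconds; stop with Ctrl+C
  while true; do
      date
      pca-check-master
      sleep 10
  done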

Installing the Oracle Private Cloud Appliance Upgrader

Always download and install the latest version of the Oracle Private Cloud Appliance Upgrader before you execute any verification or upgrade procedures.

Downloading and Installing the Latest Version of the Oracle Private Cloud Appliance Upgrader

  1. Confirm which version of the Oracle Private Cloud Appliance Upgrader is already on your system.

    [root@ovcamn05r1 ~]# yum search pca_upgrader
    pca_upgrader-version.el6.noarch
  2. Log into My Oracle Support and download the latest version of the Oracle Private Cloud Appliance Upgrader to a secure location, if it is newer than the version on your system.

    The Upgrader can be found under patch ID 32982108, and is included in part 1 of a series of downloadable zip files. Any updated versions of the Upgrader will be made available in the same location.

    To obtain the Upgrader package, download this zip file and extract the file pca_upgrader-version.el6.noarch.rpm.

    Once you have downloaded and extracted the series of Upgrader zip files, execute the RUN_ME_FIRST.sh script to assemble the ISO image from the zip files.

  3. If you downloaded a newer version of the Oracle Private Cloud Appliance Upgrader, you must upgrade to the newer version. From the directory where the *.rpm package was saved, run the command yum update pca_upgrader-version.el6.noarch.rpm.

    Verify that the new version was installed using the yum search pca_upgrader command, or query the RPM database directly as shown in the sketch after this procedure.

  4. Copy the downloaded *.rpm package to the active management node and install it.

    Note:

    Install the pca_upgrader package that matches the Oracle Linux release currently running on the management node where you install it. For example, when you upgrade the first management node to version 2.4.4, the node you run the Upgrader from has not yet been upgraded, so you must use the .el6 version of the package. When you upgrade the second management node, you run the Upgrader from the first management node, which has already been upgraded, so you must use the .el7 version for that installation.

    [root@ovcamn05r1 ~]# pca-check-master
    NODE: 192.168.4.3  MASTER: True
    [root@ovcamn05r1 tmp]# yum install pca_upgrader-2.0.eln.noarch.rpm
    Preparing...########################################### [100%]
    1:pca_upgrader########################################### [100%]

    Caution:

    Always download and use the latest available version of the Oracle Private Cloud Appliance Upgrader.

  5. Install the *.rpm upgrade on the second management node.
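
As an alternative to the yum search command used above, you can query the RPM database directly on either management node to confirm the exact Upgrader package version that is installed.

  # Show the installed pca_upgrader package version on this node
  rpm -qa | grep pca_upgrader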

Verifying Upgrade Readiness

The Oracle Private Cloud Appliance Upgrader has a verify-only mode. It allows you to run all the pre-checks defined for a management node upgrade without proceeding to the actual upgrade steps. The terminal output and log file report any issues you need to fix before the system is eligible for the next Controller Software upgrade.

Note:

The Oracle Private Cloud Appliance Upgrader cannot be stopped by means of a keyboard interrupt or by closing the terminal session. After a keyboard interrupt (Ctrl+C) the Upgrader continues to execute all pre-checks. If the terminal session is closed, the Upgrader continues as a background process.

If the Upgrader process needs to be terminated, enter the command pca_upgrader --kill.

Verifying the Upgrade Readiness of the Oracle Private Cloud Appliance

  1. Go to Oracle VM Manager and make sure that all compute nodes are in Running status. If any server is not in Running status, resolve the issue before proceeding.

    For help resolving issues to correct the compute node status, refer to the support note OVM 3.4.1: OVM Manager showing the Oracle VM Server status as "Starting" although it is running (Doc ID 2245197.1) or contact Oracle Support.

  2. Perform the required manual pre-upgrade checks. Refer to Running Manual Pre- and Post-Upgrade Checks in Combination with Oracle Private Cloud Appliance Upgrader for instructions.

  3. Log in to My Oracle Support and download the required Private Cloud Appliance software update.

    You can find the update by searching for the product name "Oracle Private Cloud Appliance", or for the Patch or Bug Number associated with the update you need.

    Caution:

    Read the information and follow the instructions in the readme file very carefully. It is crucial for a successful Oracle Private Cloud Appliance Controller Software upgrade and Oracle VM upgrade.

  4. Make the update, a zipped ISO, available on an HTTP or FTP server that is reachable from your Private Cloud Appliance. Alternatively, if upgrade time is a major concern, you can download the ISO file to the local file system on both management nodes. This reduces the upgrade time for the management nodes, but has no effect on the time required to upgrade the compute nodes or the Oracle VM database.

    The Oracle Private Cloud Appliance Upgrader downloads the ISO from the specified location and unpacks it on the management node automatically at runtime.

  5. Using SSH and an account with superuser privileges, log in to the active management node through its individually assigned IP address, not the shared virtual IP.

    Note:

    During the upgrade process, the interface with the shared virtual IP address is shut down. Therefore, you must log in using the individually assigned IP address of the management node.

  6. From the active management node, run the Oracle Private Cloud Appliance Upgrader in verify-only mode. The target of the command must be the stand-by management node.

    Note:

    The console output below is an example. You may see a different output, depending on the specific architecture and configuration of your appliance.

    [root@ovcamn05r1 ~]# pca-check-master
    NODE: 192.168.4.3  MASTER: True
    
    [root@ovcamn05r1 ~]# pca_upgrader -V -t management -c ovcamn06r1 -g 2.4.4 \
    -l http://path-to-iso/ovca-2.4.4-b000.iso.zip
    
    PCA Rack Type: PCA X8_BASE.
    
    Please refer to log file
    /nfs/shared_storage/pca_upgrader/log/pca_upgrader_date-time.log
    for more details.
    
    Beginning PCA Management Node Pre-Upgrade Checks...
    
    Validate the Image Provided                                            1/44
    Internal ZFSSA Available Space Check                                   2/44
    MN Disk and Shared Storage Space Check                                 3/44
    [...]
    Oracle VM Minimum Version Check                                       41/44
    OS Check                                                              42/44
    OSA Disabled Check                                                    43/44
    ZFSSA Network Configuration Check                                     44/44
    
    PCA Management Node Pre-Upgrade Checks completed after 0 minutes
    
    ---------------------------------------------------------------------------
    PCA Management Node Pre-Upgrade Checks                               Passed
    ---------------------------------------------------------------------------
    Validate the Image Provided                                          Passed
    Internal ZFSSA Available Space Check                                 Passed
    [...]
    OS Check                                                             Passed
    Password Check                                                       Passed
    OSA Disabled Check                                                   Passed
    ---------------------------------------------------------------------------
    Overall Status                                                       Passed
    ---------------------------------------------------------------------------
  7. As the verification process runs, check the console output for test progress. When all pre-checks have been completed, a summary is displayed. A complete overview of the verification process is saved in the file /nfs/shared_storage/pca_upgrader/log/pca_upgrader_date-time.log.

    Some pre-checks may result in a warning. These warnings are unlikely to cause issues, and therefore do not prevent you from executing the upgrade, but they do indicate a situation that should be investigated. When an upgrade command is issued and warnings are present, the administrator is prompted to either proceed with the upgrade or quit and investigate the warnings first.

  8. If any pre-checks have failed, consult the log file for details; a grep command to filter for failures is sketched after this procedure. Fix the reported problems, then execute the verify command again.

    Note:

    If errors related to SSL certificates are reported, check whether these have been re-generated using ovmkeytool.sh. This can cause inconsistencies between the information stored in the Wallet and the actual location of your certificate. For detailed information and instructions to resolve the issue, refer to the support note Upgrade not allowed if Certificates have been regenerated using ovmkeytool.sh. (Doc ID 2597439.1).

  9. Repeat this process until no more pre-check failures are reported. When the system passes all pre-checks, it is ready for the Controller Software upgrade.
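
As noted in step 8, failed or flagged pre-checks are easiest to find by filtering the verification log. The command below is a simple sketch; substitute the date and time of your Upgrader run in the file name.

  # List pre-checks that failed or raised warnings in the verification log
  grep -iE "failed|warning" /nfs/shared_storage/pca_upgrader/log/pca_upgrader_date-time.log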

Executing a Controller Software Upgrade

During a Controller Software upgrade, the virtualized environment does not accept any management operations. Ensure the storage network upgrade has been completed, then upgrade the management node cluster, followed by the firmware upgrade on rack components, and finally, upgrade the compute nodes in phases. When you have planned all these upgrade tasks, and when you have successfully completed the upgrade readiness verification, your environment is ready for a Controller Software upgrade and any additional upgrades.

No upgrade procedure can be executed without completing the pre-checks. Therefore, the upgrade command first executes the same steps as in Verifying Upgrade Readiness. After successful verification, the upgrade steps are started.

Note:

The console output shown throughout this section is an example. You may see different output, depending on the specific architecture and configuration of your appliance.

Note:

The Oracle Private Cloud Appliance Upgrader cannot be stopped by means of a keyboard interrupt or by closing the terminal session.

After a keyboard interrupt (Ctrl+C) the Upgrader continues the current phase of the process. If pre-checks are in progress, they are all completed, but the upgrade phase does not start automatically after successful completion of all pre-checks. If the upgrade phase is in progress at the time of the keyboard interrupt, it continues until upgrade either completes successfully or fails.

If the terminal session is closed, the Upgrader continues as a background process.

If the Upgrader process needs to be terminated, enter this command: pca_upgrader --kill.

Upgrading the Oracle Private Cloud Appliance Controller Software

  1. Using SSH and an account with superuser privileges, log in to the active management node through its individually assigned IP address, not the shared virtual IP.

    Note:

    During the upgrade process, the interface with the shared virtual IP address is shut down. Therefore, you must log in using the individually assigned IP address of the management node.

    Attention:

    NO MANAGEMENT OPERATIONS DURING UPGRADE

    Under no circumstances should you perform any management operations – through the Oracle Private Cloud Appliance Dashboard or CLI, or Oracle VM Manager – while the Upgrader process is running, and until both management nodes have been successfully upgraded through the Upgrader.

  2. From the active management node, run the Oracle Private Cloud Appliance Upgrader with the required upgrade parameters. The target of the command must be the stand-by management node.

    [root@ovcamn05r1 ~]# pca-check-master
    NODE: 192.168.4.3  MASTER: True
    
    [root@ovcamn05r1 ~]# pca_upgrader -U -t management -c ovcamn06r1 -g 2.4.4 \
    -l http://path-to-iso/ovca-2.4.4-b301.iso.zip
    
    PCA Rack Type: PCA X8_BASE.
    
    Please refer to log file
    /nfs/shared_storage/pca_upgrader/log/pca_upgrader_date-time.log
    for more details.
    
    Beginning PCA Management Node Pre-Upgrade Checks...
    [...]
    
    ***********************************************************************
    Warning: The management precheck completed with warnings. 
    It is safe to continue with the management upgrade from this point 
    or the upgrade can be halted to investigate the warnings. 
    ************************************************************************
    Do you want to continue? [y/n]: y

    After successfully completing the pre-checks, the Upgrader initiates the Controller Software upgrade on the other management node. If errors occur during the upgrade phase, tasks are rolled back and the system is returned to its original state from before the upgrade command.

    Rollback works for errors that occur during these steps:

    Table 3-2 Steps That Support Upgrader Rollback

    Upgrading From Release 2.4.3:

    • setting up the PXE directories

    • disabling the Oracle Private Cloud Appliance backups

    • downloading the ISO

    • generating PXE and Kickstart files

    • taking an Oracle VM backup

    • breaking the Oracle Private Cloud Appliance HA model

    Upgrading From Release 2.4.4:

    • downloading the ISO

    • setting up the YUM repository

    • taking an Oracle VM backup

    • breaking the Oracle Private Cloud Appliance HA model

    Beginning PCA Management Node upgrade for ovcamn06r1
    Setup PXE Directories                                                  1/19
    Disable PCA Backups                                                    2/19
    Download ISO                                                           3/19
    Generate PXE and Kickstart Files                                       4/19
    Take OVM Backup                                                        5/19
    ...
    
    PCA Management Node upgrade of ovcamn06r1 completed after 83 minutes
    
    Beginning PCA Post-Upgrade Checks...
    OVM Manager Cache Size Check                                            1/1
    PCA Post-Upgrade Checks completed after 1 minutes
    
    ---------------------------------------------------------------------------
    PCA Management Node Pre-Upgrade Checks                               Passed
    ---------------------------------------------------------------------------
    Validate the Image Provided                                          Passed
    Rack Type Check                                                      Passed
    Internal ZFS 2T Space Check                                          Passed
    [...]
    
    ---------------------------------------------------------------------------
    PCA Management Node Upgrade                                          Passed
    ---------------------------------------------------------------------------
    Setup PXE Directories                                                Passed
    [...]
    Restore PCA Backups                                                  Passed
    Upgrade is complete                                                  Passed
    [...]
    ---------------------------------------------------------------------------
    PCA Post-Upgrade Checks                                              Passed
    ---------------------------------------------------------------------------
    OVM Manager Cache Size Check                                         Passed
    ---------------------------------------------------------------------------
    Overall Status                                                       Passed
    ---------------------------------------------------------------------------
    PCA Management Node Pre-Upgrade Checks                               Passed
    PCA Management Node Upgrade                                          Passed
    PCA Post-Upgrade Checks                                              Passed

    Tip:

    When the ISO is copied to the local file system of both management nodes, the management node upgrade time is shortened. The duration of the entire upgrade process depends heavily on the size of the environment: the number of compute nodes and their configuration, the size of the Oracle VM database, and so on. It can take from 1.5 hours to several hours.

    If you choose to copy the ISO locally, replace the location URL in the pca_upgrader command with -l file:///path-to-iso/ovca-2.4.4-b301.iso.zip.

  3. Monitor the progress of the upgrade tasks. The console output provides a summary of each executed task. If you need more details on a task, or if an error occurs, consult the log file. You can track the logging activity in a separate console window by entering the command tail -f /nfs/shared_storage/pca_upgrader/log/pca_upgrader_date-time.log.

    Note:

    Once the upgrade tasks have started, it is no longer possible to perform a rollback to the previous state.

    # tail -f /nfs/shared_storage/pca_upgrader/log/pca_upgrader_date-time.log

    When the upgrade tasks have been completed successfully, the active management node is rebooted, and the upgraded management node assumes the active role. The new active management node's operating system is now up-to-date, and it runs the new Controller Software version and upgraded Oracle VM Manager installation.

    Tip:

    Rebooting the management node is expected to take up to 10 minutes.

    To monitor the reboot process and make sure the node comes back online as expected, log in to the rebooting management node ILOM.

    Broadcast message from root@ovcamn05r1 (pts/0) (Fri May  7 00:23:25 2021):
    Management Node upgrade succeeded. The master manager will be rebooted to initiate failover in one minute.

    Caution:

    Do not remove any files created during the upgrade process until completion of the second management node upgrade.

  4. Log into the upgraded management node, which has now become the active management node. Use its individually assigned IP address, not the shared virtual IP.

    [root@ovcamn06r1 ~]# pca-check-master
    NODE: 192.168.4.4  MASTER: True
    
    [root@ovcamn06r1 ~]# head /etc/ovca-info
    ==== Begin build info ====
    date: 2019-09-30
    release: 2.4.4
    build: 301
    === Begin compute node info ===
    compute_ovm_server_version: 3.4.7
    compute_ovm_server_build: 2.4.4-150
    compute_rpms_added:
      osc-oracle-s7k-2.1.2-4.el7.noarch.rpm
      ovca-support-2.4.4-97.el7.noarch.rpm

    Attention:

    NO MANAGEMENT OPERATIONS DURING UPGRADE

    Under no circumstances should you perform any management operations – through the Oracle Private Cloud Appliance Dashboard or CLI, or Oracle VM Manager – while the Upgrader process is running, and until both management nodes have been successfully upgraded through the Upgrader.

  5. From the new active management node, run the Oracle Private Cloud Appliance Upgrader command again. The target of the command must be the stand-by management node, which is the original active management node from where you executed the command for the first run.

    [root@ovcamn06r1 ~]# pca_upgrader -U -t management -c ovcamn05r1 -g 2.4.4 \
    -l http://path-to-iso/ovca-2.4.4-b301.iso.zip
    
    PCA Rack Type: PCA X8_BASE.
    
    Please refer to log file
    /nfs/shared_storage/pca_upgrader/log/pca_upgrader_date-time.log
    for more details.
    
    Beginning PCA Management Node Pre-Upgrade Checks...
    [...]
    
    ***********************************************************************
    Warning: The management precheck completed with warnings. 
    It is safe to continue with the management upgrade from this point 
    or the upgrade can be halted to investigate the warnings. 
    ************************************************************************
    Do you want to continue? [y/n]: y
    
    Beginning PCA Management Node upgrade for ovcamn05r1
    [...]
    
    ---------------------------------------------------------------------------
    Overall Status                                                       Passed
    ---------------------------------------------------------------------------
    PCA Management Node Pre-Upgrade Checks                               Passed
    PCA Management Node Upgrade                                          Passed
    PCA Post-Upgrade Checks                                              Passed
    
    Broadcast message from root@ovcamn05r1 (pts/3) (Mon Apr 19 18:07:54 2021):
    Management Node upgrade succeeded. The master manager will be rebooted to initiate failover in one minute.

    The upgrade steps are executed the same way as during the first run. When the second management node is rebooted, the process is complete. At this point, both management nodes run the updated Oracle Linux operating system, Oracle Private Cloud Appliance Controller Software, and Oracle VM Manager. The high-availability cluster configuration of the management nodes is restored, and all Oracle Private Cloud Appliance and Oracle VM Manager management functionality is operational again. However, do not perform any management operations until you have completed the required manual post-upgrade checks. A quick way to confirm the installed build on both management nodes is sketched after this procedure.

    Tip:

    If the first management node is inadvertently rebooted at this point, the upgrade fails on the second management node. For more information, see "Inadvertent Reboot of Stand-by Management Node During Upgrade Suspends Upgrade" in Known Limitations and Workarounds in the Oracle Private Cloud Appliance Release Notes.

  6. Perform the required manual post-upgrade checks on management nodes and compute nodes. Refer to Running Manual Pre- and Post-Upgrade Checks in Combination with Oracle Private Cloud Appliance Upgrader for instructions.
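
As mentioned at the end of step 5, you can confirm that both management nodes report the new release by reading /etc/ovca-info on each node. The loop below is a sketch that assumes root SSH access from the node where you run it to both management nodes; adjust host names and access method to match your environment.

  # Assumption: root SSH access to both management nodes from where this runs
  for node in ovcamn05r1 ovcamn06r1; do
      echo "=== ${node} ==="
      ssh root@${node} "grep -E 'release|build' /etc/ovca-info | head -4"
  done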

Upgrading Component Firmware

All the software components in a given Private Cloud Appliance release are designed to work together. As a general rule, no individual appliance component should be upgraded. If a firmware upgrade is required for one or more components, the correct version is distributed inside the Private Cloud Appliance .iso file you downloaded from My Oracle Support. When the image file is unpacked on the appliance internal shared storage, the firmware is located on the management nodes at /nfs/shared_storage/mgmt_image/firmware/.
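
To see which firmware bundles were delivered with the release you unpacked, you can simply list the firmware directory on a management node; the exact subdirectory layout depends on the release.

  # List the firmware bundles unpacked from the controller software ISO
  ls -lR /nfs/shared_storage/mgmt_image/firmware/ | more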

Attention:

Do not perform any compute node provisioning operations during firmware upgrades.

Caution:

For certain services it is necessary to upgrade the Hardware Management Pack after a Controller Software upgrade. For additional information, refer to "Some Services Require an Upgrade of Hardware Management Pack" in Known Limitations and Workarounds in the Oracle Private Cloud Appliance Release Notes.

If a specific or additional procedure to upgrade the firmware of a Private Cloud Appliance hardware component is available, it appears in this section. For components not listed here, you may follow the instructions provided in the product documentation of the subcomponent. An overview of the documentation for appliance components can be found in the Preface of this book and on Oracle Private Cloud Appliance Documentation for Release 2.4.x.

Firmware Policy

To improve Oracle Private Cloud Appliance supportability, reliability and security, Oracle has introduced a standardized approach to component firmware. The general rule remains unchanged: Components and their respective firmware are designed to work together, and therefore should not be upgraded separately. However, the firmware upgrades, which are provided as part of the .iso file of a given controller software release, are no longer optional.

As part of the test process prior to a software release, combinations of component firmware are tested on all applicable hardware platforms. This allows Oracle to deliver a fully qualified set of firmware for the appliance as a whole, corresponding to a software release. In order to maintain their Private Cloud Appliance in a qualified state, customers who upgrade to a particular software release are expected to also install all the qualified firmware upgrades delivered as part of the controller software.

The firmware versions that have been qualified by Oracle for a given release are listed in the release notes for that release. Refer to the Firmware Qualification chapter.

Note that the file names shown in the procedures below may not exactly match the file names in the .iso image on your system.

Caution:

Interim Firmware Patches

Oracle periodically releases firmware patches for many products, for example to limit security vulnerabilities. It may occur that an important firmware patch is released for a component of Private Cloud Appliance outside of the normal Controller Software release schedule. When this occurs, the patches go through the same testing as all other appliance firmware, but they are not added to the qualified firmware list or the installation .iso for the affected Controller Software release.

After thorough testing, important firmware patches that cannot be included in the Controller Software .iso image are made available to Private Cloud Appliance users through My Oracle Support.

Install the Current Firmware on the Management Nodes

To avoid compatibility issues with newer Oracle Private Cloud Appliance Controller Software and Oracle VM upgrades, you should always install the server ILOM firmware included in the ISO image of the current Oracle Private Cloud Appliance software release. When the ISO image is unpacked on the appliance internal storage, the firmware directory can be reached from the management nodes at this location: /nfs/shared_storage/mgmt_image/firmware/.

Installing the Current Firmware on the Management Nodes

  1. To prepare the firmware, log in to the stand-by management node as root and extract the management node firmware from the /nfs/shared_storage/mgmt_image/firmware/ directory.

    1. Confirm that the management node that you are logged in to is the stand-by management node.

      [root@ovcamn06r1 ~]# pca-check-master
      NODE: 192.168.4.4  MASTER: False
    2. Extract the management node firmware.

      [root@ovcamn06r1 ~]# cd /nfs/shared_storage/mgmt_image/firmware/compute/X8-2_FIRMWARE/
      [root@ovcamn06r1 X8-2_FIRMWARE]# unzip pversion_Generic.zip
  2. Log in as root to the stand-by management node’s ILOM.

    [root@ovcamn06r1 ~]# ssh ilom-ovcamn06r1
    Password: 
    Oracle(R) Integrated Lights Out Manager
    Version 5.1.0.20 r145377
    Copyright (c) 2022, Oracle and/or its affiliates. All rights reserved.
    Warning: HTTPS certificate is set to factory default.
    Hostname: ilom-ovcamn06r1
    ->
  3. Upgrade the firmware using the load -source command. For example:

    -> load -source sftp://root@192.168.4.4/nfs/shared_storage/mgmt_image/firmware/compute/X8-2_FIRMWARE/
    Oracle_Server_X8-2-version-FIRMWARE_PACK/Firmware/service-processor/ILOM-version-ORACLE_SERVER_X8-2-rom.pkg
  4. Respond yes to load the file, and to preserve the existing SP and BIOS configurations, then respond no to delay the BIOS upgrade, which will trigger the management node reboot.

    The reboot should take about 10 minutes.

  5. If the prechecks indicate you need to update the firmware on your RAID card, do so now.

    Navigate to the location of the extracted firmware and install the card firmware. For example:

    # cd /nfs/shared_storage/mgmt_image/firmware/compute/X8-2_FIRMWARE/ \
    Oracle_Server_X8-2-version-FIRMWARE_PACK/Firmware/SAS9361-16i
    
    # fwupdate update controller -x metadata.xml
  6. Once the stand-by management node comes back online, reboot the active management node and initiate failover to the newly-upgraded management node.

  7. Log in to the newly-upgraded management node and run the pca-check-master command to confirm it is now the active node. To confirm the new firmware levels from the operating system, see the sketch after this procedure.

  8. Now that one management node is upgraded and has assumed the active role, repeat this procedure to upgrade the firmware on the other management node.
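
As noted in step 7, after each management node has rebooted you can confirm the new firmware levels from the operating system using the same command referenced in Determine Firmware Versions.

  # Confirm ILOM, BIOS, and other component firmware versions after the upgrade
  fwupdate list all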

Upgrading the Operating Software on the Oracle ZFS Storage Appliance

The instructions in this section are specific for a component firmware upgrade of the Oracle Private Cloud Appliance.

Caution:

During this procedure, the Private Cloud Appliance services on the management nodes must be halted for a period of time. Plan this upgrade carefully, so that no compute node provisioning, Private Cloud Appliance configuration changes, or Oracle VM Manager operations are taking place at the same time.

Attention:

The statement below regarding the two-phased procedure does not apply to X8-2 or newer systems. The Oracle ZFS Storage Appliance ZS7-2 comes with a more recent firmware version that is not affected by the issue described.

If the ZFS Storage Appliance is running a firmware version older than 8.7.14, an intermediate upgrade to version 8.7.14 is required. Version 8.7.14 can then be upgraded to the intended newer version. For additional information, refer to "Oracle ZFS Storage Appliance Firmware Upgrade 8.7.20 Requires A Two-Phased Procedure" in Known Limitations and Workarounds in the Oracle Private Cloud Appliance Release Notes.

Note:

For detailed information about software upgrades, see Upgrading the Software in the Oracle ZFS Storage Appliance Customer Service Manual, Release OS8.8.x.

The Private Cloud Appliance internal ZFS Storage Appliance contains two clustered controllers in an active/passive configuration. You can disregard the upgrade information for standalone controllers.

Upgrading the ZFS Storage Appliance Operating Software

  1. Before initiating the upgrade on the storage controllers, follow the instructions in "Preparing for a Software Upgrade" in Upgrading the Software in the Oracle ZFS Storage Appliance Customer Service Manual, Release OS8.8.x.

  2. Log on to the active management node using SSH and an account with superuser privileges.

  3. If you are upgrading a ZFS Storage Appliance running firmware version 8.7.14 or newer, skip this step and proceed to the next step.

    If you are upgrading a ZFS Storage Appliance running a firmware version older than 8.7.14, unzip the firmware package p27357887_20131_Generic.zip included in the Private Cloud Appliance software image.

    [root@ovcamn05r1 ~]# mkdir /nfs/shared_storage/yum/ak
    [root@ovcamn05r1 ~]# cd /nfs/shared_storage/yum/ak
    [root@ovcamn05r1 ak]# unzip /nfs/shared_storage/mgmt_image/firmware/storage/AK_NAS/p27357887_20131_Generic.zip
    Archive: /nfs/shared_storage/mgmt_image/firmware/storage/AK_NAS/p27357887_20131_Generic.zip
     extracting: ak-nas-2013-06-05-7-14-1-1-1-nd.pkg.gz
      inflating: OS8714_Readme.html	
  4. Select the appropriate software update package:

    Caution:

    The procedure shows the upgrade to version 8.8.28 IDR. For an upgrade to version 8.7.14, substitute the file name in the commands as shown here.

    • Version 8.8.28 IDR - ak-nas-2013.06.05.8.28-1.2.20.4392.1x-nondebug.pkg

    • Version 8.7.14 - ak-nas-2013-06-05-7-14-1-1-1-nd.pkg.gz

    Download the software update package to both storage controllers. Their management IP addresses are 192.168.4.1 and 192.168.4.2.

    1. Log on to one of the storage controllers using SSH and an account with superuser privileges.

      [root@ovcamn05r1 ~]# ssh root@192.168.4.1
      Password:
      ovcasn01r1:>
    2. Enter the following series of commands to download the software update package from the shared storage directory to the controller.

      ovcasn01r1:> maintenance system updates
      ovcasn01r1:maintenance system updates> download
      ovcasn01r1:maintenance system updates download (uncommitted)> \ 
      set url=http://192.168.4.199/storage//FW_Updates/release_folders/2.4.4/zfs/8.8.28/ak-nas-2013.06.05.8.28-1.1.3x-nondebug.pkg
                  url = http://192.168.4.199/storage//FW_Updates/release_folders/2.4.4/zfs/8.8.28/ak-nas-2013.06.05.8.28-1.1.3x-nondebug.pkg
      ovcasn01r1:maintenance system updates download (uncommitted)> commit
      Transferred 1.86G of 1.86G (100%) ... done
      Unpacking ... done
    3. Wait for the package to fully download and unpack before proceeding.

    4. Repeat these steps for the second storage controller.

  5. Ensure that you are logged in to the standby controller.

    Check the storage cluster configuration. If the storage cluster state is stripped, as shown in the following example, then you are logged in to the standby controller. Go to the next step in this procedure to begin upgrading the standby controller. In the following example, controller ovcasn02r1 is the standby controller. Upgrade ovcasn02r1 first.

    ovcasn02r1:> configuration cluster show
    Properties:
                             state = AKCS_STRIPPED
                       description = Ready (waiting for failback)
                          peer_asn = 80e4823f-1573-caaa-8c44-fb3c8af3b921
                     peer_hostname = ovcasn01r1
                        peer_state = AKCS_OWNER
                  peer_description = Active (takeover completed)
    Children:
                            resources => Configure resources

    If the storage cluster state is clustered, as shown in the following example while logged in to ovcasn01r1, continue with this step to confirm which node is the standby node.

    ovcasn01r1:> configuration cluster show
    Properties:
                             state = AKCS_CLUSTERED
                       description = Active
                          peer_asn = b4213dea-8829-4a97-8039-86cbe919f373
                     peer_hostname = ovcasn02r1
                        peer_state = AKCS_CLUSTERED
                  peer_description = Active
    
    Children:
                            resources => Configure resources

    To confirm which node in a clustered configuration is the standby node, show cluster resources on each node.

    In an active/passive configuration, all singleton resources have a single owner. The standby node is the node with no cluster/singleton resources when in clustered state.

    The active controller owns all cluster/singleton resources and reports the size of the storage resource as shown in zfs/OVCA_POOL and zfs/pool_name in the following example on ovcasn01r1:

    ovcasn01r1:> configuration cluster resources show
    Resources:
    
    RESOURCE        OWNER           TYPE       LABEL         CHANGES  DETAILS
    net/i40e4       ovcasn01r1      private    i40e4         no       192.168.4.1
    net/ipmp1       ovcasn01r1      singleton  Management... no       192.168.4.100
    net/vnic3       ovcasn01r1      singleton  Storage_In... no       192.168.40.1
    net/vnic4       ovcasn01r1      singleton  Storage_In... no       192.168.235.200
    net/vnic5       ovcasn01r1      singleton  Storage_In... no       192.168.236.200
    zfs/OVCA_POOL   ovcasn01r1      singleton                no       110T
    zfs/PCC_internal_pool_HC_01 ovcasn01r1      singleton                no       110T

    The standby controller does not own any cluster/singleton resource. The standby controller reports only the name of the storage resource, not the size of the storage resource, as shown in the following example on ovcasn02r1. In the following example, controller ovcasn02r1 is the standby controller. Upgrade ovcasn02r1 first.

    ovcasn02r1:> configuration cluster resources show
    aksh: warning: terminal type "xterm-256color" unknown; using "vt100"
    Resources:
    
    RESOURCE        OWNER           TYPE       LABEL         CHANGES  DETAILS
    net/i40e5       ovcasn02r1      private    i40e5         no       192.168.4.2
    net/ipmp1       ovcasn01r1      singleton  Management... no       192.168.4.100
    net/vnic3       ovcasn01r1      singleton  Storage_In... no       192.168.40.1
    net/vnic4       ovcasn01r1      singleton  Storage_In... no       192.168.235.200
    net/vnic5       ovcasn01r1      singleton  Storage_In... no       192.168.236.200
    zfs/OVCA_POOL   ovcasn01r1      singleton                no       
    zfs/PCC_internal_pool_HC_01 ovcasn01r1      singleton                no
  6. Always upgrade the operating software first on the standby controller.

    1. Display the available operating software versions and select the version you downloaded.

      ovcasn02r1:> maintenance system updates
      ovcasn02r1:maintenance system updates> ls
      Updates:
      
      UPDATE                                   RELEASE DATE              STATUS
      ak-nas@2013.06.05.4.2,1-1.1              2015-6-16 17:03:41        previous
      ak-nas@2013.06.05.7.14,1-1.1             2018-1-6 17:16:42         previous
      ak-nas@2013.06.05.7.20,1-1.4             2018-7-17 02:27:51        current
      ak-nas@2013.06.05.8.20,1-1.3             2020-3-23 16:13:25        waiting
      * ak-nas@2013.06.05.8.28,1-1.3           2021-4-22 00:35:42        waiting
      [*] : Interim Diagnostics and Relief (IDR)
      
      Deferred updates:
      
      The appliance is currently configured as part of a cluster. The cluster peer may have shared resources for which deferred updates are available. After all
      updates are completed, check both cluster peers for any deferred updates.
      
      ovcasn02r1:maintenance system updates> select ak-nas@2013.06.05.8.28,1-1.3
    2. Run health checks on the system before you upgrade.

      ovcasn02r1:maintenance system updates ak-nas@2013.06.05.8.28,1-1.3> check
      You have requested to run checks associated with waiting upgrade media. This will execute the same
      set of checks as will be performed as part of any upgrade attempt to this media, and will
      highlight conditions that would prevent successful upgrade. No actual upgrade will be attempted,
      and the checks performed are of static system state and non-invasive. Do you wish to continue?
      
      Are you sure? (Y/N) y
      Healthcheck running ... -
      
      Healthcheck completed.  There are no issues at this time which would cause an upgrade to this media to be aborted.
    3. Launch the upgrade process with the selected software version.

      ovcasn02r1:maintenance system updates ak-nas@2013.06.05.8.28,1-1.3> upgrade
      The system is currently running an IDR. You might negate the fix provided with the current IDR.
      You may still continue. This procedure will consume several minutes and requires a system reboot
      upon successful update, but can be aborted at any time prior to reboot. A health check will
      validate system readiness before an update is attempted, and may also be executed independently by
      clicking the Check button.
      
      Are you sure? (Y/N) y
      Updating from ... ak/nas@2013.06.05.8.20,1-2.20.4392.1
      Loading media metadata ... done.
      Selecting alternate product ... SUNW,maguroZ7
      Installing Oracle ZFS Storage ZS7-2 2013.06.05.8.28,1-1.3
      pkg://sun.com/ak/SUNW,maguroZ7@2013.06.05.8.28,1-1.3:20201215T215649Z
      Creating system/ak-nas-2013.06.05.8.28_1-1.3 ... done.
      Creating system/ak-nas-2013.06.05.8.28_1-1.3/install ... done.
      Creating system/ak-nas-2013.06.05.8.28_1-1.3/boot ... done.
      Creating system/ak-nas-2013.06.05.8.28_1-1.3/root ... done.
      Creating system/ak-nas-2013.06.05.8.28_1-1.3/install/svc ... done.
      Creating system/ak-nas-2013.06.05.8.28_1-1.3/install/var ... done.
      Creating system/ak-nas-2013.06.05.8.28_1-1.3/install/home ... done.
      Creating system/ak-nas-2013.06.05.8.28_1-1.3/install/stash ... done.
      Creating system/ak-nas-2013.06.05.8.28_1-1.3/wiki ... done.
      Extracting os image ... done.
      Customizing Solaris ... done.
      Creating driver_aliases.addendum... done.
      Updating vfstab ... done.
      Generating usr/man windex ... done.
      Generating usr/gnu/share/man windex ... done.
      Generating usr/perl5/man windex ... done.
      Preserving ssh keys ... done.
      Configuring smf(5) ... done.
      Extracting appliance kit ... Creating private passwd and shadow files ... done.
      Creating private smbshadow file ... done.
      Creating product symlink ... done.
      Registering update job eb3a1611-8d7c-4347-b899-b8b8d3e8a863 ... done.
      Creating install profile ... done.
      Assigning appliance serial number ... 8ac3f3e4-4b92-4848-9d2b-ee33fc76c0f4
      Determining chassis serial number ... 1909XD2001
      Setting appliance product string ... SUNW,maguroZ7
      Setting appliance product class ... nas
      Setting install timestamp ... done.
      Setting virtualization status ... done.
      Saving SSL keys ... done.
      Updating phone-home key ... done.
      Saving currently running profile ... done.
      Installing firmware ... done.
      Installing device links ... done.
      Installing device files ... done.
      Updating device links ... done.
      Updating /etc ... done.
      Creating /.domainroot ... done.
      Installing boot amd64/unix ... done.
      Assembling etc/system.d ... done.
      Creating factory reset boot archive ... done.
      Generating GRUB2 configuration ...  done.
      Installing GRUB2 configuration ... done.
      Snapshotting zfs filesystems ...  done.
      Installation complete - unmounting datasets ...
      Creating boot archive ...
      done.
       done
      Update completed; rebooting.
      Connection to 192.168.4.2 closed.
    4. Update the Oracle ILOM version on the controller.

      ovcasn02r1:> maintenance system reboot
      Upgrading both the Service Processor and host firmware and rebooting.
      This process will take several minutes. During this process the service processor will reboot and
      access to the console via the Net MGMT port will be interrupted. After the service processor
      upgrade is complete the host will power down for five to ten minutes in order to apply the new
      host firmware. When the host firmware upgrade is complete the host will power on and boot
      automatically.
      
      DO NOT INTERRUPT THIS PROCESS.
      
      Are you sure? (Y/N) y
      Connection to 192.168.4.2 closed.
    5. At the end of the upgrade, when the controller has fully rebooted and rejoined the cluster, log back in and verify the appliance software version. The upgraded controller must still be in the cluster state "Ready (waiting for failback)", which you can confirm with the configuration cluster show command.

      ovcasn02r1:> configuration version show
      Appliance Name: ovcasn02r1
      Appliance Product: Oracle ZFS Storage ZS7-2
      Appliance Type: Sun ZFS Storage 7370
      Appliance Version: 2013.06.05.8.28,1-1.3
      First Installed: Sun Mar 03 2019 03:35:34 GMT+0000 (UTC)
      Last Updated: Fri Apr 23 2021 09:25:34 GMT+0000 (UTC)
      Last Booted: Fri Apr 23 2021 09:27:35 GMT+0000 (UTC)
      Appliance Serial Number: a3xxxxx-1x60-7x0x-bxx0-84defg34714
      Chassis Serial Number: 1XXXXXXX
      Software Part Number: Oracle 000-0000-00
      Vendor Product ID: ORACLE-ZFS-ZS7-2
      Browser Name: aksh 1.0
      Browser Details: aksh
      HTTP Server: Apache/2.4.46 (Unix)
      SSL Version: OpenSSL 1.0.2u-fips  20 Dec 2019
      Appliance Kit: ak/SUNW,maguroZ7@2013.06.05.8.28,1-1.3
      Release Name: OS8.8.28
      Operating System: SunOS 5.11 11.4.28.82.3 64-bit
      BIOS: American Megatrends Inc. (BIOS)42040200 (BIOS)08.17.2018
      Service Processor: 4.0.4.52 r133103
      ovcasn02r1:>
  7. From the Private Cloud Appliance active management node, stop the Private Cloud Appliance services.

    Caution:

    You must perform this step only if you are upgrading to Controller Software version 2.3.1, 2.3.2, or 2.3.3. It is not required when upgrading to Controller Software version 2.3.4 or later. Executing the storage controller operating software upgrade while the Private Cloud Appliance services are running results in errors and possible downtime.

    [root@ovcamn05r1 ~]# service ovca stop
  8. Upgrade the operating software on the second storage controller.

    1. Check the storage cluster configuration. Make sure you are logged on to the active controller.

      ovcasn01r1:> configuration cluster show
      Properties:
                               state = AKCS_OWNER
                         description = Active (takeover completed)
                            peer_asn = 34e4292a-71ae-6ce1-e26c-cc38c2af9719
                       peer_hostname = ovcasn02r1
                          peer_state = AKCS_STRIPPED
                    peer_description = Ready (waiting for failback)
    2. Display the available operating software versions and select the version you downloaded.

      ovcasn01r1:> maintenance system updates
      ovcasn01r1:maintenance system updates> show
      Updates:
      
      UPDATE                              RELEASE DATE         RELEASE NAME    STATUS
      ak-nas@2013.06.05.8.5,1-1.3         2019-3-30 07:27:20   OS8.8.5         previous
      ak-nas@2013.06.05.8.6,1-1.4         2019-6-21 20:56:45   OS8.8.6         current
      ak-nas@2013.06.05.8.28,1-1.3        2021-4-16 09:57:19   OS8.8.28        waiting
      
      ovcasn01r1:maintenance system updates> select ak-nas@2013.06.05.8.28,1-1.3
    3. Launch the upgrade process with the selected software version.

      ovcasn01r1:maintenance system updates> upgrade
      This procedure will consume several minutes and requires a system reboot upon
      successful update, but can be aborted with [Control-C] at any time prior to 
      reboot. A health check will validate system readiness before an update is
      attempted, and may also be executed independently using the check command.
      
      Are you sure? (Y/N) Y
    4. At the end of the upgrade, when the controller has fully rebooted and rejoined the cluster, log back in and check the cluster configuration.

      ovcasn01r1:> configuration cluster show
      Properties:
                               state = AKCS_STRIPPED
                         description = Ready (waiting for failback)
                            peer_asn = 34e4292a-71ae-6ce1-e26c-cc38c2af9719
                       peer_hostname = ovcasn02r1
                          peer_state = AKCS_OWNER
                    peer_description = Active (takeover completed)

      The controller upgraded last must now be in the state "Ready (waiting for failback)". The controller that was upgraded first took over the active role during the upgrade and reboot of the second controller, which originally held the active role.

  9. Now that both controllers have been upgraded, verify that all disks are online.

    ovcasn01r1:> maintenance hardware show
    [...]
                 NAME        STATE   MANUFACTURER   MODEL                    SERIAL                  RPM    TYPE
    chassis-000  1906NMQ803  ok      Oracle         Oracle Storage DE3-24C   1906NMQ803              7200   hdd
    disk-000     HDD 0       ok      WDC            W7214A520ORA014T         001851N3VKLT 9JG3VKLT   7200   data
    disk-001     HDD 1       ok      WDC            W7214A520ORA014T         001851N5K85T 9JG5K85T   7200   data
    disk-002     HDD 2       ok      WDC            W7214A520ORA014T         001851N5MPXT 9JG5MPXT   7200   data
    disk-003     HDD 3       ok      WDC            W7214A520ORA014T         001851N5L08T 9JG5L08T   7200   data
    disk-004     HDD 4       ok      WDC            W7214A520ORA014T         001851N42KNT 9JG42KNT   7200   data
    [...]
  10. Initiate a Private Cloud Appliance management node failover and wait until all services are restored on the other management node. This helps prevent connection issues between Oracle VM and the ZFS storage.

    1. Log on to the active management node using SSH and an account with superuser privileges.

    2. Reboot the active management node.

      [root@ovcamn05r1 ~]# pca-check-master
      NODE: 192.168.4.3  MASTER: True
      [root@ovcamn05r1 ~]# shutdown -r now
    3. Log on to the other management node and wait until the necessary services are running.

      Note:

      To monitor the failover, enter the command tail -f /var/log/messages at the prompt. The log messages indicate when this management node takes over the active role.

      Verify the status of the services:

      [root@ovcamn06r1 ~]# service ovca status
      Checking Oracle Fabric Manager: Running
      MySQL running (70254)                                      [  OK  ]
      Oracle VM Manager is running...
      Oracle VM Manager CLI is running...
      tinyproxy (pid 71315 71314 71313 71312 71310 71309 71308 71307 71306 71305 71301) is running...
      dhcpd (pid  71333) is running...
      snmptrapd (pid  71349) is running...
      log server (pid 6359) is running...
      remaster server (pid 6361) is running...
      http server (pid 71352) is running...
      taskmonitor server (pid 71356) is running...
      xmlrpc server (pid 71354) is running...
      nodestate server (pid 71358) is running...
      sync server (pid 71360) is running...
      monitor server (pid 71363) is running...
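
      After the services are running, you can also confirm that this management node now holds the active role. This is a minimal sketch using the pca-check-master command shown earlier in this step; the node IP address shown is illustrative:

      [root@ovcamn06r1 ~]# pca-check-master
      NODE: 192.168.4.4  MASTER: True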
  11. When the storage controller cluster has been upgraded, remove the shared storage directory you created to make the unzipped package available.

    # cd /nfs/shared_storage/yum/ak
    # ls ak-nas@2013.06.05.8.28,1-1.3x-nondebug.pkg OS8.8.28_Readme.html
    # rm ak-nas@2013.06.05.8.28,1-1.3x-nondebug.pkg OS8.8.28_Readme.html
    rm: remove regular file `ak-nas-2013.06.05.8.28-1.1.3x-nondebug.pkg'? yes
    rm: remove regular file `OS8.8.28_Readme.html'? yes
    # cd ..
    # rmdir ak

Upgrading the Cisco Switch Firmware

The instructions in this section are specific to a component firmware upgrade of the Oracle Private Cloud Appliance. The Cisco switches require two upgrade procedures: upgrading the Cisco NX-OS software and upgrading the electronic programmable logic device (EPLD). Perform both procedures on each of the switches.

Caution:

When upgrading to Controller Software release 2.4.4, it is critical that you perform the upgrade operations in the correct order. This means the Cisco switch firmware must be upgraded after the management node upgrade, but before the storage network upgrade.

Do not make any spine switch configuration changes until you have completed all the upgrade operations. If you make any spine switch configuration changes before all upgrade operations are complete, you could lose access to the storage network. See "Loading Incompatible Spine Switch Configuration Causes Storage Network Outage" in Known Limitations and Workarounds in the Oracle Private Cloud Appliance Release Notes.

Upgrading the Cisco NX-OS Software of All Cisco Leaf, Spine, and Management Switches

  1. Log on to the active management node using SSH and an account with superuser privileges.

  2. Verify that the new Cisco NX-OS software image is available on the appliance shared storage. During the Controller Software upgrade, the Oracle Private Cloud Appliance Upgrader copies the file to the following location:

    /nfs/shared_storage/mgmt_image/firmware/ethernet/Cisco/nxos.7.0.3.I7.9.bin
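
    As a quick check, you can verify that the image file is present on the shared storage before copying it to the switches; the file size should be approximately 937 MB, matching the size reported by the copy command output later in this procedure. This is a minimal sketch:

    [root@ovcamn05r1 ~]# ls -lh /nfs/shared_storage/mgmt_image/firmware/ethernet/Cisco/nxos.7.0.3.I7.9.bin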
  3. Upgrade the switches, one at a time, in the following order.

    1. Leaf Cisco Nexus 9336C-FX2 Switches: ovcasw15r1, ovcasw16r1

      First upgrade the switch that has the Primary or Operational Primary vPC role. Then upgrade the switch that has the Secondary or Operational Secondary vPC role. See Determining Which Switch is Primary or Operational Primary.

    2. Spine Cisco Nexus 9336C-FX2 Switches: ovcasw22r1, ovcasw23r1

      First upgrade the switch that has the Primary or Operational Primary vPC role. Then upgrade the switch that has the Secondary or Operational Secondary vPC role. See Determining Which Switch is Primary or Operational Primary.

    3. Management Cisco Nexus 9348GC-FXP Switch: ovcasw21r1

    After you determine the order in which the switches must be upgraded, follow the procedure Upgrading the Cisco NX-OS Software of Each Cisco Leaf, Spine, and Management Switch to upgrade each switch.

Determining Which Switch is Primary or Operational Primary

Use this procedure to determine which switch is Primary or Operational Primary and which switch is Secondary or Operational Secondary in your environment. Leaf switches are shown in this example. Use the same procedure for spine switches.

  1. Log on as admin to the first switch in the pair.

    [root@ovcamn05r1 ~]# ssh admin@ovcasw15r1
    User Access Verification
    Password:
    ovcasw15r1#
  2. Determine the vPC role for this switch. In the following example, the vPC role shows that this switch is primary. Upgrade this switch first.

    ovcasw15r1# show vpc role
    
    vPC Role status
    ----------------------------------------------------
    vPC role                        : primary
    Dual Active Detection Status    : 0
    vPC system-mac                  : 00:23:04:ee:be:01             
    vPC system-priority             : 32667
    vPC local system-mac            : bc:5a:56:aa:a6:1b             
    vPC local role-priority         : 1   
    vPC local config role-priority  : 1   
    vPC peer system-mac             : d4:e8:80:87:e0:ef             
    vPC peer role-priority          : 2   
    vPC peer config role-priority   : 2   
    ovcasw15r1# exit
    Connection to ovcasw15r1 closed
  3. Log on as admin to the second switch in the pair.

    [root@ovcamn05r1 ~]# ssh admin@ovcasw16r1
    User Access Verification
    Password:
    ovcasw16r1#
  4. Confirm the vPC role for the second switch. In the following example, the vPC role shows that this switch is secondary. Upgrade this switch second.

    ovcasw16r1# show vpc role
    
    vPC Role status
    ----------------------------------------------------
    vPC role                        : secondary
    Dual Active Detection Status    : 0
    vPC system-mac                  : 00:23:04:ee:be:02             
    vPC system-priority             : 32667
    vPC local system-mac            : d4:c9:3c:77:06:8f             
    vPC local role-priority         : 2   
    vPC local config role-priority  : 2   
    vPC peer system-mac             : d4:e8:80:87:eb:4b             
    vPC peer role-priority          : 1   
    vPC peer config role-priority   : 1  
    ovcasw16r1# exit
    Connection to ovcasw16r1 closed.

In the preceding example, upgrade switch ovcasw15r1 first, and then upgrade switch ovcasw16r1. The roles might be reversed in your environment, as in the following example, where the vPC role shows that switch ovcasw16r1 is operational primary. In that case, upgrade ovcasw16r1 first.

ovcasw16r1# show vpc role

vPC Role status
----------------------------------------------------
vPC role                        : secondary, operational primary
Dual Active Detection Status    : 0
vPC system-mac                  : 00:23:04:ee:be:02             
vPC system-priority             : 32667
vPC local system-mac            : d4:c9:3c:77:06:8f             
vPC local role-priority         : 2   
vPC local config role-priority  : 2   
vPC peer system-mac             : d4:e8:80:87:eb:4b             
vPC peer role-priority          : 1   
vPC peer config role-priority   : 1

In this example, the vPC role shows that the ovcasw15r1 switch is operational secondary. Upgrade this switch second.

ovcasw15r1# show vpc role

vPC Role status
----------------------------------------------------
vPC role                        : primary, operational secondary
Dual Active Detection Status    : 0
vPC system-mac                  : 00:23:04:ee:be:02             
vPC system-priority             : 32667
vPC local system-mac            : d4:e8:80:87:eb:4b             
vPC local role-priority         : 1   
vPC local config role-priority  : 1   
vPC peer system-mac             : d4:c9:3c:77:06:8f             
vPC peer role-priority          : 2   
vPC peer config role-priority   : 2

Use this same procedure for the spine switch pair: ovcasw22r1 and ovcasw23r1.

Upgrading the Cisco NX-OS Software of Each Cisco Leaf, Spine, and Management Switch

Note:

Each pair of leaf and spine switches operates in a vPC topology. Note the vPC roles and upgrade order before you begin the upgrade, because the roles change temporarily during the upgrade process. See Determining Which Switch is Primary or Operational Primary. Leaf and spine switches must be upgraded in the prescribed order.

Caution:

Once an upgrade to Controller Software release 2.4.3 or later is complete on the spine switches, do not attempt to reload a spine switch backup from a prior software release. Loading a spine switch backup from a release older than Controller Software release 2.4.3 could cause the management nodes to lose access to the storage network.

Note:

Upgrading the management switch causes network disruption between compute nodes, management nodes, the storage node, and leaf and spine switch management connections. This network disruption is due to the reboot of the switch as part of the upgrade process.

  1. Log on to the active management node using SSH and an account with superuser privileges.

  2. Log on as admin to the first switch to be upgraded.

    Switches must be upgraded in the following order: Primary or Operational Primary leaf switch, Secondary or Operational Secondary leaf switch, Primary or Operational Primary spine switch, Secondary or Operational Secondary spine switch, management switch. See Determining Which Switch is Primary or Operational Primary. The switch shown in this example might not be the first switch to be upgraded in your environment.

    [root@ovcamn05r1 ~]# ssh admin@ovcasw15r1
    User Access Verification
    Password:
    ovcasw15r1#

    Complete this entire procedure for one switch, then return to this step to upgrade the next switch.

  3. Copy the Cisco NX-OS software file to the bootflash location on the switch.

    Note that the copy command for the management switch ovcasw21r1 is different from the copy command for the leaf and spine switches. Specify the appropriate parameter, management or default, as shown in the examples.

    • Leaf and spine switches.

      Execute the following command for each leaf and spine switch, specifying the management parameter as shown:

      ovcasw15r1# copy scp://root@192.168.4.216//nfs/shared_storage/mgmt_image/firmware/ethernet/Cisco/nxos.7.0.3.I7.9.bin \
      bootflash:nxos.7.0.3.I7.9.bin vrf management
      root@192.168.4.216's password:
      nxos.7.0.3.I7.9.bin                         100%  937MB  16.2MB/s   00:58
      Copy complete, now saving to disk (please wait)...
      Copy complete.
    • Management switch.

      Execute the following command for the management switch, specifying the default parameter as shown:

      ovcasw21r1# copy scp://root@192.168.4.216//nfs/shared_storage/mgmt_image/firmware/ethernet/Cisco/nxos.7.0.3.I7.9.bin \
      bootflash:nxos.7.0.3.I7.9.bin vrf default
      root@192.168.4.216's password:
      nxos.7.0.3.I7.9.bin                         100%  937MB  16.2MB/s   00:58
      Copy complete, now saving to disk (please wait)...
      Copy complete.
  4. Verify the impact of the software upgrade.

    ovcasw15r1# show install all impact nxos bootflash:nxos.7.0.3.I7.9.bin
    Installer will perform impact only check. Please wait.
    
    Verifying image bootflash:/nxos.7.0.3.I7.9.bin for boot variable "nxos".
    [####################] 100% -- SUCCESS
     
    Verifying image type.
    [####################] 100% -- SUCCESS
     
    Preparing "nxos" version info using image bootflash:/nxos.7.0.3.I7.9.bin.
    [####################] 100% -- SUCCESS
     
    Preparing "bios" version info using image bootflash:/nxos.7.0.3.I7.9.bin.
    [####################] 100% -- SUCCESS
     
    Performing module support checks.
    
    Notifying services about system upgrade.
    
    Compatibility check is done:
    Module  bootable          Impact  Install-type  Reason
    ------  --------  --------------  ------------  ------
         1       yes      disruptive         reset  default upgrade is not hitless
     
    Images will be upgraded according to the following table:
    Module       Image                  Running-Version(pri:alt)           New-Version  Upg-Required
    ------  ----------  ----------------------------------------  --------------------  ------------
         1        nxos                               7.0(3)I7(8)           7.0(3)I7(9)           yes
         1        bios     v05.44(04/02/2021):v05.39(08/30/2019)    v05.38(06/12/2019)            no
  5. Save the current running configuration as the startup configuration.

    ovcasw15r1# copy running-config startup-config
    [########################################] 100%
    Copy complete, now saving to disk (please wait)...
    Copy complete.
  6. Install the Cisco NX-OS software that was copied to the bootflash location. When prompted about the disruptive upgrade, enter y to continue with the installation.

    ovcasw15r1# install all nxos bootflash:nxos.7.0.3.I7.9.bin
    Installer will perform compatibility check first. Please wait.
    Installer is forced disruptive
    
    Verifying image bootflash:/nxos.7.0.3.I7.9.bin for boot variable "nxos".
    [####################] 100% -- SUCCESS
     
    Verifying image type.
    [####################] 100% -- SUCCESS
     
    Preparing "nxos" version info using image bootflash:/nxos.7.0.3.I7.9.bin.
    [####################] 100% -- SUCCESS
     
    Preparing "bios" version info using image bootflash:/nxos.7.0.3.I7.9.bin.
    [####################] 100% -- SUCCESS
     
    Performing module support checks.
    
    Notifying services about system upgrade.
    
    Compatibility check is done:
    Module  bootable          Impact  Install-type  Reason
    ------  --------  --------------  ------------  ------
         1       yes      disruptive         reset  default upgrade is not hitless
     
    Images will be upgraded according to the following table:
    Module       Image                  Running-Version(pri:alt)           New-Version  Upg-Required
    ------  ----------  ----------------------------------------  --------------------  ------------
         1        nxos                               7.0(3)I7(8)           7.0(3)I7(9)           yes
         1        bios     v05.44(04/02/2021):v05.39(08/30/2019)    v05.38(06/12/2019)            no
     
    Switch will be reloaded for disruptive upgrade.
    Do you want to continue with the installation (y/n)?  [n] y
     
    Install is in progress, please wait.
    
    Performing runtime checks.
    [####################] 100% -- SUCCESS
     
    Setting boot variables.
    [####################] 100% -- SUCCESS
     
    Performing configuration copy.
    [####################] 100% -- SUCCESS
     
    Module 1: Refreshing compact flash and upgrading bios/loader/bootrom.
    Warning: please do not remove or power off the module at this time.
    [####################] 100% -- SUCCESS
     
    Finishing the upgrade, switch will reboot in 10 seconds.
  7. After the switch reboots, confirm that the installation succeeded.

    ovcasw15r1# show install all status
    This is the log of last installation.
    
    Verifying image bootflash:/nxos.7.0.3.I7.9.bin for boot variable "nxos".
     -- SUCCESS
     
    Verifying image type.
     -- SUCCESS
     
    Preparing "nxos" version info using image bootflash:/nxos.7.0.3.I7.9.bin.
     -- SUCCESS
     
    Preparing "bios" version info using image bootflash:/nxos.7.0.3.I7.9.bin.
     -- SUCCESS
     
    Performing module support checks.
     -- SUCCESS
    
    Notifying services about system upgrade.
     -- SUCCESS
    
    Compatibility check is done:
    Module  bootable          Impact  Install-type  Reason
    ------  --------  --------------  ------------  ------
         1       yes      disruptive         reset  default upgrade is not hitless
     
    Images will be upgraded according to the following table:
    Module       Image                  Running-Version(pri:alt)           New-Version  Upg-Required
    ------  ----------  ----------------------------------------  --------------------  ------------
         1        nxos                               7.0(3)I7(8)           7.0(3)I7(9)           yes
         1        bios     v05.44(04/02/2021):v05.39(08/30/2019)    v05.38(06/12/2019)            no
     
    Switch will be reloaded for disruptive upgrade.
     
    Install is in progress, please wait.
    
    Performing runtime checks.
     -- SUCCESS
     
    Setting boot variables.
     -- SUCCESS
     
    Performing configuration copy.
     -- SUCCESS
     
    Module 1: Refreshing compact flash and upgrading bios/loader/bootrom.
    Warning: please do not remove or power off the module at this time.
     -- SUCCESS
     
    Finishing the upgrade, switch will reboot in 10 seconds.
  8. Verify that the correct software version is active on the switch.

    ovcasw15r1# show version
    Cisco Nexus Operating System (NX-OS) Software
    TAC support: http://www.cisco.com/tac
    Copyright (C) 2002-2020, Cisco and/or its affiliates.
    All rights reserved.
    [...]
    
    Software
      BIOS: version 05.38
      NXOS: version 7.0(3)I7(9)
      BIOS compile time:  06/12/2019
      NXOS image file is: bootflash:///nxos.7.0.3.I7.9.bin
      NXOS compile time:  3/3/2020 20:00:00 [03/04/2020 04:49:49]
    [...]
    
    ovcasw15r1#
  9. Verify the vPC status.

    Note:

    This step does not apply to the appliance internal management network switch (Cisco Nexus 9348GC-FXP Switch). Proceed to the next step.

    Use the command shown below. The output values should match this example.

    ovcasw15r1# show vpc brief
    Legend:
                    (*) - local vPC is down, forwarding via vPC peer-link
     
    vPC domain id                     : 2
    Peer status                       : peer adjacency formed ok <---- verify this field
    vPC keep-alive status             : peer is alive <----- verify this field
    Configuration consistency status  : success <----- verify this field
    Per-vlan consistency status       : success <----- verify this field
    Type-2 consistency status         : success <----- verify this field
    vPC role                          : primary, operational secondary
    Number of vPCs configured         : 27
    Peer Gateway                      : Enabled
    Dual-active excluded VLANs        : -
    Graceful Consistency Check        : Enabled
    Auto-recovery status              : Disabled
    Delay-restore status              : Timer is off.(timeout = 30s)
    Delay-restore SVI status          : Timer is off.(timeout = 10s)
    Operational Layer3 Peer-router    : Enabled
  10. Log out of the switch. The firmware has been upgraded successfully.

    ovcasw15r1# exit
    Connection to ovcasw15r1 closed

    Repeat this procedure for the next switch to be upgraded.

Upgrading the Electronic Programmable Logic Device (EPLD) of All Cisco Leaf, Spine, and Management Switches

The instructions in this section are specific to a component firmware upgrade of the Oracle Private Cloud Appliance.

Note:

Upgrading the management switch causes network disruption between compute nodes, management nodes, the storage node, and leaf and spine switch management connections. This network disruption is due to the reboot of the switch as part of the upgrade process.

  1. Log on to the active management node using SSH and an account with superuser privileges.

  2. Verify that the new Cisco NX-OS EPLD firmware image is available on the appliance shared storage. During the Controller Software upgrade, the Oracle Private Cloud Appliance Upgrader copies the file to the following location:

    /nfs/shared_storage/mgmt_image/firmware/ethernet/Cisco/n9000-epld.7.0.3.I7.9.img
  3. Log on as admin to the first switch to be upgraded.

    Switches must be upgraded in the following order: Primary or Operational Primary leaf switch, Secondary or Operational Secondary leaf switch, Primary or Operational Primary spine switch, Secondary or Operational Secondary spine switch, management switch. See Determining Which Switch is Primary or Operational Primary. The switch shown in this example might not be the first switch to be upgraded in your environment.

    [root@ovcamn05r1 ~]# ssh admin@ovcasw15r1
    User Access Verification
    Password:
    ovcasw15r1#

    Complete this entire procedure for one switch, then return to this step to upgrade the next switch.

  4. Copy the firmware file to the bootflash location on the switch.

    Note that the copy command for the management switch ovcasw21r1 is different from the copy command for the leaf and spine switches. Specify the appropriate parameter, management or default, as shown in the examples.

    • Leaf and spine switches.

      Execute the following command for each leaf and spine switch, specifying the management parameter as shown:

      ovcasw15r1# copy scp://root@192.168.4.216//nfs/shared_storage/mgmt_image/firmware/ethernet/Cisco/n9000-epld.7.0.3.I7.9.img \
      bootflash:n9000-epld.7.0.3.I7.9.img vrf management
      root@192.168.4.216's password:
      n9000-epld.7.0.3.I7.9.img                   100%  142MB  15.8MB/s   00:09
      Copy complete, now saving to disk (please wait)...
      Copy complete.
    • Management switch.

      Execute the following command for the management switch, specifying the default parameter as shown:

      ovcasw21r1# copy scp://root@192.168.4.216//nfs/shared_storage/mgmt_image/firmware/ethernet/Cisco/n9000-epld.7.0.3.I7.9.img \
      bootflash:n9000-epld.7.0.3.I7.9.img vrf default
      root@192.168.4.216's password:
      n9000-epld.7.0.3.I7.9.img                   100%  142MB  15.8MB/s   00:09
      Copy complete, now saving to disk (please wait)...
      Copy complete.
  5. Verify the impact of the EPLD upgrade.

    ovcasw15r1# show install all impact epld bootflash:n9000-epld.7.0.3.I7.9.img
    
    Retrieving EPLD versions.... Please wait.
    Images will be upgraded according to the following table:
    Module  Type   EPLD              Running-Version   New-Version  Upg-Required
    ------  ----  -------------      ---------------   -----------  ------------
         1   SUP  MI FPGA                   0x04        0x05            Yes
         1   SUP  IO FPGA                   0x09        0x11            Yes
    
    Compatibility check:
    Module        Type         Upgradable        Impact   Reason
    -------------------------------------------------
         1            SUP           Yes       disruptive   Module Upgradable
  6. Save the current running configuration as the startup configuration.

    ovcasw15r1# copy running-config startup-config
    [########################################] 100%
    Copy complete, now saving to disk (please wait)...
    Copy complete.

    Note:

    You must upgrade both the primary and golden regions of the FPGA. However, only one upgrade is allowed per reload to avoid programming errors. The next steps describe how to upgrade both regions of the FPGA.

  7. Install the Cisco EPLD software that was copied to the bootflash location to the primary region of the FPGA. When prompted about the switch reload, enter y to continue with the installation.

    Caution:

    Do not interrupt, power cycle, or reload the switch during the upgrade.

    ovcasw15r1# install epld bootflash:n9000-epld.7.0.3.I7.9.img module 1
    Digital signature verification is successful
    Compatibility check:
    Module        Type         Upgradable       Impact   Reason
    ------  -----------------  ----------    ----------   ------
         1            SUP           Yes      disruptive   Module Upgradable
     
    Retrieving EPLD versions.... Please wait.
    Images will be upgraded according to the following table:
    Module  Type   EPLD              Running-Version   New-Version  Upg-Required
    ------  ----  -------------      ---------------   -----------  ------------
         1   SUP  MI FPGA                   0x04        0x05             No
         1   SUP  IO FPGA                   0x09        0x11            Yes
    The above modules require upgrade.
    The switch will be reloaded at the end of the upgrade
    Do you want to continue (y/n) ?  [n] y
     
    Proceeding to upgrade Modules.
     
     Starting Module 1 EPLD Upgrade
     
    Module 1 : IO FPGA [Programming] : 100.00% (     64 of      64 sectors)
    Module 1 EPLD upgrade is successful.
    Module        Type  Upgrade-Result
    ------  ------------------  --------------
         1         SUP         Success 
     
    EPLDs upgraded.
     
    Module 1 EPLD upgrade is successful. 
     
    Resetting Active SUP (Module 1) FPGAs. Please wait...
  8. The switch reloads automatically and boots from the backup FPGA. Confirm the primary module upgrade succeeded.

    ovcasw15r1# show version module 1 epld
    
    EPLD Device                     Version
    ---------------------------------------
    MI FPGA                          0x5
    IO FPGA                          0x9

    At this point, the MI FPGA version is upgraded, but the IO FPGA version is not upgraded.

  9. Install the Cisco EPLD software that was copied to the bootflash location to the golden region of the FPGA. When prompted about the switch reload, enter y to continue with the installation.

    Caution:

    Do not interrupt, power cycle, or reload the switch during the upgrade.

    ovcasw15r1# install epld bootflash:n9000-epld.7.0.3.I7.9.img module 1 golden
    Digital signature verification is successful
    Compatibility check:
    Module        Type         Upgradable       Impact   Reason
    ------  -----------------  ----------    ----------   ------
         1            SUP           Yes      disruptive   Module Upgradable
     
    Retrieving EPLD versions.... Please wait.
    The above modules require upgrade.
    The switch will be reloaded at the end of the upgrade
    Do you want to continue (y/n) ?  [n] y
     
    Proceeding to upgrade Modules.
     
     Starting Module 1 EPLD Upgrade
     
    Module 1 : MI FPGA [Programming] : 100.00% (     64 of      64 sectors)
    Module 1 : IO FPGA [Programming] : 100.00% (     64 of      64 sectors)
    Module 1 EPLD upgrade is successful.
    Module        Type  Upgrade-Result
    ------  ------------------  --------------
         1         SUP         Success
    
    EPLDs upgraded.
     
    Module 1 EPLD upgrade is successful. 
     
    Resetting Active SUP (Module 1) FPGAs. Please wait...
  10. The switch reloads automatically and boots from the backup FPGA. Confirm that both upgrades succeeded.

    ovcasw15r1# show version module 1 epld
    
    EPLD Device                     Version
    ---------------------------------------
    MI FPGA                          0x5
    IO FPGA                          0x11
  11. Verify the vPC status.

    Note:

    This step does not apply to the appliance internal management network switch (Cisco Nexus 9348GC-FXP Switch). Proceed to the next step.

    Use the command shown below. The output values should match this example.

    ovcasw15r1# show vpc brief
    Legend:
                    (*) - local vPC is down, forwarding via vPC peer-link
     
    vPC domain id                     : 2
    Peer status                       : peer adjacency formed ok <---- verify this field
    vPC keep-alive status             : peer is alive <----- verify this field
    Configuration consistency status  : success <----- verify this field
    Per-vlan consistency status       : success <----- verify this field
    Type-2 consistency status         : success <----- verify this field
    vPC role                          : primary, operational secondary
    Number of vPCs configured         : 27
    Peer Gateway                      : Enabled
    Dual-active excluded VLANs        : -
    Graceful Consistency Check        : Enabled
    Auto-recovery status              : Disabled
    Delay-restore status              : Timer is off.(timeout = 30s)
    Delay-restore SVI status          : Timer is off.(timeout = 10s)
    Operational Layer3 Peer-router    : Enabled
  12. Log out of the switch. The firmware has been upgraded successfully.

    ovcasw15r1# exit
    Connection to ovcasw15r1 closed

    Repeat this procedure for the next switch to be upgraded.

Install the Current Firmware on All Compute Nodes

Always install the server ILOM and component firmware included in the ISO image of the current Oracle Private Cloud Appliance software release. When the image file is unpacked on the appliance internal shared storage, the firmware is located on the management nodes at /nfs/shared_storage/mgmt_image/firmware/.

For firmware upgrade instructions, refer to the Administration Guide of the server series installed in your appliance rack at Servers Documentation. Supported firmware versions are listed in the Oracle Private Cloud Appliance Release Notes.
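
For example, you can list the unpacked firmware tree from the active management node to locate the files for the servers installed in your rack. This is a minimal sketch; the X8-2 subdirectory shown is taken from the task map below, and the exact layout depends on your server generation:

[root@ovcamn05r1 ~]# ls /nfs/shared_storage/mgmt_image/firmware/
[root@ovcamn05r1 ~]# ls /nfs/shared_storage/mgmt_image/firmware/compute/X8-2_FIRMWARE/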

The following task map outlines the steps you should consider as you update compute node firmware.

Table 3-3 Compute Node Firmware Upgrade Task Map

Component Resources

Migrate any VMs off the compute nodes

See Migrate or Move Virtual Machines in the Oracle VM Manager User's Guide.

Update CX5 card firmware

For general card update instructions, see support document How to upgrade Private Cloud Appliance (PCA) compute or management node Sun InfiniBand Dual Port CX3 firmware (Doc ID 2399803.1).

  • For Oracle Server X9-2 CX5 upgrades, use the Oracle_Server_X9-2-version-FIRMWARE_PACK/Firmware/Oracle_Dual_Port_CX5_100_Gb_OCP_Adapter file.

  • For Oracle Server X8-2 CX5 upgrades, use the firmware/compute/X8-2_FIRMWARE/Oracle_Dual_Port_100_Gb_RoCE_Adapter/fw-ConnectX5-rel-version_MCX556A-EDAS_C14_OCI_Ax_Bx-UEFI-14.21.16.signed.bin file. For more information, refer to the Read Me file at firmware/compute/X8-2_FIRMWARE/readme.txt.

    NOTE: Do not upgrade to the Mellanox firmware provided in p32629785_123_Generic.zip. That file contains an outdated CX5 firmware version.

Update RAID Controller firmware

For general card update instructions, see support document 2399803.1.

For Oracle Server X9-2, the RAID firmware is upgraded when the ILOM/BIOS is upgraded, so no further action is necessary.

Update server ILOM/BIOS firmware

For general ILOM/BIOS firmware update instructions, see Updating System Firmware Using Oracle ILOM.

Proceed to Upgrading the Compute Node Software

See Upgrading the Compute Node Software.

Upgrading the Compute Node Software

Starting with Oracle Private Cloud Appliance Controller Software release 2.4.4, the compute node upgrade procedure has been revamped and includes reprovisioning the servers to upgrade both the operating system and the Oracle VM virtualization platform using the pca_upgrader tool.

Caution:

Do not make any management changes during the entire compute node upgrade procedure.

Note:

Prior to this release, compute nodes were updated using the update compute-node command. This update method is not supported for version 2.4.4.

To successfully upgrade a compute node, the pca_upgrader tool performs several pre-checks to ensure the node is ready for upgrade. After the pre-checks pass, the upgrader proceeds to these tasks:

  1. Backs up the compute node configuration information, including network, repository, and storage information. Backups are stored in /nfs/shared_storage/pca_upgrader/cn_upgrade (see the example after this list).

  2. Reprovisions the compute node with the latest operating system and Oracle VM Server version.

  3. Restores the compute node configuration information.
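
If you need to inspect these backups after an upgrade, you can list the backup location from the active management node. This is a minimal sketch; the contents of the directory depend on which compute nodes have been upgraded:

[root@ovcamn05r1 ~]# ls /nfs/shared_storage/pca_upgrader/cn_upgrade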

Essentially, the upgrade process completely re-images the compute node to install the latest operating system and Oracle VM Server versions; this process can take around an hour. Any other applications installed on your compute nodes are removed during the upgrade procedure. Make sure to back up any applications you want to restore to the compute nodes after the upgrade.

Upgrading a Compute Node

Caution:

Execute this procedure on each compute node after the software upgrade on the management nodes has completed successfully. The management node upgrade creates the compute node image required for the following procedure.

Caution:

If compute nodes are running other packages that are not part of Oracle Private Cloud Appliance, these must be backed up before upgrade.

Caution:

Do not use the add network-to-tenant-group, add compute-node, and reprovision CLI commands during the compute node upgrade procedure.

  1. Make sure that the management nodes are upgraded to the new release.

    You can verify this by logging into the active management node and entering the following command in the Oracle Private Cloud Appliance CLI:

    # pca-admin
    Welcome to PCA! Release: 2.4.4
    PCA> show version
    
    ----------------------------------------
    Version              2.4.4
    Build                000
    Date                 2021-03-22
    ----------------------------------------
    
    Status: Success

    Leave the console and CLI connection open. You need to run the upgrade command later in this procedure.

  2. Migrate all virtual machines away from the compute node you want to upgrade.

    Information on migrating virtual machines is provided in Migrate or Move Virtual Machines in the Oracle VM Manager User's Guide.

  3. Remove any Repository Exports on the compute node being upgraded.

    For more information, see Delete Repository Export in the Oracle VM Manager User's Guide.

  4. Back up any other installed packages that are not part of Oracle Private Cloud Appliance.

    If you have installed any other software on the compute node, back it up now; you will need to restore it once the upgrade is finished. See Backup Local Customizations.

  5. From the active management node, run the Oracle Private Cloud Appliance Upgrader in verify-only mode on the compute node.

    [root@ovcamn05r1 ~]# pca_upgrader -V -t compute -c ovcacn07r1
    PCA Rack Type: PCA X8_BASE.
    Please refer to log file /nfs/shared_storage/pca_upgrader/log/pca_upgrader_2021_07_25-04.34.22.log for more details.
    Beginning PCA Compute Node Pre-Upgrade Checks...
    Check target Compute Node exists 1/10
    Check the provisioning lock is not set 2/10
    Check GRUB customizations 3/10
    Check LUN and LUN path counts for FC HBA cards 4/10
    Check OVCA release on Management Nodes 5/10
    Check target Compute Node has no VMs 6/10
    Check local repository of target Compute Node is empty 7/10
    Check no physical disks on target Compute Node have repositories 8/10
    Check Compute Node's Tenant matches Server Pool 9/10
    Check target Compute Node has no local networks VNICs 10/10
    PCA Compute Node Pre-Upgrade Checks completed after 0 minutes
    ---------------------------------------------------------------------------
    PCA Compute Node Pre-Upgrade Checks Passed
    ---------------------------------------------------------------------------
    Check target Compute Node exists Passed
    Check the provisioning lock is not set Passed
    Check GRUB customizations Passed
    Check LUN and LUN path counts for FC HBA cards Passed
    Check OVCA release on Management Nodes Passed
    Check target Compute Node has no VMs Passed
    Check local repository of target Compute Node is empty Passed
    Check no physical disks on target Compute Node have repositories Passed
    Check Compute Node's Tenant matches Server Pool Passed
    Check target Compute Node has no local networks VNICs Passed
    ---------------------------------------------------------------------------
    Overall Status Passed
    ---------------------------------------------------------------------------
    PCA Compute Node Pre-Upgrade Checks Passed
      Please refer to log file /nfs/shared_storage/pca_upgrader/log/pca_upgrader_date-04.34.22.log for more details.

    The upgrader performs several pre-checks. If any pre-check fails, refer to the log file for details on any failures and fix those failures before running the upgrade. Find the compute node log files in this directory on the shared storage: /nfs/shared_storage/pca_upgrader/log.

    If your pre-checks fail, ensure the compute node local repository is empty. For details, see "Pre-Upgrade Check of Local Repository to Ensure Target Compute Node is Empty" in Known Limitations and Workarounds in the Oracle Private Cloud Appliance Release Notes.

    If you encounter FC HBA LUN errors, run the service multipathd restart command to resolve the issue, as shown in the sketch below. If that does not resolve the issue, see the Oracle Private Cloud Appliance Release Notes for more information about LUN errors.
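
    The following is a minimal sketch of that command; it assumes the service is restarted on the compute node being upgraded, which is where the FC HBA paths are checked:

    [root@ovcacn07r1 ~]# service multipathd restart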

  6. From the active management node, run the upgrade for the designated compute node.

    [root@ovcamn05r1 ~]# pca_upgrader -U -t compute -c ovcacn07r1
    PCA Rack Type: PCA X8_BASE.
    Please refer to log file /nfs/shared_storage/pca_upgrader/log/pca_upgrader_2021_07_25-04.36.30.log for more details.
    Beginning PCA Compute Node Pre-Upgrade Checks...
    Check target Compute Node exists 1/10
    Check the provisioning lock is not set 2/10
    Check LUN and LUN path counts for FC HBA cards 3/10
    Check GRUB customizations 4/10
    Check OVCA release on Management Nodes 5/10
    Check Compute Node's Tenant matches Server Pool 6/10
    Check target Compute Node has no local networks VNICs 7/10
    Check no physical disks on target Compute Node have repositories 8/10
    Check local repository of target Compute Node is empty 9/10
    Check target Compute Node has no VMs 10/10
    PCA Compute Node Pre-Upgrade Checks completed after 0 minutes
    Beginning PCA Compute Node Upgrade
    Place Compute Upgrade Locks 1/6
    Backup Compute Node customizations 2/6
    Unpresent repositories from Compute Node 3/6
    Upgrade the Compute Node 4/6
    Restore Compute Node customizations 5/6
    Remove Compute Upgrade Locks 6/6
    PCA Compute Node upgrade completed after 42 minutes
    Beginning PCA Compute Node Post-Upgrade Checks...
    PCA Compute Node Post-Upgrade Checks completed after 0 minutes
    ---------------------------------------------------------------------------
    PCA Compute Node Pre-Upgrade Checks Passed
    ---------------------------------------------------------------------------
    Check target Compute Node exists Passed
    Check the provisioning lock is not set Passed
    Check LUN and LUN path counts for FC HBA cards Passed
    Check GRUB customizations Passed
    Check OVCA release on Management Nodes Passed
    Check Compute Node's Tenant matches Server Pool Passed
    Check target Compute Node has no local networks VNICs Passed
    Check no physical disks on target Compute Node have repositories Passed
    Check local repository of target Compute Node is empty Passed
    Check target Compute Node has no VMs Passed
    ---------------------------------------------------------------------------
    PCA Compute Node Upgrade Passed
    ---------------------------------------------------------------------------
    Place Compute Upgrade Locks Passed
    Backup Compute Node customizations Passed
    Unpresent repositories from Compute Node Passed
    Upgrade the Compute Node Passed
    Restore Compute Node customizations Passed
    Remove Compute Upgrade Locks Passed
    ---------------------------------------------------------------------------
    PCA Compute Node Post-Upgrade Checks Passed
    ---------------------------------------------------------------------------
    ---------------------------------------------------------------------------
    Overall Status Passed
    ---------------------------------------------------------------------------
    PCA Compute Node Pre-Upgrade Checks Passed
    PCA Compute Node Upgrade Passed
    PCA Compute Node Post-Upgrade Checks Passed
      Please refer to log file /nfs/shared_storage/pca_upgrader/log/pca_upgrader_date-04.36.30.log for more details.
  7. Once the pca_upgrader tool completes the upgrade, log in to the Oracle VM Manager and confirm the compute node has been returned to the correct server pool.

  8. Restore any other software that was on the compute node.

  9. Repeat this procedure for each compute node in your Oracle Private Cloud Appliance.

After successful completion of the upgrade, the Oracle Private Cloud Appliance is ready to resume all normal operations.