Active Version 3.0.1-b741265
If your Private Cloud Appliance is running the appliance software version with build number 3.0.1-b741265, follow the instructions in this section to upgrade to appliance software version 3.0.2-b799577.
Download and Unpack the ISO Image
Software versions and upgrades for Oracle Private Cloud Appliance are made available for download through My Oracle Support. The ISO file contains all the files and packages required to upgrade the appliance hardware and software components to a given release. All the items within the ISO file have been tested to work with each other and qualified for installation on your rack system. Do not install or upgrade individual packages on the appliance components.
Preparing the ISO Image
To use an ISO file to upgrade your appliance, download the file to a location where a web server can make it available to the Private Cloud Appliance management nodes. If you have set up a bastion host connected to the internal administration network of the appliance, it is convenient to store the ISO file on that machine and run a web server to make it accessible over HTTP.
When you run the first upgrade command on the appliance, you provide the path to the ISO file as a parameter. At that point, the ISO file is copied to the shared storage mounted on all three management nodes, and unpacked into a well-defined directory structure. You do not need to perform these steps manually in advance.
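For example, on a bastion host running Oracle Linux, you could stage the ISO and generate its SHA-512 checksum as in the following sketch. The directory, file names, and use of Python's built-in web server are illustrative assumptions, not requirements; any web server reachable from the management nodes works.
# Stage the ISO in a directory served over HTTP (paths and names are examples)
mkdir -p /var/www/html/pca
cp pca-<version>-<build>.iso /var/www/html/pca/
# Generate the SHA-512 checksum if the .sha512sum file was not downloaded with the ISO
cd /var/www/html/pca
sha512sum pca-<version>-<build>.iso > pca-<version>-<build>.iso.sha512sum
# For a quick test, serve the directory over HTTP (assumes python3 is installed)
python3 -m http.server 80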
Ensuring the System Is In Ready State
Upgrades can be performed with limited impact on the system. No downtime is required, and user workloads continue to run while the underlying infrastructure is upgraded in stages. Nevertheless, it is good practice to create backups of the system and of the resources in your environment before you begin.
Every upgrade operation is preceded by a set of pre-checks; the upgrade begins only if all pre-checks pass. Concurrent upgrade operations are not supported: an upgrade job must complete before a new one can be started.
Upgrade the Compute Nodes
The compute node upgrade ensures that the latest Oracle Linux kernel and user space packages are installed, as well as the ovm-agent package with appliance-specific optimizations.
Compute nodes must be locked and upgraded one at a time; concurrent upgrades are not
supported. After successful upgrade, when a compute node has rebooted, the administrator must
manually remove the locks to allow the node to return to normal operation.
Obtaining a Host IP Address
From the Service CLI, compute nodes are upgraded one at a time, using each one's internal IP address as a command parameter. However, the locking commands use the compute node ID instead. To run all commands for a compute node upgrade you need both identifiers.
To obtain the host IP address and ID, as well as other information relevant to the upgrade procedure, use the Service CLI command provided in the following example. You can run the command as often as needed to check and confirm status as you proceed through the upgrade of all compute nodes.
PCA-ADMIN> list computeNode fields hostname,ipAddress,ilomIp,state,firmwareVersion,provisioningLocked,maintenanceLocked orderby hostname ASCENDING
Data:
  id                                     Hostname   Ip Address    ILOM Ip Address   State   Firmware Version           Provisioning Locked   Maintenance Locked
  --                                     --------   ----------    ---------------   -----   ----------------           -------------------   ------------------
  cf488903-fef8-4a51-8a41-c6990e4755c5   pcacn001   100.96.2.64   100.96.0.64       On      PCA Hypervisor:3.0.1-615   false                 false
  42a7594d-1173-4dbd-4755-07810cc2d527   pcacn002   100.96.2.65   100.96.0.65       On      PCA Hypervisor:3.0.1-615   false                 false
  bc0f37d5-ba77-423e-bc11-017704b47e59   pcacn003   100.96.2.66   100.96.0.66       On      PCA Hypervisor:3.0.1-615   false                 false
  2e5ac527-01f5-4230-ae41-0522fcb57c9a   pcacn004   100.96.2.67   100.96.0.67       On      PCA Hypervisor:3.0.1-615   false                 false
  5a6b61cf-7e99-4df2-87e4-b37c5fb0bfb8   pcacn005   100.96.2.68   100.96.0.68       On      PCA Hypervisor:3.0.1-615   false                 false
  885f2aa4-f017-41e8-b2bc-e588cc0c6162   pcacn006   100.96.2.69   100.96.0.69       On      PCA Hypervisor:3.0.1-615   false                 false
Using the Service CLI
-
Gather the information that you need to run the compute node upgrade command:
-
the location of the ISO image to upgrade from
-
the checksum used to verify that the ISO image is valid
-
From the output you obtained with the compute node list command earlier, get the ID and the IP address of the compute node you intend to upgrade.
-
Set the provisioning and maintenance locks for the compute node you are about to upgrade.
Note:
For more information about migrating instances and locking a compute node, refer to the section "Performing Compute Node Operations" in the Hardware Administration chapter of the Oracle Private Cloud Appliance Administrator Guide.
-
Disable provisioning for the compute node.
PCA-ADMIN> provisioningLock id=cf488903-fef8-4a51-8a41-c6990e4755c5
Status: Success
JobId: 6ee78c8a-e227-4d31-a770-9b9c96085f3f
-
Evacuate the compute node. Wait for the migration job to finish before proceeding to the next step.
Note:
If physical resources are limited, compute instances are migrated to other fault domains during compute node evacuation.
PCA-ADMIN> migrateVm id=cf488903-fef8-4a51-8a41-c6990e4755c5 force=true
Status: Running
JobId: 6f1e94bc-7d5b-4002-ada9-7d4b504a2599
PCA-ADMIN> show Job id=6f1e94bc-7d5b-4002-ada9-7d4b504a2599
  Run State = Succeeded
-
Lock the compute node for maintenance.
PCA-ADMIN> maintenanceLock id=cf488903-fef8-4a51-8a41-c6990e4755c5
Status: Success
JobId: e46f6603-2af2-4df4-a0db-b15156491f88
-
Optionally, rerun the compute node list command to confirm lock status. For example:
PCA-ADMIN> list computeNode fields hostname,ipAddress,ilomIp,state,firmwareVersion,provisioningLocked,maintenanceLocked orderby hostname ASCENDING
Data:
  id                                     Hostname   Ip Address    ILOM Ip Address   State   Firmware Version           Provisioning Locked   Maintenance Locked
  --                                     --------   ----------    ---------------   -----   ----------------           -------------------   ------------------
  cf488903-fef8-4a51-8a41-c6990e4755c5   pcacn001   100.96.2.64   100.96.0.64       On      PCA Hypervisor:3.0.1-615   true                  false
  42a7594d-1173-4dbd-4755-07810cc2d527   pcacn002   100.96.2.65   100.96.0.65       On      PCA Hypervisor:3.0.1-615   false                 false
  bc0f37d5-ba77-423e-bc11-017704b47e59   pcacn003   100.96.2.66   100.96.0.66       On      PCA Hypervisor:3.0.1-615   false                 false
  2e5ac527-01f5-4230-ae41-0522fcb57c9a   pcacn004   100.96.2.67   100.96.0.67       On      PCA Hypervisor:3.0.1-615   false                 false
  5a6b61cf-7e99-4df2-87e4-b37c5fb0bfb8   pcacn005   100.96.2.68   100.96.0.68       On      PCA Hypervisor:3.0.1-615   false                 false
  885f2aa4-f017-41e8-b2bc-e588cc0c6162   pcacn006   100.96.2.69   100.96.0.69       On      PCA Hypervisor:3.0.1-615   false                 false
-
Enter the compute node upgrade command.
Syntax (entered on a single line):
upgradeCN hostIp=<compute-node-ip> imageLocation=<path-to-iso> isoChecksum=<iso-file-checksum>
Example:
PCA-ADMIN> upgradeCN hostIp=100.96.2.64 \
imageLocation=http://host.example.com/pca-<version>-<build>.iso \
isoChecksum=240420cfb9478f6fd026f0a5fa0e998e086275fc45e207fb5631e2e99732e192e8e9d1b4c7f29026f0a5f58dadc4d792d0cfb0279962838e95a0f0a5fa31dca7
Status: Success
Data:
  Service request has been submitted.
  Upgrade Job Id = 1630938939109-compute-7545
  Upgrade Request Id = UWS-61736806-7e5a-4648-9259-07c54c39cacb
-
Use the request ID and the job ID to check the status of the upgrade process.
PCA-ADMIN> getUpgradeJobs
  id                           upgradeRequestId                           commandName   result
  --                           ----------------                           -----------   ------
  1630938939109-compute-7545   UWS-61736806-7e5a-4648-9259-07c54c39cacb   compute       Passed
PCA-ADMIN> getUpgradeJob upgradeJobId=1630938939109-compute-7545
Data:
  Upgrade Request Id = UWS-61736806-7e5a-4648-9259-07c54c39cacb
  Name = compute
  Pid = 7545
  Host = pcamn02
  Log File = /nfs/shared_storage/pca_upgrader/log/pca-upgrader_compute_2021_09_26-06.35.39.log
[...]
-
When the compute node upgrade has completed successfully and the node has rebooted, release the locks.
For more information, refer to the section "Performing Compute Node Operations" in the Hardware Administration chapter of the Oracle Private Cloud Appliance Administrator Guide.
-
Release the maintenance lock.
PCA-ADMIN> maintenanceUnlock id=cf488903-fef8-4a51-8a41-c6990e4755c5
Status: Success
JobId: 625af20e-4b49-4201-879f-41d4405314c7
-
Release the provisioning lock.
PCA-ADMIN> provisioningUnlock id=cf488903-fef8-4a51-8a41-c6990e4755c5
Status: Success
JobId: 523892e8-c2d4-403c-9620-2f3e94015b46
-
-
Proceed to the next compute node and repeat this procedure.
The output from the compute node list command indicates the current status. For example:
PCA-ADMIN> list computeNode fields hostname,ipAddress,ilomIp,state,firmwareVersion,provisioningLocked,maintenanceLocked orderby hostname ASCENDING
Data:
  id                                     Hostname   Ip Address    ILOM Ip Address   State   Firmware Version           Provisioning Locked   Maintenance Locked
  --                                     --------   ----------    ---------------   -----   ----------------           -------------------   ------------------
  cf488903-fef8-4a51-8a41-c6990e4755c5   pcacn001   100.96.2.64   100.96.0.64       On      PCA Hypervisor:3.0.2-640   false                 false
  42a7594d-1173-4dbd-4755-07810cc2d527   pcacn002   100.96.2.65   100.96.0.65       On      PCA Hypervisor:3.0.2-640   false                 false
  bc0f37d5-ba77-423e-bc11-017704b47e59   pcacn003   100.96.2.66   100.96.0.66       On      PCA Hypervisor:3.0.2-640   false                 false
  2e5ac527-01f5-4230-ae41-0522fcb57c9a   pcacn004   100.96.2.67   100.96.0.67       On      PCA Hypervisor:3.0.2-640   false                 false
  5a6b61cf-7e99-4df2-87e4-b37c5fb0bfb8   pcacn005   100.96.2.68   100.96.0.68       On      PCA Hypervisor:3.0.1-615   false                 false
  885f2aa4-f017-41e8-b2bc-e588cc0c6162   pcacn006   100.96.2.69   100.96.0.69       On      PCA Hypervisor:3.0.1-615   false                 false
Using the Service Web UI
-
Set the provisioning and maintenance locks for the compute node you are about to upgrade. Ensure that no active compute instances are present on the node.
Note:
For more information about migrating instances and locking a compute node, refer to the section "Performing Compute Node Operations" in the Hardware Administration chapter of the Oracle Private Cloud Appliance Administrator Guide.
-
In the navigation menu, click Rack Units. In the Rack Units table, click the name of the compute node you want to upgrade to display its detail page.
-
In the top-right corner of the compute node detail page, click Controls and select the Provisioning Lock command.
-
When the provisioning lock has been set, click Controls again and select the Migrate All Vms command. The Compute service evacuates the compute node, meaning it migrates the running instances to other compute nodes.
Note:
If physical resources are limited, compute instances are migrated to other fault domains during compute node evacuation.
-
When compute node evacuation is complete, click Controls again and select the Maintenance Lock command. This command might fail if instance migrations are in progress. Wait a few minutes and retry.
-
In the navigation menu, click Upgrade & Patching.
-
In the top-right corner of the Upgrade Jobs page, click Create Upgrade or Patch.
The Create Request window appears. Choose Upgrade as the Request Type.
-
Select the appropriate upgrade request type: Upgrade CN.
-
Fill out the upgrade request parameters:
-
Host IP: Enter the compute node's assigned IP address in the internal administration network. This is an IP address in the internal 100.96.2.0/23 range.
-
Image Location: Enter the path to the location where the ISO image is stored.
-
ISO Checksum: Enter the checksum to verify the ISO image. It is stored alongside the ISO file.
-
Log Level: Optionally, select a specific log level for the upgrade log file. The default log level is "Information". For maximum detail, select "Debug".
-
Advanced Options JSON: Optionally, add a JSON string to provide additional command parameters.
-
Click Create Request.
The new upgrade request appears in the Upgrade Jobs table.
-
When the compute node has been upgraded successfully, release the provisioning and maintenance locks.
For more information, refer to the section "Performing Compute Node Operations" in the Hardware Administration chapter of the Oracle Private Cloud Appliance Administrator Guide.
-
Open the compute node detail page.
-
In the top-right corner of the compute node detail page, click Controls and select the Maintenance Unlock command.
-
When the maintenance lock has been released, click Controls again and select the Provisioning Unlock command.
Upgrade the Management Node Cluster
Caution:
Ensure that all compute nodes have been upgraded.
A full management node cluster upgrade is a convenient way to upgrade all the required components on all three management nodes using just a single command. As part of this process, the following components are upgraded, in this specific order:
-
Host operating system
-
Clustered MySQL database
-
Secret service (including Etcd and Vault)
-
Kubernetes container orchestration packages
-
Containerized microservices
Using the Service CLI
-
Gather the information that you need to run the command:
-
the location of the ISO image to upgrade from
-
the checksum used to verify that the ISO image is valid
-
Enter the command to start the full management node cluster upgrade.
Syntax (entered on a single line):
upgradeFullMN imageLocation=<path-to-iso> isoChecksum=<iso-file-checksum>
Example:
PCA-ADMIN> upgradeFullMN imageLocation=http://host.example.com/pca-<version>-<build>.iso \
isoChecksum=240420cfb9478f6fd026f0a5fa0e998e086275fc45e207fb5631e2e99732e192e8e9d1b4c7f29026f0a5f58dadc4d792d0cfb0279962838e95a0f0a5fa31dca7
Status: Success
Data:
  Service request has been submitted.
  Upgrade Request Id = UWS-39329657-1051-4267-8c5a-9314f8e63a64
-
Use the request ID to check the status of the upgrade process.
As the full management node upgrade is a multi-component upgrade process, there are multiple upgrade jobs associated with the upgrade request. You can filter for those jobs based on the request ID. Using the job ID, you can drill down into the details of each upgrade job.
PCA-ADMIN> getUpgradeJobs requestId=UWS-39329657-1051-4267-8c5a-9314f8e63a64
Data:
  id                               upgradeRequestId                           commandName   result
  --                               ----------------                           -----------   ------
  1634578760906-platform-66082     UWS-39329657-1051-4267-8c5a-9314f8e63a64   platform      Passed
  1634578263434-kubernetes-63574   UWS-39329657-1051-4267-8c5a-9314f8e63a64   kubernetes    Passed
  1634578012353-vault-51696        UWS-39329657-1051-4267-8c5a-9314f8e63a64   vault         Passed
  1634577380954-etcd-46337         UWS-39329657-1051-4267-8c5a-9314f8e63a64   etcd          Passed
  1634577341291-mysql-40127        UWS-39329657-1051-4267-8c5a-9314f8e63a64   mysql         Passed
  1634576985926-host-36556         UWS-39329657-1051-4267-8c5a-9314f8e63a64   host          Passed
  1634576652071-host-27088         UWS-39329657-1051-4267-8c5a-9314f8e63a64   host          Passed
  1634576191050-host-24909         UWS-39329657-1051-4267-8c5a-9314f8e63a64   host          Passed
PCA-ADMIN> getUpgradeJob upgradeJobId=1634576652071-host-27088
Data:
  Upgrade Request Id = UWS-39329657-1051-4267-8c5a-9314f8e63a64
  Composition Id = 1
  Name = host
  Pid = 27088
  Host = pcamn02
  Log File = /nfs/shared_storage/pca_upgrader/log/pca-upgrader_host_os_2023_05_24-07.04.12.log
  Tasks 1 - Name = Validate Image Location
  Tasks 1 - Description = Verify that the image exists at the specified location and is correctly named
[...]
The output of the getUpgradeJob command provides detailed information about the tasks performed during the upgrade procedure: descriptions, time stamps, duration, and success or failure. Whenever an upgrade operation fails, the command output indicates which task failed. For in-depth troubleshooting, search the log file at the location provided near the start of the command output.
Caution:
After upgrade, the management nodes must all be rebooted for the changes to take effect. This cannot be done from the Service CLI.
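The following sketch summarizes the reboot sequence detailed in the steps below, assuming root SSH access to the management nodes; the host names match this section's examples.
# Reboot the two management nodes that do not hold the cluster virtual IP
ssh root@pcamn01 reboot
ssh root@pcamn03 reboot
# After they rejoin the cluster, move the virtual IP to one of them, then reboot the last node
ssh root@pcamn01 pcs resource move mgmt-rg pcamn01
ssh root@pcamn02 reboot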
-
Reboot all three management nodes either from the Oracle Linux command line or through the ILOM.
-
Verify which management node owns the cluster virtual IP. Run this command from the command line of one of the management nodes:
# pcs status
[...]
 Resource Group: mgmt-rg
     vip-mgmt-int   (ocf::heartbeat:IPaddr2):   Started pcamn02
     vip-mgmt-host  (ocf::heartbeat:IPaddr2):   Started pcamn02
     vip-mgmt-ilom  (ocf::heartbeat:IPaddr2):   Started pcamn02
     vip-mgmt-lb    (ocf::heartbeat:IPaddr2):   Started pcamn02
     vip-mgmt-ext   (ocf::heartbeat:IPaddr2):   Started pcamn02
[...]
-
Reboot the other two management nodes in the cluster. (In this example: pcamn01 and pcamn03.)
-
Move the virtual IP to one of the rebooted management nodes. (In this example: pcamn01.)
# pcs resource move mgmt-rg pcamn01
# pcs status
Cluster name: mncluster
Stack: corosync
[...]
 scsi_fencing   (stonith:fence_scsi):   Stopped (disabled)
 Resource Group: mgmt-rg
     vip-mgmt-int   (ocf::heartbeat:IPaddr2):   Started pcamn01
     vip-mgmt-host  (ocf::heartbeat:IPaddr2):   Started pcamn01
[...]
-
Reboot the last of the three upgraded management nodes. (In this example: pcamn02.)
Using the Service Web UI
-
In the navigation menu, click Upgrade & Patching.
-
In the top-right corner of the Upgrade Jobs page, click Create Upgrade or Patch.
The Create Request window appears. Choose Upgrade as the Request Type.
-
Select the appropriate upgrade request type.
For a full management node upgrade, select Upgrade MN.
-
Fill out the upgrade request parameters, if necessary:
-
Advanced Options JSON: Optionally, add a JSON string to provide additional command parameters.
-
Image Location: Enter the path to the location where the ISO image is stored.
-
ISO Checksum: Enter the checksum that allows the system to verify that the ISO image is valid for this upgrade. The checksum is provided alongside the ISO image; its file name is the ISO image name with .sha512sum appended.
-
Click Create Request.
The new upgrade request appears in the Upgrade Jobs table.
Caution:
After upgrade, the management nodes must all be rebooted for the changes to take effect. This cannot be done from the Service Web UI.
-
Reboot all three management nodes either from the Oracle Linux command line or through the ILOM. Refer to the final step of the management cluster upgrade instructions using the Service CLI.
Upgrade Individual Components
A full management node cluster upgrade is a convenient way to upgrade all the required components on all three management nodes using just a single command. However, the process might be interrupted or an error might occur along the way. Instead of repeating the full management cluster upgrade, it might be more efficient to perform individual component upgrades in this situation.
Check the upgrade command output and the logs for any warnings or error messages. Determine at which stage the management node cluster upgrade failed, and resolve the issue that caused the failure. Then proceed with individual component upgrades, starting with the component that was not upgraded successfully. The upgrade operations in the sections that follow are listed in the correct order, exactly as they are performed during a full management node cluster upgrade.
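For example, to locate a failure you could scan the upgrader logs on the shared storage; this is a sketch, with the log file name as a placeholder:
# List the most recent upgrader log files
ls -lt /nfs/shared_storage/pca_upgrader/log/ | head
# Search a specific log for failed tasks and error messages
grep -iE 'error|fail' /nfs/shared_storage/pca_upgrader/log/pca-upgrader_host_os_<timestamp>.log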
Upgrade the Management Node Operating System
The Oracle Linux host operating system of the management nodes must be upgraded one node at a time; a rolling upgrade of all management nodes is not possible. This upgrade process, which involves updating the kernel and system packages, must always be initiated from the management node that holds the cluster virtual IP. Thus, in a three-management-node cluster, when you have upgraded two management nodes, you must reassign the cluster virtual IP to one of the upgraded management nodes and run the final upgrade command from that node.
You must upgrade management nodes one at a time, using each one's internal IP address as a command parameter. To obtain the host IP addresses, use the Service CLI command show ManagementNode name=<node_name> and look for the Ip Address field in the output.
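For example (hypothetical, abridged output; only the Ip Address field is relevant here):
PCA-ADMIN> show ManagementNode name=pcamn01
Status: Success
Data:
[...]
  Ip Address = 100.96.2.33
[...]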
-
Gather the information that you need to run the command:
-
the location of the ISO image to upgrade from
-
the checksum used to verify that the ISO image is valid
-
the IP address of the management node for which you intend to upgrade the host operating system
-
Run the Service CLI from the management node that holds the management cluster virtual IP.
-
Log on to one of the management nodes and check the status of the cluster.
# ssh root@pcamn01
# pcs status
Cluster name: mncluster
Stack: corosync
Current DC: pcamn02 (version 1.1.23-1.0.1.el7-9acf116022) - partition with quorum
Online: [ pcamn01 pcamn02 pcamn03 ]
Full list of resources:
 scsi_fencing   (stonith:fence_scsi):   Stopped (disabled)
 Resource Group: mgmt-rg
     vip-mgmt-int     (ocf::heartbeat:IPaddr2):   Started pcamn02
     vip-mgmt-host    (ocf::heartbeat:IPaddr2):   Started pcamn02
     vip-mgmt-ilom    (ocf::heartbeat:IPaddr2):   Started pcamn02
     vip-mgmt-lb      (ocf::heartbeat:IPaddr2):   Started pcamn02
     vip-mgmt-ext     (ocf::heartbeat:IPaddr2):   Started pcamn02
     l1api            (systemd:l1api):            Started pcamn02
     haproxy          (ocf::heartbeat:haproxy):   Started pcamn02
     pca-node-state   (systemd:pca_node_state):   Started pcamn02
     dhcp             (ocf::heartbeat:dhcpd):     Started pcamn02
     hw-monitor       (systemd:hw_monitor):       Started pcamn02
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
In this example, the command output indicates that the node with host name pcamn02 currently holds the cluster virtual IP.
-
Log in to the management node with the virtual IP and launch the Service CLI.
# ssh pcamn02
# ssh admin@localhost -p 30006
PCA-ADMIN>
-
Enter the upgrade command.
Syntax (entered on a single line):
upgradeHost imageLocation=<path-to-iso> isoChecksum=<iso-file-checksum> hostIp=<management-node-ip>
Example:
PCA-ADMIN> upgradeHost hostIp=100.96.2.35 \
imageLocation=http://host.example.com/pca-<version>-<build>.iso \
isoChecksum=240420cfb9478f6fd026f0a5fa0e998e086275fc45e207fb5631e2e99732e192e8e9d1b4c7f29026f0a5f58dadc4d792d0cfb0279962838e95a0f0a5fa31dca7
Status: Success
Data:
  Service request has been submitted.
  Upgrade Job Id = 1632990827394-host-56156
  Upgrade Request Id = UWS-1a97a8d9-54ef-478d-a0c0-348a17ba6755
-
Use the request ID and the job ID to check the status of the upgrade process.
PCA-ADMIN> getUpgradeJobs
  id                         upgradeRequestId                           commandName   result
  --                         ----------------                           -----------   ------
  1632990827394-host-56156   UWS-1a97a8d9-54ef-478d-a0c0-348a17ba6755   host          Passed
PCA-ADMIN> getUpgradeJob upgradeJobId=1632990827394-host-56156
Status: Success
Data:
  Upgrade Request Id = UWS-1a97a8d9-54ef-478d-a0c0-348a17ba6755
  Composition Id = 1
  Name = host
  Pid = 56156
  Host = pcamn02
  Log File = /nfs/shared_storage/pca_upgrader/log/pca-upgrader_host_os_2021_09_25-05.47.02.log
[...]
-
When the first management node host operating system upgrade has completed successfully, execute the same command for the next management node.
PCA-ADMIN> upgradeHost hostIp=100.96.2.33 \
imageLocation=http://host.example.com/pca-<version>-<build>.iso \
isoChecksum=240420cfb9478f6fd026f0a5fa0e998e086275fc45e207fb5631e2e99732e192e8e9d1b4c7f29026f0a5f58dadc4d792d0cfb0279962838e95a0f0a5fa31dca7
-
When the second management node host operating system upgrade has completed successfully, exit the Service CLI and move the cluster virtual IP to one of the upgraded nodes.
PCA-ADMIN> exit
Connection to localhost closed.
# pcs resource move mgmt-rg pcamn01
# pcs status
Cluster name: mncluster
Stack: corosync
[...]
 scsi_fencing   (stonith:fence_scsi):   Stopped (disabled)
 Resource Group: mgmt-rg
     vip-mgmt-int   (ocf::heartbeat:IPaddr2):   Started pcamn01
     vip-mgmt-host  (ocf::heartbeat:IPaddr2):   Started pcamn01
[...]
Moving the cluster virtual IP to another management node should take only a few seconds.
-
Log in to the management node with the virtual IP and launch the Service CLI to execute the host operating system upgrade for the final management node.
# ssh pcamn01
# ssh admin@localhost -p 30006
PCA-ADMIN> upgradeHost hostIp=100.96.2.34 \
imageLocation=http://host.example.com/pca-<version>-<build>.iso \
isoChecksum=240420cfb9478f6fd026f0a5fa0e998e086275fc45e207fb5631e2e99732e192e8e9d1b4c7f29026f0a5f58dadc4d792d0cfb0279962838e95a0f0a5fa31dca7
When this upgrade has completed successfully, the operating system on all management nodes is up-to-date.
Caution:
After upgrade, the management nodes must all be rebooted for the changes to take effect. This cannot be done from the Service CLI.
-
Reboot all three management nodes either from the Oracle Linux command line or through the ILOM.
-
Verify which management node owns the cluster virtual IP. Run this command from the command line of one of the management nodes:
# pcs status
[...]
 Resource Group: mgmt-rg
     vip-mgmt-int   (ocf::heartbeat:IPaddr2):   Started pcamn01
     vip-mgmt-host  (ocf::heartbeat:IPaddr2):   Started pcamn01
     vip-mgmt-ilom  (ocf::heartbeat:IPaddr2):   Started pcamn01
     vip-mgmt-lb    (ocf::heartbeat:IPaddr2):   Started pcamn01
     vip-mgmt-ext   (ocf::heartbeat:IPaddr2):   Started pcamn01
[...]
-
Reboot the other two management nodes in the cluster. (In this example: pcamn02 and pcamn03.)
-
Move the virtual IP to one of the rebooted management nodes. (In this example: pcamn02.)
# pcs resource move mgmt-rg pcamn02
# pcs status
Cluster name: mncluster
Stack: corosync
[...]
 scsi_fencing   (stonith:fence_scsi):   Stopped (disabled)
 Resource Group: mgmt-rg
     vip-mgmt-int   (ocf::heartbeat:IPaddr2):   Started pcamn02
     vip-mgmt-host  (ocf::heartbeat:IPaddr2):   Started pcamn02
[...]
-
Reboot the last of the three upgraded management nodes. (In this example: pcamn01.)
Upgrade the MySQL Cluster Database
It is assumed that the database upgrade is performed after the management node host operating system upgrade. As the ISO image has already been unpacked on shared storage during the operating system upgrade, the ISO path and checksum are not considered mandatory parameters for the database upgrade command.
The MySQL Cluster database upgrade is a rolling upgrade: with one command the upgrade is executed on each of the three management nodes.
-
Enter the upgrade command.
PCA-ADMIN> upgradeMySQL
Status: Success
Data:
  Service request has been submitted.
  Upgrade Job Id = 1632995409822-mysql-83013
  Upgrade Request Id = UWS-77bc0c30-7ff5-4c50-ad09-6f96907e22e1
-
Use the request ID and the job ID to check the status of the upgrade process.
PCA-ADMIN> getUpgradeJobs
  id                          upgradeRequestId                           commandName   result
  --                          ----------------                           -----------   ------
  1632995409822-mysql-83013   UWS-77bc0c30-7ff5-4c50-ad09-6f96907e22e1   mysql         Passed
PCA-ADMIN> getUpgradeJob upgradeJobId=1632995409822-mysql-83013
Status: Success
Data:
  Upgrade Request Id = UWS-77bc0c30-7ff5-4c50-ad09-6f96907e22e1
  Name = mysql
  Pid = 83013
  Host = pcamn01
  Log File = /nfs/shared_storage/pca_upgrader/log/pca-upgrader_mysql_cluster_2021_09_25-09.21.16.log
[...]
Upgrade the Secret Service
It is assumed that the secret service upgrade is performed after the management node host operating system upgrade. As the ISO image has already been unpacked on shared storage during the operating system upgrade, the ISO path and checksum are not considered mandatory parameters for the secret service upgrade commands.
The secret service contains two components that need to be upgraded separately in this particular order: first Etcd, then Vault. The Etcd and Vault upgrades are rolling upgrades: each upgrade is executed on all three management nodes with one command.
-
Enter the two upgrade commands. Wait until one upgrade is finished before entering the second command.
PCA-ADMIN> upgradeEtcd
Status: Success
Data:
  Service request has been submitted.
  Upgrade Job Id = 1632826770954-etcd-26973
  Upgrade Request Id = UWS-fec15d32-fc2b-48bd-9ae0-62f49587a284
PCA-ADMIN> upgradeVault
Status: Success
Data:
  Service request has been submitted.
  Upgrade Job Id = 1632850933353-vault-16966
  Upgrade Request Id = UWS-352df3d1-c21f-441b-8f6e-9381ac075906
-
Use the request ID and the job ID to check the status of the upgrade process.
PCA-ADMIN> getUpgradeJobs
  id                          upgradeRequestId                           commandName   result
  --                          ----------------                           -----------   ------
  1632850933353-vault-16966   UWS-352df3d1-c21f-441b-8f6e-9381ac075906   vault         Passed
  1632826770954-etcd-26973    UWS-fec15d32-fc2b-48bd-9ae0-62f49587a284   etcd          Passed
PCA-ADMIN> getUpgradeJob upgradeJobId=1632850933353-vault-16966
Status: Success
Data:
  Upgrade Request Id = UWS-352df3d1-c21f-441b-8f6e-9381ac075906
  Name = vault
  Pid = 16966
  Host = pcamn02
  Log File = /nfs/shared_storage/pca_upgrader/log/pca-upgrader_vault_2021_09_25-10.38.25.log
[...]
Upgrade the Kubernetes Cluster
The target release of this upgrade path does not contain new versions of the Kubernetes container orchestration packages. You may skip this component.
Upgrade the Microservices
The microservices upgrade covers both the internal services of the platform layer, and the administrative and user-level services exposed through the infrastructure services layer.
The containerized microservices have their own separate upgrade mechanism. A service is upgraded if a new Helm deployment chart and container image are found in the ISO image. When a new deployment chart is detected during the upgrade process, the pods running the services are restarted with the new container image.
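If you want to observe the pods restarting with new container images during the upgrade, a sketch like the following can help, assuming root access to kubectl on a management node; this is not part of the documented procedure, and the pod and namespace names are placeholders.
# Watch pods cycle as new Helm deployment charts are applied
kubectl get pods --all-namespaces --watch
# Inspect the container image a restarted pod is running
kubectl describe pod <pod-name> -n <namespace> | grep Image: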
It is assumed that the microservices upgrade is performed after the management node host operating system upgrade. As the ISO image has already been unpacked on shared storage during the operating system upgrade, the ISO path and checksum are not considered mandatory parameters for the microservices upgrade command.
-
Enter the upgrade command.
PCA-ADMIN> upgradePlatform
Status: Success
Data:
  Service request has been submitted.
  Upgrade Job Id = 1632850650836-platform-68465
  Upgrade Request Id = UWS-26dba234-9b52-426d-836c-ac11f37e717f
-
Use the request ID and the job ID to check the status of the upgrade process.
PCA-ADMIN> getUpgradeJobs
  id                             upgradeRequestId                           commandName   result
  --                             ----------------                           -----------   ------
  1632850650836-platform-68465   UWS-26dba234-9b52-426d-836c-ac11f37e717f   platform      Passed
PCA-ADMIN> getUpgradeJob upgradeJobId=1632850650836-platform-68465
Status: Success
Data:
  Upgrade Request Id = UWS-26dba234-9b52-426d-836c-ac11f37e717f
  Name = kubernetes
  Pid = 68465
  Host = pcamn02
  Log File = /nfs/shared_storage/pca_upgrader/log/pca-upgrader_platform_services_2021_09_26-20.48.41.log
[...]
Upgrade Component Firmware
Caution:
Ensure that all compute nodes and the management node cluster have been upgraded.
Firmware is included in the ISO image for all component ILOMs, for the ZFS Storage Appliance, and for the switches. Select the instructions below for the component type you want to upgrade.
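All firmware locations referenced in the procedures below are subdirectories of the unpacked ISO image on shared storage. A hypothetical listing, with version strings as placeholders:
# ls /nfs/shared_storage/pca_firmware/
X9-2  network  zfs
# ls /nfs/shared_storage/pca_firmware/network/cisco/
n9000-epld.<version>.img  nxos.<version>.bin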
Upgrade ZFS Storage Appliance Operating Software
To upgrade the operating software of the ZFS Storage Appliance, you provide the path to the firmware package in the unpacked ISO image. The IP addresses of the storage controllers are known to the system, and a single upgrade command initiates a rolling upgrade of both controllers. If a new ILOM firmware version is included for the two controllers, it is installed as part of the ZFS Storage Appliance upgrade process.
Caution:
Ensure that no users are logged in to the ZFS Storage Appliance or the storage controller ILOMs during the upgrade process.
Do not make storage configuration changes while an upgrade is in progress. While controllers are running different software versions, configuration changes made to one controller are not propagated to its peer controller.
During the firmware upgrade, the storage controllers are placed in active/passive mode. They automatically return to active/active once the upgrade is complete.
Before You Begin
- From a management node, set the provisioning lock by issuing this command:
pca-admin locks set system provisioning
- Perform the ZFS Storage Appliance upgrade using either the Service Web UI or the Service CLI procedure below.
- Release the provisioning lock.
pca-admin locks unset system provisioning
- Confirm the lock state.
pca-admin locks show system
Using the Service CLI
-
Gather the information that you need to run the command: the path to the AK-NAS firmware package in the unpacked ISO image.
-
Enter the upgrade command.
Syntax:
upgradeZfssa imageLocation=<path-to-firmware>
Example:
PCA-ADMIN> upgradeZfssa imageLocation="file:///nfs/shared_storage/pca_firmware/zfs/ak-nas-<version>.pkg"
Status: Success
Data:
  Service request has been submitted.
  Upgrade Job Id = 1632914107346-zfssa-83002
  Upgrade Request Id = UWS-881af57f-5dfb-4c75-8026-9f00cf3eb7c9
-
Use the request ID and the job ID to check the status of the upgrade process.
PCA-ADMIN> getUpgradeJobs
  id                          upgradeRequestId                           commandName   result
  --                          ----------------                           -----------   ------
  1632914107346-zfssa-83002   UWS-881af57f-5dfb-4c75-8026-9f00cf3eb7c9   zfssa         Passed
PCA-ADMIN> getUpgradeJob upgradeJobId=1632914107346-zfssa-83002
Data:
  Upgrade Request Id = UWS-881af57f-5dfb-4c75-8026-9f00cf3eb7c9
  Name = zfssa
  Pid = 83002
  Host = pcamn02
  Log File = /nfs/shared_storage/pca_upgrader/log/pca-upgrader_zfssa_ak_2021_09_29-11.15.07.log
[...]
Using the Service Web UI
-
In the navigation menu, click Upgrade & Patching.
-
In the top-right corner of the Upgrade Jobs page, click Create Upgrade or Patch.
The Create Request window appears. Choose Upgrade as the Request Type.
-
Select the appropriate upgrade request type: Upgrade Zfssa.
-
Fill out the upgrade request parameters:
-
Advanced Options JSON: Optionally, add a JSON string to provide additional command parameters.
-
Image Location: Enter the path to the location where the firmware package is stored. ZFS Storage Appliance operating software is stored as a *.pkg file in the /pca_firmware/zfs/ subdirectory of the unpacked ISO image.
Log Level: Optionally, select a specific log level for the upgrade log file. The default log level is "Information". For maximum detail, select "Debug".
-
Click Create Request.
The new upgrade request appears in the Upgrade Jobs table.
Upgrade ILOMs
ILOM upgrades can be applied to management nodes and compute nodes. Firmware packages may be different per component type, so be sure to select the correct one from the firmware directory. You must upgrade ILOMs one at a time, using each one's internal IP address as a command parameter.
To obtain the ILOM IP addresses, use the Service CLI command show ComputeNode name=<node_name> or show ManagementNode name=<node_name> and look for the ILOM Ip Address field in the output.
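For example (hypothetical, abridged output; the address shown is the one used in the upgrade example below):
PCA-ADMIN> show ComputeNode name=pcacn003
Status: Success
Data:
[...]
  ILOM Ip Address = 100.96.0.66
[...]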
Caution:
You must NOT upgrade the ILOM of the management node that holds the management virtual IP address, and thus the primary role in the cluster. To upgrade its ILOM, first reboot the management node in question so that another node in the cluster takes over the primary role. Once the node has rebooted completely, you can proceed with the ILOM upgrade.
To determine which management node has the primary role in the cluster, log in to any management node and run the command pcs status.
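For example, the following one-liner, run from any management node, shows which node the cluster resources are started on; the grep pattern is illustrative:
# pcs status | grep 'Started pcamn'
     vip-mgmt-int   (ocf::heartbeat:IPaddr2):   Started pcamn02
     vip-mgmt-host  (ocf::heartbeat:IPaddr2):   Started pcamn02
[...]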
Using the Service CLI
-
Gather the information that you need to run the command:
-
the IP address of the ILOM for which you intend to upgrade the firmware
-
the path to the firmware package file in the unpacked ISO image
-
Enter the upgrade command.
Syntax (entered on a single line):
upgradeIlom imageLocation=<path-to-firmware> hostIp=<ilom-ip>
Example:
PCA-ADMIN> upgradeIlom \
imageLocation="file:///nfs/shared_storage/pca_firmware/X9-2/.../ILOM-<version>-ORACLE_SERVER_X9-2-rom.pkg" \
hostIp=100.96.0.66
Status: Success
Data:
  Service request has been submitted.
  Upgrade Job Id = 1620921089806-ilom-21480
  Upgrade Request Id = UWS-732d6fce-9f06-4329-b972-d093bee40010
-
Use the request ID and the job ID to check the status of the upgrade process.
PCA-ADMIN> getUpgradeJobs
  id                         upgradeRequestId                           commandName   result
  --                         ----------------                           -----------   ------
  1620921089806-ilom-21480   UWS-732d6fce-9f06-4329-b972-d093bee40010   ilom          Passed
  1632926926773-host-32993   UWS-fef3b663-45b7-4177-a041-26f73e68848d   host          Passed
  1632990827394-host-56156   UWS-1a97a8d9-54ef-478d-a0c0-348a17ba6755   host          Passed
  1632990493570-host-6646    UWS-4c78f3ef-ac42-4f32-9483-bb43a309faa3   host          Passed
PCA-ADMIN> getUpgradeJob upgradeJobId=1620921089806-ilom-21480
Status: Success
Data:
  Upgrade Request Id = UWS-732d6fce-9f06-4329-b972-d093bee40010
  Name = ilom
  Pid = 21480
  Host = pcamn02
  Log File = /nfs/shared_storage/pca_upgrader/log/pca-upgrader_ilom_firmware_2021_09_24-11.18.31.log
[...]
Using the Service Web UI
-
In the navigation menu, click Upgrade & Patching.
-
In the top-right corner of the Upgrade Jobs page, click Create Upgrade or Patch.
The Create Request window appears. Choose Upgrade as the Request Type.
-
Select the appropriate upgrade request type: Upgrade ILOM.
-
Fill out the upgrade request parameters:
-
Advanced Options JSON: Optionally, add a JSON string to provide additional command parameters.
-
Host IP: Enter the component's assigned IP address in the ILOM network. This is an IP address in the internal 100.96.0.0/23 range.
-
Image Location: Enter the path to the location where the firmware package is stored. ILOM firmware is stored as a *.pkg file in the /pca_firmware/<component>/ subdirectory of the unpacked ISO image.
Log Level: Optionally, select a specific log level for the upgrade log file. The default log level is "Information". For maximum detail, select "Debug".
-
Click Create Request.
The new upgrade request appears in the Upgrade Jobs table.
At the end of the upgrade, the ILOM itself is rebooted automatically. However, the server component also needs to be rebooted for all changes to take effect. It is the administrator's responsibility to manually reboot the management node or compute node after a successful ILOM upgrade.
Caution:
Always verify the cluster state before rebooting a management node. Consult the Oracle Private Cloud Appliance Release Notes for more information: refer to the known issue "Rebooting a Management Node while the Cluster State is Unhealthy Causes Platform Integrity Issues".
Upgrade Switch Software
The appliance rack contains three categories of Cisco Nexus switches: a management switch, two leaf switches, and two spine switches. They all run the same Cisco NX-OS network operating software. You must perform the upgrades in this order: leaf switches first, then spine switches, and finally the management switch.
All switch categories use the same NX-OS binary file; specify that file with each upgrade command. Only one command per switch category is required: the leaf switches and the spine switches are each upgraded as a pair.
Some versions of the network operating software consist of two files: a binary file and an additional EPLD (electronic programmable logic device) image. If the new firmware includes an EPLD file, upgrade the NX-OS software first, then update the EPLD image.
Using the Service CLI
-
Gather the information that you need to run the command:
-
the type of switch to upgrade (spine, leaf, management)
-
the path to the firmware binary file in the unpacked ISO image
-
if present with the new firmware version, the path to the EPLD upgrade file in the unpacked ISO image
-
Enter the upgrade command.
Syntax (entered on a single line):
upgradeSwitch switchType=[MGMT | SPINE | LEAF] imageLocation=<path-to-firmware> (epld=<path-to-epld-file>)
Example:
PCA-ADMIN> upgradeSwitch switchType=LEAF \
imageLocation="file:///nfs/shared_storage/pca_firmware/network/cisco/nxos.<version>.bin" \
epld="file:///nfs/shared_storage/pca_firmware/network/cisco/n9000-epld.<version>.img"
Status: Success
Data:
  Service request has been submitted.
  Upgrade Job Id = 1630511206512-cisco-20299
  Upgrade Request Id = UWS-44688fe5-b4f8-407f-a1b5-8cd1b685c2c3
-
Use the request ID and the job ID to check the status of the upgrade process.
PCA-ADMIN> getUpgradeJobs
  id                          upgradeRequestId                           commandName   result
  --                          ----------------                           -----------   ------
  1630511206512-cisco-20299   UWS-44688fe5-b4f8-407f-a1b5-8cd1b685c2c3   cisco         Passed
PCA-ADMIN> getUpgradeJob upgradeJobId=1630511206512-cisco-20299
Data:
  Upgrade Request Id = UWS-44688fe5-b4f8-407f-a1b5-8cd1b685c2c3
  Name = cisco
  Pid = 20299
  Host = pcamn02
  Log File = /nfs/shared_storage/pca_upgrader/log/pca-upgrader_cisco_firmware_2021_09_24-14.46.46.log
[...]
Using the Service Web UI
-
In the navigation menu, click Upgrade & Patching.
-
In the top-right corner of the Upgrade Jobs page, click Create Upgrade or Patch.
The Create Request window appears. Choose Upgrade as the Request Type.
-
Select the appropriate upgrade request type: Upgrade Switch.
-
Fill out the upgrade request parameters:
-
Advanced Options JSON: Optionally, add a JSON string to provide additional command parameters.
-
EPLD: If required for this firmware version, enter the path to the location where the EPLD image file is stored. If present, an EPLD file is an *.img file stored alongside the NX-OS operating software in the /pca_firmware/network/cisco/ subdirectory of the unpacked ISO image.
subdirectory of the unpacked ISO image. -
Image Location: Enter the path to the location where the firmware package is stored. Cisco NX-OS network operating software is stored as a *.bin file in the /pca_firmware/network/cisco/ subdirectory of the unpacked ISO image.
subdirectory of the unpacked ISO image. -
Log Level: Optionally, select a specific log level for the upgrade log file. The default log level is "Information". For maximum detail, select "Debug".
-
Switch Type: Select the switch type you intend to upgrade. The preferred upgrade order is as follows: leaf switches first, then spine switches, and finally the management switch.
-
Click Create Request.
The new upgrade request appears in the Upgrade Jobs table.
-
When the upgrade has completed successfully, repeat this procedure for each remaining switch type that requires upgrading.
Upgrade Oracle Cloud Infrastructure Images
Caution:
Ensure that all system components have been upgraded.
When new Oracle Cloud Infrastructure Images become available and supported for Oracle Private Cloud Appliance, you can make them available for use in all existing tenancies with a single command. The images are stored in the /nfs/shared_storage/oci_compute_images directory on the ZFS Storage Appliance.
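For example, assuming shell access to a management node, you can check which image files are currently staged; the listing below is illustrative, with one file name taken from the example output later in this section:
# ls /nfs/shared_storage/oci_compute_images/
uln-pca-Oracle-Linux-8-2022.08.29_0.oci
[...]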
An upgrade adds new Oracle Cloud Infrastructure Images to your environment, but it never removes any existing images. If you no longer need an image, you have the option to delete it.
Importing Oracle Cloud Infrastructure Images
Run the importPlatformImages command to make all images in /nfs/shared_storage/oci_compute_images on the management node available in all compartments in all tenancies.
PCA-ADMIN> importPlatformImages
Status: Running
JobId: f21b9d86-ccf2-4bd3-bab9-04dc3adb2966
Use the JobId to get more detailed information about the job. In the following example, no new images have been delivered:
PCA-ADMIN> show job id=f21b9d86-ccf2-4bd3-bab9-04dc3adb2966
Status: Success
Data:
  Id = f21b9d86-ccf2-4bd3-bab9-04dc3adb2966
  Type = Job
  Done = true
  Name = OPERATION
  Progress Message = There are no new platform image files to import
  Run State = Succeeded
Listing Oracle Cloud Infrastructure Images
Use the listplatformImages command to list all Oracle Cloud Infrastructure images that have been imported from the management nodes' shared storage. If you performed an upgrade but have not yet run importPlatformImages, the list might not show all images that are in shared storage.
PCA-ADMIN> listplatformImages
Status: Success
Data:
  id                        displayName                                lifecycleState
  --                        -----------                                --------------
  ocid1.image.unique_ID_1   uln-pca-Oracle-Linux-7.9-2022.08.29_0...   AVAILABLE
  ocid1.image.unique_ID_2   uln-pca-Oracle-Linux-8-2022.08.29_0.oci    AVAILABLE
  ocid1.image.unique_ID_3   uln-pca-Oracle-Solaris-11.4.35-2021.0...   AVAILABLE
Compute Enclave users see the same lifecycleState that listplatformImages shows. Shortly after running importPlatformImages, both listplatformImages and the Compute Enclave might show new images with lifecycleState IMPORTING. When the importPlatformImages job is complete, both listplatformImages and the Compute Enclave show the images as AVAILABLE.
If you delete an Oracle Cloud Infrastructure image, both listplatformImages and the Compute Enclave show the image as DELETING or DELETED.
Deleting Oracle Cloud Infrastructure Images
Use the following command to delete a specified Oracle Cloud Infrastructure image. The image shows as DELETING and then DELETED in listplatformImages output and in the Compute Enclave, and eventually is no longer listed at all. However, the image file is not deleted from the management node, and running the importPlatformImages command re-imports the image, making it available again in all compartments.
PCA-ADMIN> deleteplatformImage imageId=ocid1.image.unique_ID_3
Status: Running
JobId: 401567c3-3662-46bb-89d2-b7ad1541fa2d
PCA-ADMIN> listplatformImages
Status: Success
Data:
  id                        displayName                                lifecycleState
  --                        -----------                                --------------
  ocid1.image.unique_ID_1   uln-pca-Oracle-Linux-7.9-2022.08.29_0...   AVAILABLE
  ocid1.image.unique_ID_2   uln-pca-Oracle-Linux-8-2022.08.29_0.oci    AVAILABLE
  ocid1.image.unique_ID_3   uln-pca-Oracle-Solaris-11.4.35-2021.0...   DELETED