Upgrading the Management Node Operating System with Appliance Software 3.0.2-b852928 or Earlier

Caution:

Ensure that all preparation steps for system upgrade have been completed. For instructions, see Preparing the Upgrade Environment.

The Oracle Linux host operating system of the management nodes must be upgraded one node at a time; a single command cannot upgrade all management nodes in a rolling fashion. The upgrade process, which updates the kernel and system packages, must always be initiated from the management node that holds the cluster virtual IP. Consequently, in a three-management-node cluster, you first upgrade the two nodes that do not hold the virtual IP, then reassign the cluster virtual IP to one of the upgraded nodes and execute the final upgrade command from that node.

Each upgrade command targets a single management node, identified by its internal IP address, which is passed as a command parameter. To obtain a node's IP address, use the Service CLI command show ManagementNode name=<node_name> and look for the Ip Address field in the output.
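
For example, a lookup for the node named pcamn02 might produce output similar to the following (illustrative transcript, abbreviated to the relevant field):

  PCA-ADMIN> show ManagementNode name=pcamn02
  Command: show ManagementNode name=pcamn02
  Status: Success
  Data:
    Ip Address = 100.96.2.34
    [...]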

Using the Service Web UI

  1. In the navigation menu, click Upgrade & Patching.

  2. In the top-right corner of the Upgrade Jobs page, click Create Upgrade or Patch.

    The Create Request window appears. Choose Upgrade as the Request Type.

  3. Select the appropriate upgrade request type: Upgrade Host.

  4. Fill out the upgrade request parameters:

    • Advanced Options JSON: Optionally, add a JSON string to provide additional command parameters.

    • Host IP: Enter the management node's assigned IP address in the internal administration network. This is an IP address in the internal 100.96.2.0/23 range.

    • Image Location: This parameter is deprecated.

    • ISO Checksum: This parameter is deprecated.

    • Log Level: Optionally, select a specific log level for the upgrade log file. The default log level is "Information". For maximum detail, select "Debug".

  5. Click Create Request.

    The new upgrade request appears in the Upgrade Jobs table.

    Note:

    After the upgrade, all management nodes must be rebooted for the changes to take effect. However, the reboot is part of the upgrade process, so no administrator action is required.

Using the Service CLI

  1. Get the IP address of the management node whose host operating system you intend to upgrade.

  2. Run the Service CLI from the management node that holds the management cluster virtual IP.

    1. Log on to one of the management nodes and check the status of the cluster.

      # ssh root@pcamn01
      # pcs status
      Cluster name: mncluster
      Stack: corosync
      Current DC: pcamn02 (version 1.1.23-1.0.1.el7-9acf116022) - partition with quorum
      
      Online: [ pcamn01 pcamn02 pcamn03 ]
      
      Full list of resources:
      
       scsi_fencing         (stonith:fence_scsi):          Stopped (disabled)
       Resource Group: mgmt-rg
           vip-mgmt-int     (ocf::heartbeat:IPaddr2):      Started    pcamn02
           vip-mgmt-host    (ocf::heartbeat:IPaddr2):      Started    pcamn02
           vip-mgmt-ilom    (ocf::heartbeat:IPaddr2):      Started    pcamn02
           vip-mgmt-lb      (ocf::heartbeat:IPaddr2):      Started    pcamn02
           vip-mgmt-ext     (ocf::heartbeat:IPaddr2):      Started    pcamn02
           l1api            (systemd:l1api):               Started    pcamn02
           haproxy          (ocf::heartbeat:haproxy):      Started    pcamn02
           pca-node-state   (systemd:pca_node_state):      Started    pcamn02
           dhcp             (ocf::heartbeat:dhcpd):        Started    pcamn02
           hw-monitor       (systemd:hw_monitor):          Started    pcamn02
           healthcheck      (systemd:healthcheck):         Started    pcamn02
           rabbitmq-monitor (systemd:rabbitmq_monitor):    Started    pcamn02
      
      Daemon Status:
        corosync: active/enabled
        pacemaker: active/enabled
        pcsd: active/enabled

      In this example, all resources in the mgmt-rg group are started on pcamn02, indicating that pcamn02 currently holds the cluster virtual IP.

    2. Log in to the management node that holds the virtual IP and launch the Service CLI.

      # ssh pcamn02
      # ssh admin@localhost -p 30006
      PCA-ADMIN>

  3. Enter the upgrade command.

    Syntax (entered on a single line):

    upgradeHost 
    hostIp=<management-node-ip>

    Example:

    PCA-ADMIN> upgradeHost hostIp=100.96.2.35
    Command: upgradeHost hostIp=100.96.2.35 
    Status: Success
    Time: 2021-09-25 05:47:02,735 UTC
    Data:
      Service request has been submitted. Upgrade Job Id = 1632990827394-host-56156 Upgrade Request Id = UWS-1a97a8d9-54ef-478d-a0c0-348a17ba6755

  4. Use the request ID and the job ID to check the status of the upgrade process.

    PCA-ADMIN> getUpgradeJobs
      id                               upgradeRequestId                           commandName   result
      --                               ----------------                           -----------   ------
      1632990827394-host-56156         UWS-1a97a8d9-54ef-478d-a0c0-348a17ba6755   host          Passed
    
    PCA-ADMIN> getUpgradeJob upgradeJobId=1632990827394-host-56156
    Command: getUpgradeJob upgradeJobId=1632990827394-host-56156
    Status: Success
    Time: 2021-09-25 05:54:28,054 UTC
    Data:
      Upgrade Request Id = UWS-1a97a8d9-54ef-478d-a0c0-348a17ba6755
      Composition Id = 1
      Name = host
      Start Time = 2021-09-25T05:47:02
      End Time = 2021-09-25T05:48:38
      Pid = 56156
      Host = pcamn02
      Log File = /nfs/shared_storage/pca_upgrader/log/pca-upgrader_host_os_2021_09_25-05.47.02.log
      Arguments = {"verify_only":false,"upgrade":false,"diagnostics":false,"host_ip":"100.96.2.35","result_override":null,"log_level":null,"switch_type":null,"precheck_status":false,"task_time":0,"fail_halt":false,"fail_upgrade":null,"component_names":null,"upgrade_to":null,"image_location":"http://host.example.com/pca-3.0.1-b535176.iso","epld_image_location":null,"expected_iso_checksum":null,"checksum":"240420cfb9478f6fd026f0a5fa0e998e086275fc45e207fb5631e2e99732e192e8e9d1b4c7f29026f0a5f58dadc4d792d0cfb0279962838e95a0f0a5fa31dca7","composition_id":"1","request_id":"UWS-1a97a8d9-54ef-478d-a0c0-348a17ba6755","display_task_plan":false,"dry_run_tasks":false}
      Status = Passed
      Execution Time(sec) = 96
      Tasks 1 - Name = Validate Image Location
      Tasks 1 - Description = Verify that the image exists at the specified location and is correctly named
      Tasks 1 - Time = 2021-09-25T05:47:02
      Tasks 2 - Name = Validate Image Location
    [...]
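
    The Log File field in the job details shows the full path of the upgrade log on shared storage. To follow the upgrade in more detail, you can tail that file from a management node, for example (path taken from the output above):

    # tail -f /nfs/shared_storage/pca_upgrader/log/pca-upgrader_host_os_2021_09_25-05.47.02.log
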
  5. When the host operating system upgrade of the first management node has completed successfully, execute the same command for the next management node.

    PCA-ADMIN> upgradeHost hostIp=100.96.2.33

  6. When the host operating system upgrade of the second management node has completed successfully, exit the Service CLI and move the cluster virtual IP to one of the upgraded nodes.

    PCA-ADMIN> exit
    Connection to localhost closed.
    # pcs resource move mgmt-rg pcamn01
    # pcs status
    Cluster name: mncluster
    Stack: corosync
    [...]
     scsi_fencing   (stonith:fence_scsi):   Stopped (disabled)
     Resource Group: mgmt-rg
         vip-mgmt-int       (ocf::heartbeat:IPaddr2):       Started pcamn01
         vip-mgmt-host      (ocf::heartbeat:IPaddr2):       Started pcamn01
    [...]

    Moving the cluster virtual IP to another management node typically takes only a few seconds.
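
    If you want to double-check which node now holds the virtual IP before continuing, you can filter the cluster status for one of the VIP resources, for example:

    # pcs status | grep vip-mgmt-int
         vip-mgmt-int       (ocf::heartbeat:IPaddr2):       Started pcamn01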

  7. Log in to the management node that now holds the virtual IP and launch the Service CLI to execute the host operating system upgrade for the final management node.

    # ssh pcamn01
    # ssh admin@localhost -p 30006
    PCA-ADMIN> upgradeHost hostIp=100.96.2.34

    When this upgrade has completed successfully, the operating system on all management nodes is up to date.
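
    Optionally, list the upgrade jobs once more to verify that all three host upgrade jobs report a Passed result (output abbreviated; only the job shown earlier is listed here):

    PCA-ADMIN> getUpgradeJobs
      id                               upgradeRequestId                           commandName   result
      --                               ----------------                           -----------   ------
      1632990827394-host-56156         UWS-1a97a8d9-54ef-478d-a0c0-348a17ba6755   host          Passed
    [...]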

    Note:

    After the upgrade, all management nodes must be rebooted for the changes to take effect. However, the reboot is part of the upgrade process, so no administrator action is required.