Patching Management and Compute Nodes

After you have completed either Configuring Private Cloud Appliance Direct Access or Configuring a Mirror Server, you are ready to patch the appliance.

Run these commands from the master management node.

  1. Verify that the repositories are configured.

    # /usr/sbin/pca-admin show uln-repo

    If repositories are not configured, the show uln-repo command shows the message "No patch repositories setup. Run create uln-repo to create repositories." See Configuring Private Cloud Appliance Direct Access or Configuring a Mirror Server.

  2. Update the ULN repositories.

    1. If necessary, set the proxy as you did in Configuring Private Cloud Appliance Direct Access or Configuring a Mirror Server.

    2. Update local ULN repositories.

      # /usr/sbin/pca-admin update uln-repo
      ************************************************************
       WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
      ************************************************************
      Are you sure [y/N]:y
       
      Status: Success

      The management node and compute node ULN local repositories are updated with the latest packages available.

      If any failure occurred, check the ovca.log file for messages.

    3. If a proxy is set, unset the proxy.
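If the repository update fails, the messages land in the ovca.log file on the management node. A minimal sketch of filtering that log for failures, assuming the default path /var/log/ovca.log; a small inline sample stands in for the real file here, and the message format is illustrative:

```shell
# Sample lines standing in for /var/log/ovca.log (assumed default path).
log_sample='[2024-01-10 10:01:22] INFO  Updating uln repositories
[2024-01-10 10:02:45] ERROR Failed to sync channel'

# Keep only lines that look like errors or failures.
errors=$(printf '%s\n' "$log_sample" | grep -iE 'error|fail')
printf '%s\n' "$errors"
```

Against the real file, the same filter would be `grep -iE 'error|fail' /var/log/ovca.log`.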

  3. Verify that the rack is in a stable state for patching.

    # /usr/sbin/pca_upgrader -V -t patch -c remote_mn_name

    The pca_upgrader pre-checks run and report the status of the rack both on the command line and in the log file:

    pca_upgrader_date_time_remote_mn_name_verify_patch.log

  4. Check whether a management node patch is available.

    # /usr/sbin/pca-admin show uln-repo
    ...
    Management Patch Repo Created Yes

    This example shows that a management node patch is available. If the value of Created is No, no management node patch is available.
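The Created flag can also be checked mechanically, for example before scripting the rest of the procedure. A minimal sketch, using a captured sample of the show uln-repo output in place of a live appliance; the exact field layout is an assumption and may differ on your release:

```shell
# Sample 'pca-admin show uln-repo' output (illustrative; run the real
# command on the master management node).
repo_status='Management Patch Repo Created Yes
Compute Patch Repo Created No'

# The last field of the matching line is the Yes/No flag.
mgmt=$(printf '%s\n' "$repo_status" | awk '/Management Patch Repo Created/ {print $NF}')
echo "Management node patch available: $mgmt"
```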

  5. Patch the remote management node.

    # /usr/sbin/pca_upgrader -U -t patch -c remote_mn_name

    The non-master management node is updated with the packages in the management ULN repository. The result is reported in the log file:

    pca_upgrader_date_time_remote_mn_name_patch.log

    The appliance fails over to the patched management node.

  6. Verify that the appliance failed over to the patched management node.

    The management node that you just patched should now be the master. The pca-check-master command shows True on the patched management node and False on the unpatched management node.
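The failover check can be scripted around pca-check-master. A minimal sketch, parsing sample output in place of running the command on the appliance; the "MASTER: True" output shape shown here is an assumption, so verify it against your release before relying on it:

```shell
# Sample pca-check-master output (assumed shape; run the real command
# on each management node).
check_output='NODE: 192.168.4.4 MASTER: True'

# The last field is the True/False master flag.
state=$(printf '%s\n' "$check_output" | awk '{print $NF}')
if [ "$state" = "True" ]; then
    echo "patched node is master"
else
    echo "failover has not completed"
fi
```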

  7. Patch the second (new remote) management node.

    # /usr/sbin/pca_upgrader -U -t patch -c remote_mn_name

  8. Check whether a compute node patch is available.

    # /usr/sbin/pca-admin show uln-repo
    ...
    Compute Patch Repo Created Yes

    This example shows that a compute node patch is available. If the value of Created is No, no compute node patch is available.

  9. Patch the compute nodes.

    Note:

    Before patching a compute node, use Oracle VM Manager to place the compute node into maintenance mode.

    # /usr/sbin/pca-admin update compute-node cn_name

    The compute node is updated with the packages in the compute ULN repository.

    The compute node reboots to complete the patch.
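When several compute nodes need the patch, the step above repeats once per node. A minimal dry-run sketch of that loop; the node names are placeholders, each node must first be placed in maintenance mode through Oracle VM Manager, and the echo makes this a dry run (remove it to execute):

```shell
# Hedged sketch: patch a list of compute nodes in sequence.
# cn1_name and cn2_name are placeholder node names.
cmds=$(for cn in cn1_name cn2_name; do
    # Dry run: print the command instead of executing it.
    echo "/usr/sbin/pca-admin update compute-node ${cn}"
done)
printf '%s\n' "$cmds"
```

Because each node reboots to complete the patch, patching them one at a time, as this loop does, avoids taking multiple nodes out of service at once.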