Upgrading Oracle Linux on the VM

To upgrade Oracle Linux from Oracle Linux 7 to Oracle Linux 8 on the VM:

  1. Create a dummy Oracle Resource Manager stack as follows:
    1. Sign in to OCI Console.
    2. Open the navigation menu and click Marketplace.
    3. Under Marketplace, click All Applications.
    4. Enter "Siebel" in the search bar.
    5. Click Siebel Cloud Manager (SCM).
    6. Select SCM version 25.4 or later.
    7. Select the compartment to create the stack in.
    8. Select the Oracle Standard Terms and Restrictions checkbox.
    9. Click Launch Stack.
    10. Enter the stack name, description, and tags (you can retain the default values) and click Next.
    11. Configure the variables for the infrastructure resources. Use the same values as the existing stack and click Next.
    12. Deselect the Run apply checkbox.
    13. Click Create.
  2. Create the execution plan for the stack resources and download the Terraform configuration as follows:
    1. Go to the stack created in step 1.
    2. On the Stack Details page, click Plan.
    3. Optionally, update the plan job name.
    4. Click Plan. The Plan job takes a few minutes to complete.
    5. After the Plan job succeeds, click the Plan job name in the Jobs table. The Plan job page appears.
    6. Click Download Terraform Configuration to download the configuration ZIP file.
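    Alternatively, the Plan job can be created and its Terraform configuration downloaded through OCI CLI. The following is a minimal sketch; the get-job-tf-config subcommand is an assumption and may differ by OCI CLI version:
      oci resource-manager job create-plan-job --stack-id <Stack OCID>
      oci resource-manager job get-job-tf-config --job-id <Plan Job OCID> --file <zipFilePath>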
  3. Edit the existing stack either through OCI Console or through OCI CLI:
    • Through OCI Console as follows:
      1. On the Stacks list page, click the existing stack that was used to deploy SCM. The stack page appears.
      2. Scroll down to the bottom left of the stack page, click the Variables tab under the Resources section, and make a note of all the variables and their values.
      3. Click Edit.
      4. Click Edit Stack. The Edit Stack page appears.
      5. Click Browse under the Terraform configuration source section.
      6. Browse and select the configuration ZIP file downloaded in Step 2.
      7. Click Open.
      8. Click Next.
      9. Click Browse under the OCI User private key section.
      10. Browse and upload the OCI private key file.
      11. Review all other variables for the infrastructure resources and ensure that their values match the values you noted in substep 2 of these OCI Console steps.
      12. Click Next.
      13. Do not select Run Apply.
      14. Click Save Changes.
      15. On the Stack Details page, click Plan. Wait for the Plan job to complete successfully.
      16. Verify the Plan job status in OCI Console. After the Plan job succeeds, review the Plan job logs to confirm that only one resource, the Oracle Linux 7 VM instance, is marked for destruction. Ensure that no other resources are marked for deletion.
      17. On the Stack Details page, click Apply. Wait for the Apply job to complete successfully.
    • Through OCI CLI:
      1. Update the stack with the new Terraform configuration ZIP file as follows:
        oci resource-manager stack update --stack-id <Stack OCID> --config-source <zipFilePath> --force

        The variables in the example have the following values:

        • <Stack OCID> is the OCID of the existing stack to update.
        • <zipFilePath> is the path of the Terraform configuration zip file downloaded in the previous step.
      2. Create the Plan job as follows:
        oci resource-manager job create-plan-job --stack-id <Stack OCID>

        The variable <Stack OCID> is the OCID of the existing stack to update.

        Wait for the Plan job to complete successfully.

      3. Verify the Plan job status in OCI Console.
      4. Review the Plan job logs to confirm that only one resource, the Oracle Linux 7 VM instance, is marked for destruction.
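        If you prefer to stay in OCI CLI, the Plan job state and logs can be checked with commands along these lines (a sketch; the get-job-logs-content subcommand and the --query expression are assumptions that may vary by OCI CLI version):
          oci resource-manager job get --job-id <Plan Job OCID> --query 'data."lifecycle-state"'
          oci resource-manager job get-job-logs-content --job-id <Plan Job OCID>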
      5. Create the Apply job as follows:
        oci resource-manager job create-apply-job --execution-plan-strategy FROM_PLAN_JOB_ID --stack-id <Stack OCID> --execution-plan-job-id <Plan Job OCID>
      6. Wait a few minutes for the Apply job to complete.
      7. Verify the status of the Apply job in OCI Console.
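        Similarly, the Apply job state can be checked from OCI CLI (same assumption about the --query expression as above):
          oci resource-manager job get --job-id <Apply Job OCID> --query 'data."lifecycle-state"'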
    Note: The Oracle Linux 7 VM instance is destroyed and a new Oracle Linux 8 VM instance is created. SCM is installed as a Podman container service on the new Oracle Linux 8 VM instance.
  4. Verify the deployment as follows:
    1. SSH into the SCM instance:
      ssh -i <ssh Private Key> opc@<SCM Host IP>
    2. Verify the status of the SCM container:
      sudo podman ps
    3. Verify the status of the cloudmanager container service:
      sudo systemctl status cloudmanager

      Ensure that the cloudmanager service is Active: active (running).
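      For a scripted check, systemctl can report only the active state; the following is a minimal sketch using standard systemd tooling:
        sudo systemctl is-active cloudmanager
      The command prints active when the service is running.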

    4. Retrieve the provisioned environment details:
      curl --location --request GET 'https://<CM_Instance_IP>:<Port>/scm/api/v1.0/environment/<env_id>' \
            --header 'Content-Type: application/json' \
            --header 'Authorization: Basic <Base 64 encoded user name and api_key>'
      The SCM service is functioning correctly if the response includes valid environment details.
      Note: If you are unable to retrieve the environment details, and the Apply job logs from the ORM stack indicate that a new File Storage System (FSS) is being created, see the troubleshooting section Handling Additional FSS created during OL8 Upgrade.
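      The Basic authorization value used in the request above can be generated from the user name and API key, assuming the standard <user name>:<api_key> format for Basic authentication:
        echo -n '<user name>:<api_key>' | base64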
  5. Migrate the new SCM features as follows:
    sudo podman exec -it cloudmanager bash
    cd /home/opc
    bash siebel-cloud-manager/scripts/cmapp/migration.sh

    Choose one of the options presented by the migration.sh script. To apply more than one option, run the script again for each additional option you require.

  6. Restart the SCM container as follows:
    cd /home/opc/cm_app/{CM_RESOURCE_PREFIX}/
    bash start_cmserver.sh <SCM VERSION>
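    After the restart, you can repeat the checks from step 4 to confirm that the container is back up:
      sudo podman ps
      sudo systemctl status cloudmanager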