Upgrading Oracle Linux on the VM

To upgrade Oracle Linux from Oracle Linux 7 to Oracle Linux 8 on the VM:

  1. Create a dummy Oracle Resource Manager stack as follows:
    1. Sign in to OCI Console.
    2. Open the navigation menu and click Marketplace.
    3. Under Marketplace, click All Applications.
    4. Enter "Siebel" in the search bar.
    5. Click Siebel Cloud Manager (SCM).
    6. Select SCM version 25.4 or later.
    7. Select the compartment to create the stack in.
    8. Select the Oracle Standard Terms and Restrictions checkbox.
    9. Click Launch Stack.
    10. Enter the stack name, description and tags (you can retain the default values) and click Next.
    11. Configure the variables for the infrastructure resources this stack will create and click Next.
    12. Deselect the Run apply checkbox.
    13. Click Create.
  2. Create the execution plan for the stack resources and download the Terraform configuration as follows:
    1. Go to the stack created in step 1.
    2. On the Stack Details page, click Plan.
    3. Optionally, update the plan job name.
    4. Click Plan. The Plan Job takes a few minutes to complete.
    5. Once it succeeds, click the plan job name in the Jobs table. The plan job page appears.
    6. Click Download Terraform Configuration to download the configuration ZIP file.
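As an alternative to the console download in the steps above, the stack's Terraform configuration can also be fetched with the OCI CLI. This is a minimal sketch, assuming the `oci` CLI is installed and configured; the `download_stack_config` helper name, the stack OCID, and the output file name are illustrative placeholders.

```shell
# Sketch: fetch a stack's Terraform configuration ZIP with the OCI CLI
# instead of the console download. Assumes the `oci` CLI is configured.
download_stack_config() {
  local stack_id=$1 out_file=$2
  # get-stack-tf-config writes the configuration ZIP to the given file.
  oci resource-manager stack get-stack-tf-config \
      --stack-id "$stack_id" --file "$out_file"
}

# Example (placeholder OCID):
# download_stack_config ocid1.ormstack.oc1..example terraform-config.zip
```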
  3. Edit the existing stack either through OCI Console or through OCI CLI:
    • Through OCI Console as follows:
      1. On the Stacks list page, click the existing stack that was used to deploy SCM. The stack page appears.
      2. Note the values of the resource prefix (resource_prefix), mount target IP (mount_target_ip) and file system export path (export_path) parameters.
      3. Click Edit.
      4. Click Edit Stack. The Edit Stack page appears.
      5. Click Browse under the Terraform configuration source section.
      6. Browse and select the configuration ZIP file downloaded in Step 2.
      7. Click Open.
      8. Click Next.
      9. Click Browse under the OCI User private key section.
      10. Browse and upload the OCI private key file.
      11. Review the other variables for the infrastructure resources and update as required.
        Note: Do not update the values of the resource prefix, mount target IP, and export path variables. Use the values noted in step 2 to ensure they remain the same after editing the stack.
      12. Click Next.
      13. Select Run Apply.
      14. Click Save Changes.
      15. Wait a few minutes for the Apply Job to succeed.
    • Through OCI CLI:
      1. Update the stack with the new terraform configuration ZIP file as follows:
        oci resource-manager stack update --stack-id <Stack OCID> --config-source <zipFilePath> --force

        The variables in the example have the following values:

        • <Stack OCID> is the OCID of the existing stack to update.
        • <zipFilePath> is the path of the Terraform configuration zip file downloaded in the previous step.
      2. Create the Plan Job as follows:
        oci resource-manager job create-plan-job --stack-id <Stack OCID>

        The variable <Stack OCID> is the OCID of the existing stack to update.

        Wait for the Plan Job to complete successfully.

      3. Verify the Plan Job status in OCI Console.
      4. Review the logs to confirm that they show the following messages:

        'module.compute.oci_core_instance.SiebelCM_Bastion must be replaced' and 'Plan: 1 to add, 0 to change, 1 to destroy.'

      5. Create Apply Job:
        oci resource-manager job create-apply-job --execution-plan-strategy FROM_PLAN_JOB_ID --stack-id <Stack OCID> --execution-plan-job-id <Plan Job OCID>
      6. Wait a few minutes for the Apply Job to complete.
      7. Verify the status of the Apply Job in OCI Console.
    Note: The Oracle Linux 7 VM instance is destroyed and a new Oracle Linux 8 VM instance is created. SCM will be installed as a Podman container service on the new Oracle Linux 8 VM instance.
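When scripting the CLI flow above, the "wait for the job" steps can be automated by polling the job's lifecycle state. This is a sketch, assuming a configured `oci` CLI; the `wait_for_job` helper and the 30-second polling interval are illustrative, not part of SCM.

```shell
# Sketch: poll an OCI Resource Manager job until it reaches a terminal state.
# `wait_for_job` is an illustrative helper; assumes the `oci` CLI is configured.
wait_for_job() {
  local job_id=$1 state
  while :; do
    # Extract data.lifecycle-state from the job's JSON description.
    state=$(oci resource-manager job get --job-id "$job_id" \
              --query 'data."lifecycle-state"' --raw-output)
    case $state in
      SUCCEEDED)       echo "Job $job_id succeeded"; return 0 ;;
      FAILED|CANCELED) echo "Job $job_id ended in state $state" >&2; return 1 ;;
      *)               sleep 30 ;;  # still ACCEPTED or IN_PROGRESS
    esac
  done
}

# Example (placeholder OCID):
# wait_for_job ocid1.ormjob.oc1..example
```

The same helper works for both the Plan Job and the Apply Job OCIDs.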
  4. Verify the deployment as follows:
    1. SSH into the SCM instance:
      ssh -i <ssh Private Key> opc@<SCM Host IP>
    2. Verify the status of the SCM container:
      sudo podman ps
    3. Verify the status of cloudmanager container service:
      sudo systemctl status cloudmanager

      Ensure that the cloudmanager service is Active: active (running).

    4. Retrieve the provisioned environment details:
      curl --location --request GET 'https://<CM_Instance_IP>:<Port>/scm/api/v1.0/environment/<env_id>' \
            --header 'Content-Type: application/json' \
            --header 'Authorization: Basic <Base 64 encoded user name and api_key>'

      The SCM service is functioning correctly if the response includes the valid environment details.
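The Authorization header in the curl call above is the Base64 encoding of `<user name>:<api_key>`. A minimal sketch of building it, with placeholder credentials (SCM_USER and SCM_API_KEY are not real values):

```shell
# Sketch: build the Basic Authorization value for the environment API call.
# SCM_USER and SCM_API_KEY are placeholders; substitute your own credentials.
SCM_USER='scmadmin'
SCM_API_KEY='my-api-key'
AUTH_TOKEN=$(printf '%s:%s' "$SCM_USER" "$SCM_API_KEY" | base64)
echo "Authorization: Basic $AUTH_TOKEN"
```

Pass the result to curl as --header "Authorization: Basic $AUTH_TOKEN".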

  5. Migrate the new SCM features as follows:
    sudo podman exec -it cloudmanager bash
    cd /home/opc
    bash siebel-cloud-manager/scripts/cmapp/migration.sh

    Choose one of the options presented by the migration.sh script. To apply more than one option, run the script once for each required option.

  6. Restart the SCM container as follows:
    cd /home/opc/cm_app/{CM_RESOURCE_PREFIX}/
    bash start_cmserver.sh <SCM VERSION>