Upgrading Oracle Linux on the VM
To upgrade Oracle Linux from Oracle Linux 7 to Oracle Linux 8 on the VM:
- Create a dummy Oracle Resource Manager stack as follows:
- Sign in to OCI Console.
- Open the navigation menu and click Marketplace.
- Under Marketplace, click All Applications.
- Enter "Siebel" in the search bar.
- Click Siebel Cloud Manager (SCM).
- Select SCM version 25.4 or later.
- Select the compartment to create the stack in.
- Select the Oracle Standard Terms and Restrictions checkbox.
- Click Launch Stack.
- Enter the stack name, description, and tags (you can retain the default values) and click Next.
- Configure the variables for the infrastructure resources. Use the same values as the existing stack and click Next.
- Deselect the Run apply checkbox.
- Click Create.
- Create the execution plan for the stack resources and download the Terraform configuration as follows:
- Go to the stack created in step 1.
- On the Stack Details page, click Plan.
- Optionally, update the plan job name.
- Click Plan. The Plan job takes a few minutes to complete.
- Once it succeeds, click the plan job name under the Jobs table. The plan job page appears.
- Click Download Terraform Configuration to download the configuration ZIP file.
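If you prefer to script this download, the stack's Terraform configuration can also be fetched with the OCI CLI; this is a sketch assuming the get-stack-tf-config subcommand, whose output should match the ZIP file offered for download in the console:
oci resource-manager stack get-stack-tf-config --stack-id <Stack OCID> --file ./scm-config.zip
Here <Stack OCID> is the OCID of the dummy stack created in step 1, and ./scm-config.zip is an example output path.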
- Edit the existing stack either through OCI Console or through OCI CLI:
- Through OCI Console as follows:
- On the Stacks list page, click the existing stack that was used to deploy SCM. The stack page appears.
- Scroll down to the bottom left of the stack page, click the Variables tab under the Resources section, and make a note of all the variables and their values.
- Click Edit.
- Click Edit Stack. The Edit Stack page appears.
- Click Browse under the Terraform configuration source section.
- Browse and select the configuration ZIP file downloaded in Step 2.
- Click Open.
- Click Next.
- Click Browse under the OCI User private key section.
- Browse and upload the OCI private key file.
- Review all other variables for the infrastructure resources and ensure that their values match the values you noted from the Variables tab earlier in this procedure.
- Click Next.
- Do not select Run Apply.
- Click Save Changes.
- On the Stack Details page, click Plan. Wait for the Plan job to complete successfully.
- Verify the Plan job status in OCI Console. After the Plan job succeeds, review the Plan job logs to confirm that only one resource, the existing Oracle Linux 7 compute instance, is marked for destruction. Ensure that no other resources are marked for deletion.
- On the Stack Details page, click Apply. Wait for the Apply job to complete successfully.
- Through OCI CLI:
- Update the stack with the new Terraform configuration ZIP file as follows:
oci resource-manager stack update --stack-id <Stack OCID> --config-source <zipFilePath> --force
The variables in the example have the following values:
- <Stack OCID> is the OCID of the existing stack to update.
- <zipFilePath> is the path of the Terraform configuration ZIP file downloaded in the previous step.
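For example, with placeholder values for the stack OCID and the local ZIP path (neither is a real identifier):
oci resource-manager stack update --stack-id ocid1.ormstack.oc1..exampleuniqueID --config-source ./scm-config.zip --force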
- Create the Plan job as follows:
oci resource-manager job create-plan-job --stack-id <Stack OCID>
where <Stack OCID> is the OCID of the existing stack to update.
Wait for the Plan job to complete successfully.
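If you are scripting this step, you can capture the Plan job OCID from the response and poll its state. This is a sketch assuming the CLI's standard --query and --raw-output options and the data.lifecycle-state field of the job resource:
PLAN_JOB_OCID=$(oci resource-manager job create-plan-job --stack-id <Stack OCID> --query 'data.id' --raw-output)
oci resource-manager job get --job-id "$PLAN_JOB_OCID" --query 'data."lifecycle-state"' --raw-output
Re-run the second command until it reports SUCCEEDED.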
- Verify the Plan job status in OCI Console.
- Review the logs to confirm that only one resource, the Oracle Linux 7 instance, is marked for destruction.
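The Plan job logs can also be retrieved from the command line; a sketch assuming the get-job-logs subcommand of the Resource Manager CLI:
oci resource-manager job get-job-logs --job-id <Plan Job OCID>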
- Create the Apply job as follows:
oci resource-manager job create-apply-job --execution-plan-strategy FROM_PLAN_JOB_ID --stack-id <Stack OCID> --execution-plan-job-id <Plan Job OCID>
where <Plan Job OCID> is the OCID of the Plan job created in the previous step.
- Wait a few minutes for the Apply job to complete.
- Verify the status of the Apply job in OCI Console.
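To script the Apply step as well, the same pattern applies; the values below are placeholders:
APPLY_JOB_OCID=$(oci resource-manager job create-apply-job --execution-plan-strategy FROM_PLAN_JOB_ID --stack-id <Stack OCID> --execution-plan-job-id <Plan Job OCID> --query 'data.id' --raw-output)
oci resource-manager job get --job-id "$APPLY_JOB_OCID" --query 'data."lifecycle-state"' --raw-output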
Note: The Oracle Linux 7 VM instance is destroyed and a new Oracle Linux 8 VM instance is created. SCM is installed as a Podman container service on the new Oracle Linux 8 VM instance.
- Verify the deployment as follows:
- SSH into the SCM instance:
ssh -i <ssh Private Key> opc@<SCM Host IP>
- Verify the status of the SCM container:
sudo podman ps
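To narrow the output to the SCM container, you can filter by name; a sketch assuming the container is named cloudmanager, matching the service used in the next step:
sudo podman ps --filter name=cloudmanager --format '{{.Names}} {{.Status}}'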
- Verify the status of the cloudmanager container service:
sudo systemctl status cloudmanager
Ensure that the cloudmanager service is Active: active (running).
- Retrieve the provisioned environment details:
curl --location --request GET 'https://<CM_Instance_IP>:<Port>/scm/api/v1.0/environment/<env_id>' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic <Base 64 encoded user name and api_key>'
The SCM service is functioning correctly if the response includes valid environment details.
Note: If you are unable to retrieve the environment details, and the Apply job logs from the ORM stack indicate that a new File Storage System (FSS) is being created, see the troubleshooting section Handling Additional FSS created during OL8 Upgrade.
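The Basic authorization value is the Base64 encoding of the user name and API key joined by a colon, per standard HTTP Basic authentication; for example (the values are placeholders, not real credentials):
echo -n '<user name>:<api_key>' | base64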
- Migrate the new SCM features as follows:
sudo podman exec -it cloudmanager bash
cd /home/opc
bash siebel-cloud-manager/scripts/cmapp/migration.sh
Choose one of the options presented by the migration.sh script. Run the script multiple times to apply each of the required options.
- Restart the SCM container as follows:
cd /home/opc/cm_app/{CM_RESOURCE_PREFIX}/
bash start_cmserver.sh <SCM VERSION>
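For example, if you deployed the 25.4 image referenced earlier in this procedure (the version string below is illustrative; use the SCM version you deployed):
cd /home/opc/cm_app/{CM_RESOURCE_PREFIX}/
bash start_cmserver.sh 25.4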