Upgrading Oracle Linux on the VM
To upgrade Oracle Linux from Oracle Linux 7 to Oracle Linux 8 on the VM:
- Create a dummy Oracle Resource Manager stack as follows:
- Sign in to OCI Console.
- Open the navigation menu and click Marketplace.
- Under Marketplace, click All Applications.
- Enter "Siebel" in the search bar.
- Click Siebel Cloud Manager (SCM).
- Select SCM version 25.4 or later.
- Select the compartment to create the stack in.
- Select the Oracle Standard Terms and Restrictions checkbox.
- Click Launch Stack.
- Enter the stack name, description, and tags (you can retain the default values), then click Next.
- Configure the variables for the infrastructure resources this stack will create and click Next.
- Deselect the Run apply checkbox.
- Click Create.
- Create the execution plan for the stack resources and download the Terraform configuration as follows:
- Go to the stack created in step 1.
- On the Stack Details page, click Plan.
- Optionally, update the plan job name.
- Click Plan. The Plan Job takes a few minutes to complete.
- Once it succeeds, click the plan job name under the Jobs table. The plan job page appears.
- Click Download Terraform Configuration to download the configuration ZIP file.
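The same configuration ZIP can also be fetched from the OCI CLI; a sketch, assuming the CLI is configured and `<Stack OCID>` is the OCID of your stack:

```
oci resource-manager stack get-stack-tf-config --stack-id <Stack OCID> --file config.zip
```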
- Edit the existing stack either through OCI Console or through OCI CLI:
- Through OCI Console as follows:
- On the Stacks list page, click the existing stack that was used to deploy SCM. The stack page appears.
- Note the values of the resource prefix (resource_prefix), mount target IP (mount_target_ip) and file system export path (export_path) parameters.
- Click Edit.
- Click Edit Stack. The Edit Stack page appears.
- Click Browse under the Terraform configuration source section.
- Browse and select the configuration ZIP file downloaded in Step 2.
- Click Open.
- Click Next.
- Click Browse under the OCI User private key section.
- Browse and upload the OCI private key file.
- Review the other variables for the infrastructure resources and update as required. Note: Do not update the values of the resource prefix, mount target IP, and export path variables; reuse the values noted earlier so they remain the same after editing the stack.
- Click Next.
- Select Run Apply.
- Click Save Changes.
- Wait a few minutes for the Apply job to succeed.
- Through OCI CLI:
- Update the stack with the new Terraform configuration ZIP file as follows:
  oci resource-manager stack update --stack-id <Stack OCID> --config-source <zipFilePath> --force
  where:
  <Stack OCID> is the OCID of the existing stack to update.
  <zipFilePath> is the path of the Terraform configuration ZIP file downloaded in the previous step.
- Create the plan job as follows:
  oci resource-manager job create-plan-job --stack-id <Stack OCID>
  where:
  <Stack OCID> is the OCID of the existing stack to update.
  Wait for the plan job to complete successfully.
- Verify the Plan Job status in OCI Console.
- Review the logs to confirm that it shows the following message:
'module.compute.oci_core_instance.SiebelCM_Bastion must be replaced' and Plan: 1 to add, 0 to change, 1 to destroy.
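The same confirmation can be scripted against a saved copy of the plan-job log. A minimal sketch; `plan_job.log` is an assumed local file, fabricated here only to illustrate the check:

```shell
# Fabricated sample of the relevant plan-job log lines, for illustration only;
# in practice, download the real log from the plan job page.
cat > plan_job.log <<'EOF'
module.compute.oci_core_instance.SiebelCM_Bastion must be replaced
Plan: 1 to add, 0 to change, 1 to destroy.
EOF

# Confirm both expected messages are present before applying.
if grep -q 'SiebelCM_Bastion must be replaced' plan_job.log \
   && grep -q 'Plan: 1 to add, 0 to change, 1 to destroy' plan_job.log; then
  echo "plan looks as expected"
fi
```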
- Create the Apply job:
  oci resource-manager job create-apply-job --execution-plan-strategy FROM_PLAN_JOB_ID --stack-id <Stack OCID> --execution-plan-job-id <Plan Job OCID>
- Wait a few minutes for the Apply job to complete.
- Verify status of Apply Job in OCI Console.
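The wait can also be scripted instead of watched in the Console. A sketch of the polling loop; `get_job_state` is a stand-in for the real `oci resource-manager job get --job-id <Job OCID> --query 'data."lifecycle-state"' --raw-output` call, stubbed here so the loop shape is runnable:

```shell
# get_job_state is a stub standing in for the OCI CLI call above;
# replace its body with the real invocation in practice.
get_job_state() { echo "SUCCEEDED"; }

# Poll until the job leaves its in-flight states.
state="IN_PROGRESS"
while [ "$state" = "ACCEPTED" ] || [ "$state" = "IN_PROGRESS" ]; do
  state=$(get_job_state)
  sleep 1
done
echo "final state: $state"
```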
Note: The Oracle Linux 7 VM instance is destroyed and a new Oracle Linux 8 VM instance is created. SCM is installed as a Podman container service on the new Oracle Linux 8 VM instance.
- Verify the deployment as follows:
- SSH into the SCM instance:
  ssh -i <ssh Private Key> opc@<SCM Host IP>
- Verify the status of the SCM container:
  sudo podman ps
- Verify the status of the cloudmanager container service:
  sudo systemctl status cloudmanager
  Ensure that the cloudmanager service shows Active: active (running).
- Retrieve the provisioned environment details:
  curl --location --request GET 'https://<CM_Instance_IP>:<Port>/scm/api/v1.0/environment/<env_id>' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Basic <Base 64 encoded user name and api_key>'
The SCM service is functioning correctly if the response includes the valid environment details.
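The Basic token in the Authorization header is simply the Base64 encoding of `user_name:api_key`. A sketch of building it; `scmadmin` and `example-api-key` are placeholder credentials, not real values:

```shell
# Build the Basic auth value for the Authorization header.
# The credentials below are placeholders for illustration.
user='scmadmin'
api_key='example-api-key'
auth=$(printf '%s:%s' "$user" "$api_key" | base64)
echo "Authorization: Basic $auth"
```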
- SSH into the SCM instance.
- Migrate the new SCM features as follows:
  sudo podman exec -it cloudmanager bash
  cd /home/opc
  bash siebel-cloud-manager/scripts/cmapp/migration.sh
  Choose one of the options presented by the migration.sh script. Run the script multiple times for the required options.
- Restart the SCM container as follows:
  cd /home/opc/cm_app/{CM_RESOURCE_PREFIX}/
  bash start_cmserver.sh <SCM VERSION>
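After the restart, the service state can be spot-checked again; a one-line sketch, assuming the cloudmanager systemd unit from the verification step earlier:

```
sudo systemctl is-active cloudmanager
```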