Installing Siebel Monthly Update in a Siebel CRM on OKE Environment Deployed by SCM
You can use these steps to install the latest monthly update in a Siebel CRM on OKE environment deployed by Siebel Cloud Manager (SCM). These steps do not include repository upgrade steps, which are optional and identical to those for on-premises Siebel CRM deployments.
Note: Without sourcing the virtual environment and the k8sprofile, commands such as "kubectl get pods" may throw an error. Source them by running the following commands:
docker exec -it cloudmanager bash
source /home/opc/venv/bin/activate
source /home/opc/siebel/<env_id>/k8sprofile
- Back up the database
The first step of the upgrade is to back up the database. Preferably, take a full backup.
- Back up the SCM-provided files for the Siebel environment
Back up the files in the current environment so that a known-good copy of the required files is available in case the upgrade runs into issues or you need to roll back to the previous version.
Run the following commands in the given sequence to create the backup:
ssh -i <private_key> opc@<cm_instance>
mkdir /home/opc/cm_app/{CM_RESOURCE_PREFIX}/siebel/<env_id>/<backup_dir_name>
docker exec -it cloudmanager bash
cd /home/opc/siebel/<env_id>/<backup_dir_name>
cp -R /home/opc/siebel/<env_id>/<env_namespace>-siebfs0/<ENV_NAMESPACE>/CGW/ /home/opc/siebel/<env_id>/<env_namespace>-siebfs0/<ENV_NAMESPACE>/SES/ /home/opc/siebel/<env_id>/<env_namespace>-siebfs0/<ENV_NAMESPACE>/edge/ /home/opc/siebel/<env_id>/<env_namespace>-siebfs0/<ENV_NAMESPACE>/quantum/ /home/opc/siebel/<env_id>/<backup_dir_name>
exit
Note: The following legend describes the variables used in the above commands:
- <private_key>: The key used in SCM stack creation.
- <cm_instance>: The SCM instance IP address.
- <backup_dir_name>: The name of the backup directory.
- <env_id>: The six-character environment ID.
- <env_namespace>: The name of the environment given in the payload.
- <ENV_NAMESPACE>: The name of the environment given in the payload, in upper case.
- edge: The Siebel CRM server name.
- quantum: The AI server name.
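The paths assembled from this legend can be sketched as a small shell script. All values below (environment ID abc123, namespace testenv, backup directory name backup_pre_update) are placeholders for illustration, not values from your environment; the loop only prints the copy commands as a dry run:

```shell
# Placeholder values -- substitute the ones from your own environment.
env_id="abc123"                 # six-character environment ID
env_namespace="testenv"         # environment name from the payload
ENV_NAMESPACE="TESTENV"         # same name, upper case
backup_dir_name="backup_pre_update"

fs_root="/home/opc/siebel/${env_id}/${env_namespace}-siebfs0/${ENV_NAMESPACE}"
backup_dir="/home/opc/siebel/${env_id}/${backup_dir_name}"

# Print the copy command for each component to be backed up (dry run);
# drop the echo to actually copy.
for comp in CGW SES edge quantum; do
  echo cp -R "${fs_root}/${comp}/" "${backup_dir}"
done
```

Dropping the `echo` turns the dry run into the actual backup copy shown in the step above.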
- Upgrade the SCM instance
The SCM instance must be upgraded so that its version matches the target Siebel CRM version. For example, if the target Siebel CRM version is 23.5, first upgrade the SCM instance to CM_23.5.0.
Run the following command to upgrade the SCM instance:
bash start_cmserver.sh <CM_IMAGE_VERSION> # Example: bash start_cmserver.sh CM_23.5.0
- Build and push the new Siebel custom image for the target version
Run the following steps to pull the target version base image from the Oracle Cloud Container Registry, re-tag it, and push it to the registry specific to your environment.
- Pull the target Siebel CRM version base image from the Oracle Cloud Container Registry (for example, from the PHX region):
export target_version=<target_siebel_version>   # Example: export target_version="23.5"
export source_base_image="phx.ocir.io/siebeldev/cm/siebel:$target_version-full"
docker pull $source_base_image
- Re-tag the pulled image for your environment's registry:
export target_base_image="<registry_url>/<registry_namespace>/<env_namespace>/siebel:$target_version-full"
# Example: export target_base_image="hyd.ocir.io/siebeldev/testenv/siebel:$target_version-full"
docker tag $source_base_image $target_base_image
- Log in to the Docker registry and push the target_base_image to your registry URL:
docker login <user_region>.ocir.io   # Example: docker login hyd.ocir.io
docker push $target_base_image
- Exec into the SCM container and sync the local Helm charts Git repository with the remote repository to pick up the custom artifact changes:
docker exec -it cloudmanager bash
# Go to the artifacts folder and reset the Git repository
cd /home/opc/siebel/<ENV_ID>/<Helm charts repository name>/
git clean -d -x -f
git pull
exit
- Build a new custom image for the Siebel web artifacts and push it to the customer's registry:
cd /home/opc/siebel/<ENV_ID>/<Helm charts repository name>/
cd siebel-artifacts/build/
# Build a new custom image and push it to the customer registry
export target_image="<registry_url>/<registry_namespace>/<env_namespace>/siebel:$target_version.CUSTOM.1"
# Example: export target_image="hyd.ocir.io/siebeldev/testenv/siebel:$target_version.CUSTOM.1"
docker build --build-arg BASE_IMAGE=${target_base_image} -t ${target_image} ./
docker push $target_image
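The image naming convention used across the pull, re-tag, and build steps can be summarized in one place. The sketch below only assembles the names; the registry URL, namespace, and environment name are example values, not values from your tenancy:

```shell
# Example values -- replace with your registry details.
registry_url="hyd.ocir.io"
registry_namespace="siebeldev"
env_namespace="testenv"
target_version="23.5"

# Base image pulled from the Oracle registry and re-tagged for your registry:
target_base_image="${registry_url}/${registry_namespace}/${env_namespace}/siebel:${target_version}-full"
# Custom image built on top of the base image:
target_image="${registry_url}/${registry_namespace}/${env_namespace}/siebel:${target_version}.CUSTOM.1"

echo "base:   ${target_base_image}"
echo "custom: ${target_image}"
```

The `-full` suffix marks the vendor base image, while `.CUSTOM.1` marks the custom build; incrementing that trailing number distinguishes successive custom builds of the same target version.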
- Tagging Git repositories before moving to the latest Siebel CRM version
- Create a tag in the SCM Git repository:
docker exec -it cloudmanager bash
# Go to the Cloud Manager Git repository
cd /home/opc/siebel/<ENV_ID>/<Cloud manager repository name>/
git pull
git tag <Tag_Name>   # Example: git tag 23.8
git push origin --tags
exit
In the example, <Tag_Name> can be any marker that identifies the source Siebel version changes.
Similarly, create a tag in the Helm charts Git repository:
docker exec -it cloudmanager bash
# Go to the Helm charts Git repository
cd /home/opc/siebel/<ENV_ID>/<Helm charts repository name>/
git pull
git tag <Tag_Name>   # Example: git tag 23.8
git push origin --tags
exit
where <Tag_Name> can be any marker that identifies the source Siebel version changes.
- Update the SCM Git repository files with the newly built target Siebel CRM image
- Update the base_image value in the <git_url>/root/<Cloud manager repository name>/-/blob/master/flux-crm/apps/base/siebel/siebel-artifacts.yaml file in the SCM Git repository as follows:
base_image: <region>.ocir.io/siebeldev/<env_namespace>/siebel:$target_version-full
Example: base_image: lhr.ocir.io/siebeldev/testenv/siebel:23.5-full
- Update the siebel-artifacts/Chart.yaml file in the Helm charts Git repository as follows:
- Increment the version (Example: version: 0.1.1)
- Update the appVersion to the new Siebel CRM version (Example: appVersion: "23.5")
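Taken together, the two file edits might look like the following fragments. The paths, the prior version 0.1.0, and the registry values are examples only; use the values from your own repositories:

```yaml
# flux-crm/apps/base/siebel/siebel-artifacts.yaml (SCM Git repository)
base_image: lhr.ocir.io/siebeldev/testenv/siebel:23.5-full

# siebel-artifacts/Chart.yaml (Helm charts Git repository)
version: 0.1.1        # incremented from the previous value, e.g. 0.1.0
appVersion: "23.5"    # new Siebel CRM version
```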
- Watch for the successful completion of the postinstalldb Kubernetes job
For more information, see Reviewing the PostInstallDBSetup Execution Status.
- The new image updates will trigger the postinstalldb update through the flux-crm sync-up.
- Wait for the Kubernetes job to complete.
- Manually verify the postinstalldb job reports and exit code from the logs.
- In case of errors, take corrective actions and rerun the postinstalldb Kubernetes job by updating the version in the Chart.yaml file as required for an incremental run. For more information, see Making Incremental Changes.
docker exec -it cloudmanager bash
source /home/opc/siebel/<env_id>/k8sprofile
kubectl -n <env_namespace> get pods
NAME                  READY   STATUS      RESTARTS   AGE
postinstalldb-*****   0/1     Completed   0          40h
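The completion check can also be scripted by parsing the STATUS column of the pod listing. The sketch below works on a sample output line (the pod name is a placeholder); in a live cluster you would capture the line with the kubectl command shown in the comment:

```shell
# In a live cluster, capture the line with:
#   line="$(kubectl -n <env_namespace> get pods | grep '^postinstalldb-')"
# Sample output line used here for illustration:
line="postinstalldb-x2x9q   0/1   Completed   0   40h"

# STATUS is the third whitespace-separated column of `kubectl get pods` output.
status="$(echo "${line}" | awk '{print $3}')"
if [ "${status}" = "Completed" ]; then
  echo "postinstalldb job finished"
else
  echo "postinstalldb job not complete (status: ${status})" >&2
fi
```

Note that a "Completed" pod status is not a substitute for the manual log check above; the job's own reports and exit code should still be verified.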
- Configuration instructions specific to a release
- For any configuration instructions specific to a release, refer to the Siebel Upgrade Guide and the Siebel Release Notes.
- Migrate the persistent volume content. Refer to the Deploying Siebel CRM Containers Guide.
- Upgrading the repository
If any new features require a repository upgrade, then upgrade the repository. Refer to the Using Siebel Tools Guide.
- Troubleshooting
- During any of the above steps, while the new Siebel image is rolled out and flux syncs up, verify the Helm release deployment status.
- If a HelmRelease is in a failed state, a rollback is required; after rolling back, increment the version in Chart.yaml for the Helm upgrade.
To verify the Helm release status (the READY column should show "True" for all Helm releases):
bash-4.2$ kubectl get hr -n <env_namespace>
NAME                 AGE     READY   STATUS
kube-state-metrics   4h56m   True    Release reconciliation succeeded
metacontroller       4h57m   True    Release reconciliation succeeded
nginx                4h58m   True    Release reconciliation succeeded
node-exporter        4h56m   True    Release reconciliation succeeded
prometheus           4h56m   True    Release reconciliation succeeded
siebel               4h56m   True    Release reconciliation succeeded
siebel-artifacts     4h56m   True    Release reconciliation succeeded
siebel-config        4h56m   True    Release reconciliation succeeded
siebel-gateway       4h56m   True    Release reconciliation succeeded
To verify the deployment status of the Helm charts (the STATUS column should show "deployed" for all charts):
bash-4.2$ helm ls -n <env_namespace>
NAME                     NAMESPACE   REVISION   UPDATED                                   STATUS     CHART                      APP VERSION
kube-state-metrics       test133     1          2023-05-05 05:57:26.399381012 +0000 UTC   deployed   kube-state-metrics-0.1.0   2.8.2
node-exporter            test133     1          2023-05-05 05:57:26.486354004 +0000 UTC   deployed   node-exporter-0.1.0        1.5.0
prometheus               test133     1          2023-05-05 05:57:26.6308481 +0000 UTC     deployed   prometheus-0.1.0           2.43.0
siebel                   test133     1          2023-05-05 05:57:26.729612653 +0000 UTC   deployed   siebel-0.1.0               23.3
siebel-artifacts         test133     1          2023-05-05 05:57:28.295972875 +0000 UTC   deployed   siebel-artifacts-0.1.0     23.3
siebel-config            test133     1          2023-05-05 05:57:29.249531247 +0000 UTC   deployed   siebel-config-0.1.0        23.3
siebel-gateway           test133     1          2023-05-05 05:57:32.912426931 +0000 UTC   deployed   siebel-gateway-0.1.0       23.3
test133-ingress-nginx    test133     1          2023-05-05 05:55:27.333118701 +0000 UTC   deployed   ingress-nginx-4.1.0        1.2.0
test133-metacontroller   test133     1          2023-05-05 05:56:26.313921575 +0000 UTC   deployed   metacontroller-v2.0.12     v2.0.12
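The chart-status check can be automated by counting lines whose STATUS column is not "deployed". The sketch below parses captured sample text standing in for live output (the second sample line is deliberately marked failed to show the detection); in a live cluster you would capture the listing with the helm command shown in the comment:

```shell
# In a live cluster, capture the listing (without the header row) with:
#   out="$(helm ls -n <env_namespace> | tail -n +2)"
# Sample two-line listing used here for illustration:
out="siebel            test133  1  2023-05-05 05:57:26 +0000 UTC  deployed  siebel-0.1.0  23.3
siebel-artifacts  test133  1  2023-05-05 05:57:28 +0000 UTC  failed  siebel-artifacts-0.1.0  23.3"

# Count lines that do not contain the word "deployed" as a status.
bad="$(echo "${out}" | grep -c -v ' deployed ')"
if [ "${bad}" -eq 0 ]; then
  echo "all charts deployed"
else
  echo "${bad} chart(s) not deployed" >&2
fi
```

Any nonzero count points at the charts that need the rollback procedure described next.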
Rollback steps for Helm charts
If any failures are noticed in the above two commands, find the stable Helm chart revision and roll back the Helm charts by running the following commands:
- To find the previous stable REVISION deployed:
bash-4.2$ helm history siebel -n test133
REVISION   UPDATED                   STATUS     CHART          APP VERSION   DESCRIPTION
1          Fri May 5 05:57:26 2023   deployed   siebel-0.1.0   23.3          Install complete
- Roll back to the previous stable REVISION identified by the previous command (helm history):
bash-4.2$ helm rollback siebel -n test133 1
W0505 10:56:23.450209    3296 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "persist-folders", "sai" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "persist-folders", "sai" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "persist-folders", "sai" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "persist-folders", "sai" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
W0505 10:56:23.511704    3296 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "persist-fix", "ses" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "persist-fix", "ses" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "persist-fix", "ses" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "persist-fix", "ses" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Rollback was a success! Happy Helming!
- Verify the application URLs once the environment comes up.