Deploying Siebel CRM using SCM

This topic describes the steps to deploy Siebel CRM on a Kubernetes cluster on premises or in the cloud using Bring Your Own Resource (BYOR) through SCM.

This section includes the following information:

  • Before Deploying Siebel CRM on a Kubernetes Cluster
  • Deploying Siebel CRM using SCM on a Kubernetes Cluster

Before Deploying Siebel CRM on a Kubernetes Cluster

Before you deploy Siebel CRM on a Kubernetes cluster on premises or in the cloud using BYOR through SCM, you must complete the following tasks:

  1. Push the SCM utilities and Siebel CRM image from the Oracle registry to your container registry using the mirror API as follows:
    1. Fetch the administrator user credentials that will be used to access the SCM APIs from the api_creds.ini file in one of the following locations:
      • The /home/opc/config directory, or
      • The NFS share directory <nfsServer>:/<nfsPath>/<namespace>/config.

      where:

      • <nfsServer> is the IP address or name of the NFS share server.
      • <nfsPath> is the NFS share path.
      • <namespace> is the namespace in which Siebel CRM is deployed.
      Note: All SCM APIs use basic authentication, so the administrator user credentials are required to authenticate each request.
    2. Use the mirror API to push the SCM utilities and Siebel CRM image to the container registry (a sample request sketch follows this list). For more information on how to use the mirror API, see Mirroring Siebel Base Container Images.
  2. Ensure that the following resources required for Siebel CRM deployment are available:
    1. Siebel CRM file system: You can create an NFS share or use the NFS share that you created as part of the prerequisites.
    2. Database: Make a note of the wallet path referenced inside the SCM pod and the TNS connect string alias value. These details are used in the payload for deploying Siebel CRM on a Kubernetes cluster. If you use TNS without TLS (a sample tnsnames.ora sketch follows this list):
      1. Create a tnsnames.ora file with the database details in a directory named wallet.
      2. Copy the wallet directory to the SCM NFS share config directory. For example:
        <nfsServer>:/<nfsPath>/<namespace>/config/wallet
    3. Container registry: Make a note of the following container registry details to which the SCM utilities and Siebel CRM image were mirrored:
      • The registry URL
      • The registry user name and password
      • The registry prefix
    4. Kubernetes cluster: You can deploy Siebel CRM on the same cluster that you used to deploy SCM. Copy the kubeconfig file of the cluster to the pod or make it accessible to the pod, and make a note of this path; it is used in the payload for Siebel CRM deployment. For example:
      <nfsServer>:/<nfsPath>/<namespace>/config/kubeconfig
    5. GitLab instance: Make a note of the GitLab instance details (IP address or hostname, user name, access token, and root certificate). Copy the certificate to the pod or make it accessible to the pod; this path is used in the payload for Siebel CRM deployment. For example:
      <nfsServer>:/<nfsPath>/<namespace>/config/rootCA.crt
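
For example, the following sketch shows what a mirror API request might look like. The /scm/api/v1.0/mirror path and the payload fields shown here are illustrative assumptions modeled on the environment API pattern used later in this topic; see Mirroring Siebel Base Container Images for the exact endpoint and payload.

      # Illustrative sketch only: the mirror endpoint path and payload fields
      # are assumptions; see Mirroring Siebel Base Container Images for the
      # exact API. SCM APIs use basic authentication, so pass the
      # administrator credentials from api_creds.ini with -u.
      curl -k -u <adminUser>:<adminPassword> \
        -X POST "https://<nodeIPAddress>:<nodePortNumber>/scm/api/v1.0/mirror" \
        -H "Content-Type: application/json" \
        -d '{
              "registry_url": "<container_registry_url>",
              "registry_user": "<container_userName>",
              "registry_password": "<container_registry_userPassword>"
            }'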
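
For reference, a tnsnames.ora entry for TNS without TLS might look like the following sketch. The alias SIEBELDB, host <dbHost>, and service <dbServiceName> are illustrative assumptions; substitute your own database details. The alias you define here is the value you later pass as tns_connection_name in the deployment payload.

      SIEBELDB =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = <dbHost>)(PORT = 1521))
          (CONNECT_DATA = (SERVICE_NAME = <dbServiceName>))
        )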

Deploying Siebel CRM using SCM on a Kubernetes Cluster

After you have performed all the prerequisite tasks for Siebel CRM deployment and ensured that all resources are available, you can use SCM to deploy Siebel CRM on a Kubernetes cluster. You must prepare a suitable payload and then execute this payload on SCM.

To deploy Siebel CRM on a Kubernetes cluster:

  1. Prepare the payload for deployment.
    Note: The following example is a sample payload for creating a Greenfield Siebel CRM deployment on an OCNE cluster with observability. Update the parameters according to your local configuration before you submit the payload.
    {
      "name": "demo",
      "siebel": {
        "registry_url": "<container_registry_url>",
        "registry_user": "<container_userName>",
        "registry_password": "<container_registry_userPassword>",
        "registry_prefix": "<container_registry_prefix>",
        "database_type": "Vanilla",
        "industry": "Telecommunications"
      },
      "infrastructure": {
        "gitlab_url": "<gitlab_url>",
        "gitlab_accesstoken": "<gitlab_access_token>",
        "gitlab_user": "root",
        "gitlab_selfsigned_cacert": "/home/opc/config/rootCA.crt",
        "kubernetes": {
          "kubernetes_type": "BYO_OCNE",
          "byo_ocne": {
            "kubeconfig_path": "/home/opc/config/kubeconfig"
          }
        },
        "ingress_controller": {
          "ingress_service_type": "NodePort"
        },
        "mounttarget_exports": {
          "siebfs_mt_export_paths": [
            {
              "mount_target_private_ip": "<nfsServer>",
              "export_path": "<nfsPath>"
            }
          ],
          "migration_package_mt_export_path": {
            "mount_target_private_ip": "<nfsServer>",
            "export_path": "<nfsPath>"
          }
        }
      },
      "database": {
        "db_type": "BYOD",
        "byod": {
          "wallet_path": "/home/opc/config/wallet",
          "tns_connection_name": "<TNS_connect_string>"
        },
        "auth_info": {
          "table_owner_password": "<tableOwnerUserPassword>",
          "table_owner_user": "<tableOwnerUser>",
          "default_user_password": "<plainTextPWD>",
          "anonymous_user_password": "<plainTextPWD>",
          "siebel_admin_password": "<plainTextPWD>",
          "siebel_admin_username": "<adminUser>"
        }
      },
      "observability": {
        "siebel_monitoring": true,
        "siebel_logging": true,
        "enable_oracle_opensearch": true,
        "prometheus": {
          "storage_class_name": "local-storage",
          "local_storage_info": {
            "local_storage": "/mnt/test",
            "kubernetes_node_hostname": "<hostname>"
          }
        },
        "oracle_opensearch": {
          "storage_class_name": "local-storage",
          "local_storage_info": [
            {
              "local_storage": "/mnt/test1",
              "kubernetes_node_hostname": "<hostName>"
            },
            {
              "local_storage": "/mnt/test2",
              "kubernetes_node_hostname": "<hostName>"
            },
            {
              "local_storage": "/mnt/test3",
              "kubernetes_node_hostname": "<hostName>"
            }
          ]
        },
        "monitoring_mt_export_path": {
          "mount_target_private_ip": "<mountTargetIPAddress>",
          "export_path": "/olcne-migration"
        }
      }
    }
    Note: For more information on the parameters in the deployment payload, see Parameters in Payload Content.
    Note: You can create a similar payload for deploying Siebel CRM on any other CNCF-certified Kubernetes cluster by setting the kubernetes_type parameter to BYO_OTHER and updating the parameter values accordingly.
  2. Submit the payload using the environment API through a POST request as follows (a sample curl request appears after these steps):
    POST https://<nodeIPAddress>:<nodePortNumber>/scm/api/v1.0/environment

    where:

    • <nodeIPAddress> is any active node IP address.
    • <nodePortNumber> is the assigned node port number.
  3. Check the status of the workflow using the GET API; the self link for the workflow is available in the POST request response (a sample status request appears after these steps). You must ensure that:
    • The environment status of the Siebel CRM deployment using SCM is "completed".
    • The status of all stages is "passed".
    • The Siebel CRM URLs are available in the GET API response once the workflow is complete.

    For information on troubleshooting, see Troubleshooting Siebel CRM Deployment.
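
    A sample submission using curl, assuming the payload from step 1 is saved in a local file named payload.json (an illustrative file name) and that the administrator credentials from api_creds.ini are used for basic authentication:

      # Submit the deployment payload to the environment API.
      # payload.json is an illustrative file name; -k is shown only for
      # self-signed certificates.
      curl -k -u <adminUser>:<adminPassword> \
        -X POST "https://<nodeIPAddress>:<nodePortNumber>/scm/api/v1.0/environment" \
        -H "Content-Type: application/json" \
        -d @payload.json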
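
    To check the workflow status (step 3), issue a GET request against the self link returned in the POST response. The /environment/<env_id> form below is an assumption about the shape of that link; always use the self link exactly as returned:

      # Poll until the environment status is "completed" and all stages
      # are "passed". The <env_id> segment is illustrative.
      curl -k -u <adminUser>:<adminPassword> \
        "https://<nodeIPAddress>:<nodePortNumber>/scm/api/v1.0/environment/<env_id>"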

    Note: When lifting an existing Siebel CRM environment and deploying it on premises, you can now specify the NFS server endpoint that holds all the Siebel CRM artifacts using the nfs section within the siebel block. In this section, you specify NFS server details such as the NFS server endpoint, the NFS share directory path, and the Persistent Volume Claim (PVC) size.

    To use the nfs section in the payload:

    1. Place all Siebel CRM artifacts lifted by the Lift and Shift utility in an NFS share directory that is accessible to the cluster (a quick verification sketch follows these steps).
    2. Include the nfs section in the siebel block of the Siebel CRM deployment payload as follows:
      "siebel": {
            "registry_url": "<container registry url>",
            "registry_user": "<container user name>",
            "registry_password": "<container registry user password>",
            "registry_prefix": "<container registry prefix>",
            "nfs": {
                  "server": "<nfsServer>",
                  "path": "<nfsServerPath>",
                  "storage": "<storage>"
          }
      }

      where:

      • <nfsServer> is the NFS server endpoint.
      • <nfsServerPath> is the NFS server directory path that holds the lifted Siebel CRM artifacts.
      • <storage> is an optional parameter that specifies the PVC size of the intermediate artifactory server. The default size is 100 GB.

      For more information, see Downloading and Running the Siebel Lift Utility and Parameters in Payload Content.
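
      As a quick sanity check before submitting the payload, you can confirm from any cluster node that the NFS export is reachable and that the lifted artifacts are present. The mount point /mnt/siebel_artifacts is an illustrative choice:

        # Verify the export and inspect the lifted Siebel CRM artifacts.
        # /mnt/siebel_artifacts is an illustrative mount point.
        showmount -e <nfsServer>
        sudo mkdir -p /mnt/siebel_artifacts
        sudo mount -t nfs <nfsServer>:<nfsServerPath> /mnt/siebel_artifacts
        ls /mnt/siebel_artifacts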