Deploying Siebel CRM using SCM

This topic describes the steps to deploy Siebel CRM on a Kubernetes cluster on premises, in the cloud, or in your data center on OC3, using BYOR through SCM.

This section includes the following topics:

• Before Deploying Siebel CRM on a Kubernetes Cluster
• Deploying Siebel CRM using SCM on a Kubernetes Cluster
• Using a Shared Network File System During Lift-And-Shift

Before Deploying Siebel CRM on a Kubernetes Cluster

Before you deploy Siebel CRM on a Kubernetes cluster, you must complete the following tasks:

  1. Push the SCM utilities and Siebel CRM image from the Oracle registry to your container registry using the mirror API as follows:
    1. Fetch the administrator user credentials that will be used to access the SCM APIs from the api_creds.ini file in one of the following locations:
      • The /home/opc/config directory or
      • The NFS share <nfsServer>:/<nfsPath>/<namespace>/config directory.

      The variables in the example have the following values:

      • <nfsServer> is the IP address or name of the NFS share server.
      • <nfsPath> is the NFS share path.
      • <namespace> is the namespace in which Siebel CRM is deployed.
      Note: All SCM APIs use basic authentication, so the administrator user credentials are required to authenticate each request.
    2. Use the mirror API to push the SCM utilities and the Siebel CRM image to the container registry, as illustrated in the sketch after this list. For more information about the mirror API, see Mirroring Siebel Base Container Images.
  2. Ensure that the following resources, which are required for the Siebel CRM deployment, are available:
    1. Siebel CRM file system: For the Siebel CRM file system, you can create an NFS share or use the NFS share that you created as part of the prerequisites.
    2. Database: Make a note of the wallet path referenced inside the SCM pod and the TNS connect string alias. These details are used in the payload for deploying Siebel CRM on a Kubernetes cluster. If you are using TNS without TLS:
      1. Create a tnsnames.ora file with the database details in a directory named wallet.
      2. Copy the wallet directory to the SCM NFS share config directory. For example,
        <nfsServer>:/<nfsPath>/<namespace>/config/wallet
    3. Container registry: Make a note of the following details of the container registry to which the SCM utilities and the Siebel CRM image were mirrored:
      • The registry URL
      • The registry user name and password
      • The registry prefix
    4. Kubernetes cluster: You can use the same cluster that you used to deploy SCM. Copy the Kubeconfig file of the cluster to the pod or make it accessible to the pod. Make a note of this path; it is used in the payload for the Siebel CRM deployment. For example:
      <nfsServer>:/<nfsPath>/<namespace>/config/kubeconfig
    5. Git instance: Make a note of the Git instance details (IP address or hostname, user name, access token, and root certificate). Copy the certificate to the pod or make it accessible to the pod; this path is used in the payload for the Siebel CRM deployment. For example:
      <nfsServer>:/<nfsPath>/<namespace>/config/rootCA.crt
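
For example, a request to the mirror API, submitted with the administrator credentials over basic authentication, might look like the following sketch. The endpoint path and the body fields are illustrative assumptions, not the exact request format; see Mirroring Siebel Base Container Images for the actual parameters.

POST https://<IPAddress>:<PortNumber>/scm/api/v1.0/mirror
Authorization: Basic <base64(adminUser:adminPassword)>
Content-Type: application/json

{
      "registry_url": "<container_registry_url>",
      "registry_user": "<container_userName>",
      "registry_password": "<container_registry_userPassword>"
}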

Deploying Siebel CRM using SCM on a Kubernetes Cluster

After you have performed all the prerequisite tasks for Siebel CRM deployment and ensured that all resources are available, you can use SCM to deploy Siebel CRM on a Kubernetes cluster. You must prepare a suitable payload and then execute this payload on SCM.

To deploy Siebel CRM on a Kubernetes cluster:

  1. Prepare the payload for deployment as follows:
    • Sample payload for creating a Greenfield Siebel CRM deployment on an OCNE cluster with observability:
      Note: You must update the parameters according to your local configuration and prepare the payload.
      {
            "name": "demo",
            "siebel": {
                  "registry_url": "<container_registry_url>",
                  "registry_user": "<container_userName>",
                  "registry_password": "<container_registry_userPassword>",
                  "registry_prefix": "<container_registry_prefix>",
                  "database_type": "Vanilla",
                  "industry": "Telecommunications"
            },
            "infrastructure": {
                  "git": {
                        "git_type": "gitlab",
                        "gitlab": {
                              "git_url": "https://<IP address>",
                              "git_accesstoken": "<gitlab_token>",
                              "git_user": "root",
                              "git_selfsigned_cacert": "/home/opc/certs/rootCA.crt"
                        }
                  },
                  "kubernetes": {
                        "kubernetes_type": "BYO_OCNE",
                        "byo_ocne": {
                              "kubeconfig_path": "/home/opc/config/kubeconfig"
                        }
                  },
                  "ingress_controller": {
                        "ingress_service_type": "NodePort"
                  },
                  "mounttarget_exports": {
                        "siebfs_mt_export_paths": [
                              {
                                    "mount_target_private_ip": "<nfsServer>",
                                    "export_path": "<nfsPath>"
                              }
                        ],
                        "migration_package_mt_export_path": {
                              "mount_target_private_ip": "<nfsServer>",
                              "export_path": "<nfsPath>"
                        }
                  }
            },
            "database": {
                  "db_type": "BYOD",
                  "byod": {
                        "wallet_path": "/home/opc/config/wallet",
                        "tns_connection_name": "<TNS_connect_string>"
                  },
                  "auth_info": {
                        "table_owner_password": "<tableOwnerUserPassword>",
                        "table_owner_user": "<tableOwnerUser>",
                        "default_user_password": "<plainTextPWD>",
                        "anonymous_user_password": "<plainTextPWD>",
                        "siebel_admin_password": "<plainTextPWD>",
                        "siebel_admin_username": "<adminUser>"
                  }
            },
            "observability": {
                  "siebel_monitoring": true,
                  "siebel_logging": true,
                  "enable_oracle_opensearch": true,
                  "prometheus": {
                        "storage_class_name": "local-storage",
                        "local_storage_info": {
                              "local_storage": "/mnt/test",
                              "kubernetes_node_hostname": "<hostName>"
                        }
                  },
                  "oracle_opensearch": {
                        "storage_class_name": "local-storage",
                        "local_storage_info": [
                              {
                                    "local_storage": "/mnt/test1",
                                    "kubernetes_node_hostname": "<hostName>"
                              },
                              {
                                    "local_storage": "/mnt/test2",
                                    "kubernetes_node_hostname": "<hostName>"
                              },
                              {
                                    "local_storage": "/mnt/test3",
                                    "kubernetes_node_hostname": "<hostName>"
                              }
                        ]
                  },
                  "monitoring_mt_export_path": {
                        "mount_target_private_ip": "<mountTargetIPAddress>",
                        "export_path": "/olcne-migration"
                  }
            }
      }
      Note: You can create a similar payload for deploying Siebel CRM on any other CNCF-certified Kubernetes cluster by setting the kubernetes_type parameter to BYO_OTHER and updating the parameter values accordingly.
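      A minimal sketch of the kubernetes block for that case follows. The byo_other structure shown here is an assumption modeled on the byo_ocne block above; verify the exact parameter names against Parameters in Payload Content.
      "kubernetes": {
            "kubernetes_type": "BYO_OTHER",
            "byo_other": {
                  "kubeconfig_path": "/home/opc/config/kubeconfig"
            }
      }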
    • Sample payload for creating a Greenfield Siebel CRM deployment on a Kubernetes cluster set up in OC3 with observability:
      Note: You must update the parameters according to your local configuration and prepare the payload.
      {
            "name": "demo",
            "siebel": {
                  "registry_url": "<container_registry_url>",
                  "registry_user": "<container_userName>",
                  "registry_password": "<container_registry_userPassword>",
                  "registry_prefix": "<container_registry_prefix>",
                  "database_type": "Vanilla",
                  "industry": "Telecommunications"
            },
            "infrastructure": {
                  "git": {
                        "git_type": "gitlab",
                        "gitlab": {
                              "git_url": "https://<IP address>",
                              "git_accesstoken": "<gitlab_token>",
                              "git_user": "root",
                              "git_selfsigned_cacert": "/home/opc/config/rootCA.crt"
                        }
                  },
                  "kubernetes": {
                        "kubernetes_type": "BYO_OKE",
                        "byo_oke": {
                              "oke_cluster_id": "ocid1.xxx",
                              "oke_endpoint": "PUBLIC"
                        }
                  },
                  "ingress_controller": {
                        "ingress_service_type": "loadbalancer",
                        "ingress_controller_service_annotations": {
                              "oci.oraclecloud.com/load-balancer-type": "lb",
                              "service.beta.kubernetes.io/oci-load-balancer-internal": "false",
                              "service.beta.kubernetes.io/oci-load-balancer-shape": "flexible",
                              "service.beta.kubernetes.io/oci-load-balancer-shape-flex-min": "11",
                              "service.beta.kubernetes.io/oci-load-balancer-shape-flex-max": "105",
                              "service.beta.kubernetes.io/oci-load-balancer-subnet1": "ocid1.subnet.oc1.amaaaaaa2x5pucianjvte3dymz2",
                              "service.beta.kubernetes.io/oci-load-balancer-tls-secret": "lb-tls-certificate",
                              "service.beta.kubernetes.io/oci-load-balancer-ssl-ports": "443",
                              "oci.oraclecloud.com/oci-load-balancer-listener-ssl-config": "{\"CipherSuiteName\":\"oci-default-http2-tls-12-13-ssl-cipher-suite-v1\",\"Protocols\":[\"TLSv1.2\",\"TLSv1.3\"]}"
                        }
                  },
                  "mounttarget_exports": {
                        "siebfs_mt_export_paths": [
                              {
                                    "mount_target_private_ip": "<NFS server name/IP>",
                                    "export_path": "<nfs-path>"
                              }
                        ],
                        "migration_package_mt_export_path": {
                              "mount_target_private_ip": "<NFS server name/IP>",
                              "export_path": "<nfs-path>"
                        }
                  }
            },
            "database": {
                  "db_type": "BYOD",
                  "byod": {
                        "wallet_path": "/home/opc/config/wallet",
                        "tns_connection_name": "<TNS connect string>"
                  },
                  "auth_info": {
                        "table_owner_password": "<Plain Text PWD>",
                        "table_owner_user": "<e.g. siebel>",
                        "default_user_password": "<Plain Text PWD>",
                        "anonymous_user_password": "<Plain Text PWD>",
                        "siebel_admin_password": "<Plain Text PWD>",
                        "siebel_admin_username": "<e.g. SADMIN>"
                  }
            }
      }
    Note: For more information on the parameters in the deployment payload, see Parameters in Payload Content.
  2. Submit the payload using the environment API through a POST request as follows:
    POST https://<IPAddress>:<PortNumber>/scm/api/v1.0/environment

    The variables in the example have the following values:

    • <IPAddress> is an active IP address of the host where SCM is running.
    • <PortNumber> is the port number assigned to SCM.
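
    For example, the raw request has the following shape, where the Authorization header carries the administrator credentials from api_creds.ini as basic authentication and the body is the payload prepared in step 1. The header lines are an illustrative sketch:

    POST https://<IPAddress>:<PortNumber>/scm/api/v1.0/environment
    Authorization: Basic <base64(adminUser:adminPassword)>
    Content-Type: application/json

    {
          "name": "demo",
          ...
    }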
  3. Check the status of the workflow using the GET API; the self link for the workflow is available in the POST request response. You must ensure that the:
    • Environment status of the Siebel CRM deployment using SCM is "completed".
    • Status of all stages is "passed".
    • Siebel CRM URLs are available in the GET API response after the workflow is complete.

    For information on troubleshooting, see Troubleshooting Siebel CRM Deployment.
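
    For example, if the self link returned in the POST response has the following form (an assumption for illustration; always use the link exactly as returned), you can poll it with the same administrator credentials until the environment status is "completed":

    GET https://<IPAddress>:<PortNumber>/scm/api/v1.0/environment/<environment_id>
    Authorization: Basic <base64(adminUser:adminPassword)>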

Using a Shared Network File System During Lift-And-Shift

When lifting an existing Siebel CRM environment and deploying it on premises, you can now specify the NFS server endpoint that holds all the Siebel CRM artifacts by using the nfs section within the siebel block. This section lets you specify the NFS server details: the NFS server endpoint, the NFS share directory path, and the Persistent Volume Claim (PVC) size.

To use the nfs section in the payload:

  1. Place all Siebel CRM artifacts lifted by the Lift utility in an NFS share directory that is accessible to the cluster.
  2. Include the nfs section in the siebel block of the Siebel CRM deployment payload, as follows:
    "siebel": {
          "registry_url": "<Container_registry_URL>",
          "registry_user": "<Container_user_name>",
          "registry_password": "<Container_registry_password>",
          "registry_prefix": "<Container_registry_prefix>",
          "nfs": {
                "server": "<nfsServer>",
                "path": "<nfsServerPath>",
                "storage": "<storage>"
          }
    }

    The variables in the example have the following values:

    • <nfsServer> is the NFS server endpoint.
    • <nfsServerPath> is the NFS server directory path that holds the lifted Siebel CRM artifacts.
    • <storage> is an optional parameter that specifies the PVC size of the intermediate artifactory server. The default size is 100 GB.

    For more information, see Downloading and Running the Siebel Lift Utility.