Prerequisites for Deploying Siebel CRM on a Kubernetes Cluster

This topic lists the prerequisites for deploying Siebel CRM on a Kubernetes cluster, whether on premises, in the cloud, or in your data center on OC3, using Siebel Installer for SCM.

You'll need the following to successfully run Siebel Installer and install SCM:

  • A Linux VM (version 8 or later) with at least 20 GB of free disk space.
  • Helm version 3.8 or later: Helm is the package manager for Kubernetes; a Helm package (chart) contains the resources needed to deploy an application on a Kubernetes cluster. The SCM Helm package contains the artifacts required to deploy Siebel on a Kubernetes cluster. Helm is also used to push the SCM package into the container registry and to deploy SCM and Siebel CRM on a Kubernetes cluster. For more information, refer to the online documentation for "Installing Helm".
  • Podman: An open source tool for managing containers on Linux, Windows, and other platforms. Here, Podman is used to manage the SCM container registry. For more information, refer to the online documentation for "Podman Installation".
  • Kubectl: A command-line tool for managing Kubernetes clusters. Here, kubectl is used to manage the cluster on which SCM and Siebel CRM are deployed. For more information, refer to the online documentation for kubectl.
  • A VNC session or Xterm to run Siebel Installer in GUI mode.
  • Siebel CRM: Siebel CRM 18.12 is the minimum version supported for migration to OCNE. The Siebel CRM on-premises environment must be running when you run the Siebel Lift utility.
  • Container registry with the appropriate credentials: You must have an Open Container Initiative-compliant registry, such as Harbor, with the following registry details:
    • Registry URL: The container registry URL.
    • Registry credentials: The user name and password to access the container registry.
    • (Optional) Registry prefix: When the prefix is specified, the repository path is constructed using the registry prefix. The registry user must have the privileges to create the repository or it must be created before running the installer.
  • Kubernetes cluster: A Kubernetes cluster on which to install SCM and Siebel CRM. You must configure access to the cluster on the Linux host; that is, copy the kubeconfig file of the Kubernetes cluster to a directory on Linux and set the KUBECONFIG environment variable to its path, as follows:
    export KUBECONFIG="/scratch/.kube/<kubeConfigFile>"

    In this example, <kubeConfigFile> is the configuration file of the Kubernetes cluster on which you want to install SCM and deploy Siebel CRM.

  • NFS share: You must create an NFS share to store the SCM and Siebel CRM environment state information. This directory should be accessible from the Kubernetes cluster worker nodes for mounting into the SCM pod and, later, into the Siebel CRM pods for the Siebel file system. For example:
    <nfsServerHost>:/<nfs-path>
    Note: The NFS share should have the no_root_squash parameter set for exports.
  • Kubernetes namespace: The logical division within the Kubernetes cluster in which you want to install SCM. You can create the namespace for SCM installation as follows:
    kubectl create namespace <namespace>

    In this example, <namespace> is the name of the Kubernetes namespace to install SCM in.

    Note: You can also use an existing namespace, but ensure the namespace is empty.
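You can sanity-check the client-tool prerequisites above before launching Siebel Installer. The following is an illustrative sketch, not part of the installer: it only checks that the documented tools (Helm, Podman, kubectl) are on PATH and that KUBECONFIG points at an existing file.

```shell
# Illustrative preflight check (not part of Siebel Installer):
# verify the client tools listed above and the KUBECONFIG setting.
missing=0
for tool in helm podman kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "missing: $tool"
    missing=$((missing + 1))
  fi
done

# KUBECONFIG must point at the cluster's kubeconfig file (see above).
if [ -n "${KUBECONFIG:-}" ] && [ -f "$KUBECONFIG" ]; then
  echo "KUBECONFIG is set: $KUBECONFIG"
else
  echo "KUBECONFIG is not set or the file does not exist"
fi
echo "$missing required tool(s) missing"
```

If any tool is reported missing, install it before running the installer; `helm version`, `podman --version`, and `kubectl version --client` confirm the installed versions.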
Note:
  • SCM instructions currently are in U.S. English (ENU).
  • While "lift-and-shift" supports all languages that Siebel CRM supports, Greenfield deployments of Siebel CRM using SCM currently support U.S. English (ENU) only.

Additionally, for OC3, you must ensure that the following are available to successfully install SCM and deploy Siebel CRM:

  • Python 3.6 or later: Python is required to run the OCI Command Line Interface (CLI). You must therefore set up an OCI CLI-compatible version of Python on the Linux host machine on which you will run Siebel Installer and install SCM using Helm.
  • OCI CLI: OCI CLI provides the same core functionality as the OCI Console. It is used to ensure that the OCI config file is set up correctly. To set up OCI CLI on the Linux host, run the following commands:
    pip3 install oci-cli --user
    ls -l .local/bin/
    export PATH=${PATH}:/home/<username>/.local/bin
    oci -v
    Note: When installing OCI CLI, if a compatible version of Python is not installed, the OCI CLI installer installs Python.
  • OCI configuration files: SCM installed on a Kubernetes cluster in OC3 uses the OCI SDKs and CLI. You must therefore ensure that the OCI API-compatible services running in the OC3 control plane are accessible from the Linux host machine on which SCM is installed. To ensure accessibility, prepare the OCI SDK and OCI CLI configuration files in the OC3 environment as follows.
Note: The default location of the OCI SDK and OCI CLI configuration files, config and oci_cli_rc, respectively, is the ~/.oci directory.
Note: All paths mentioned here are examples only. You can select paths of your choice when deploying in your environment.
  1. Prepare the RSA key pair in PEM format as follows:
    1. Generate a 2048-bit private key in the PEM format, as follows:
      mkdir /home/opc/.oci; cd ~/.oci
      openssl genrsa -out /home/opc/.oci/oci_api_key.pem 2048
      chmod 600 /home/opc/.oci/oci_api_key.pem
    2. Generate a public key in the PEM format, as follows:
      openssl rsa -pubout -in /home/opc/.oci/oci_api_key.pem -out /home/opc/.oci/oci_api_key_public.pem
    3. Open the public key PEM file and copy the key:
      cat /home/opc/.oci/oci_api_key_public.pem
      Note: Ensure that you copy the BEGIN PUBLIC KEY and END PUBLIC KEY lines along with the key.
    4. Add the public key to your user account in the OC3 console, as follows:
      1. Sign in to the OC3 console.
      2. Navigate to the My profile section.
      3. In the left pane, click API keys.
      4. Click Add API key.
      5. Select Paste a public key.
      6. Paste the public key that you copied from oci_api_key_public.pem.
      7. Click Add.
  2. Download the Certificate Authority (CA) certificate bundle for the OC3 environment as follows:
    curl -k https://<oc3_region>/cachain > /home/opc/.oci/ca.crt

    In the example above, <oc3_region> is the customer region.

  3. Update the OCI SDK configuration file with connectivity details such as the user OCID, fingerprint, and tenancy OCID, as follows:
    vi /home/opc/.oci/config 
        
    [DEFAULT]
    user=ocid1.user.xxxxxx...........oe2f249bo8ho4z2kp5di6gk20w.........kg1i705v.....
    fingerprint=c4:11:86:05:d3:4........:64:91:ea:2d......
    tenancy=ocid1.tenancy.xxxxxx...........ohtq81ez9p8etm2cry04u0m6..........
    region=<oc3_region>
    key_file=/home/opc/.oci/oci_api_key.pem
  4. Create and update the OCI CLI configuration file with the CA certificate details as follows:
    vi /home/opc/.oci/oci_cli_rc
    
    [DEFAULT]
    custom_cert_location=/home/opc/.oci/ca.crt
    cert-bundle=/home/opc/.oci/ca.crt
  5. Verify the OCI API connectivity in OC3, as follows:
    oci os ns get

    A response similar to the following, returning the Object Storage namespace, confirms connectivity:

    { 
          "data": "aveu8wbpqcen" 
    }
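Before running `oci os ns get`, you can confirm that the files produced in steps 1 through 4 are in place. The following is a hedged sketch assuming the example `~/.oci` paths used above; it is illustrative and not an Oracle-provided script.

```shell
# Illustrative check (assumes the example ~/.oci paths from steps 1-4).
OCI_DIR="${OCI_DIR:-$HOME/.oci}"

missing=0
for f in oci_api_key.pem ca.crt config oci_cli_rc; do
  if [ -s "$OCI_DIR/$f" ]; then
    echo "found:   $f"
  else
    echo "missing: $f"
    missing=$((missing + 1))
  fi
done

# The SDK config must reference the private key generated in step 1.
if grep -q '^key_file=' "$OCI_DIR/config" 2>/dev/null; then
  echo "config references a key_file"
fi
echo "$missing file(s) missing"
```

If any file is reported missing, revisit the corresponding step above before verifying connectivity with `oci os ns get`.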