Installing SCM using Helm

This topic describes the steps to install SCM using Helm on a Kubernetes cluster, whether on premises, in the cloud, or on OC3 in your data center.

This topic includes the following sections:

  • Before Installing SCM
  • Installing SCM

Before Installing SCM

You must perform the following preinstallation tasks before installing SCM on a Kubernetes cluster:

  1. Ensure you have access to the installation directory and the container registry provided in Siebel Installer.
  2. Create an image pull secret: A pod uses a secret to pull an image from the container registry. To use the SCM image and SCM Helm chart from the container registry, create a secret using the kubectl command as follows:
    kubectl -n <namespace> create secret docker-registry <secretName> --docker-server=<registryURL> --docker-username=<userName> --docker-password=<password> --docker-email=<email>

    The variables in the example have the following values:

    • <namespace> is the name of the namespace you want to install SCM in.
    • <secretName> is the name of the secret.
    • <registryURL> is the container registry URL to which the SCM image and SCM Helm chart were pushed by Siebel Installer.
    • <userName> is the container registry user name.
    • <password> is the container registry user password.
    • <email> is the container registry user email.
  3. Update the values.yaml file: The SCM Helm package includes a default values.yaml file, which determines how SCM is configured. Before installing SCM, you must update the values.yaml file to configure SCM according to your requirements. To update the values.yaml file:
    1. Open the values.yaml file. You can use the values.yaml file in either:
      • The installation directory on the Linux host machine that was used to run Siebel installer, or
      • The SCM Helm chart in your container registry. To use the values.yaml in the container registry:
        1. Sign in to the container registry as follows:
          helm registry login <registry>

          In this example, <registry> is the basename of the container registry.

        2. Pull the SCM Helm chart from the container registry:
          helm pull oci://<registry>/<repositoryPath> --version <releaseVersion>

          The variables in the example have the following values:

          • <registry> is the container registry basename.
          • <repositoryPath> is the SCM Helm chart (cloudmanager) repository path.
          • <releaseVersion> is the SCM release version.
    2. Extract the SCM Helm chart archive as follows:
      tar -zxf cloudmanager_CM_<releaseVersion>.tgz

      In this example, <releaseVersion> is the SCM release build version that you downloaded.

    3. Update the following sections in the values.yaml file:
      • The image section with the container registry details (provided in the Siebel Installer configuration tasks) from which the SCM image and SCM Helm chart will be used for deployment, as follows:
        image:
              registry: "<registryURL>"               
              repository: "<imageRepository>"     
              tag: "<imageTag>"                   
              imagePullPolicy: IfNotPresent

        The variables in the example have the following values:

        • <registryURL> is the container registry URL that was provided in the installer configuration tasks.
        • <imageRepository> is the container registry prefix that was provided in the installer configuration tasks.
        • <imageTag> is the SCM release version.
        • imagePullPolicy determines when the SCM image is pulled from the container registry. It can take the following values: IfNotPresent, Always, or Never.
      • (Optional) The resources section with resource (CPU, memory, and ephemeral storage) allocation for the SCM pod. The default limits and requests values already specified for the resources in the values.yaml are sufficient for Siebel CRM deployment, but you can update these values as required as per the size of your Siebel CRM deployment.
      • The storage section with the network file system (NFS) path for SCM and Siebel CRM deployment as follows:
        storage:
              nfsServer: <nfsServer> 
              nfsPath: <nfsPath> 
              storageSize: 200Gi

        The variables in the example have the following values:

        • <nfsServer> is the IP address or fully qualified domain name of the NFS server.
        • <nfsPath> is the export path in the NFS server to access the SCM file system.
      • The imagePullSecrets section with the secret name required to pull the SCM image from the container registry as follows:
        imagePullSecrets:
              name: <secretName>

        In this example, <secretName> is the name of the secret you created in step 2 of this section.

      • The sshKey section with the public and private key file names required to establish a connection between the Git repository and the Fluxcd operator, as follows:
        1. Create an SSH key pair as follows:
          % ssh-keygen
          Generating public/private ed25519 key pair.
          Enter file in which to save the key (/Users/<uname>/.ssh/id_ed25519): /Users/<uname>/sample
          Enter passphrase (empty for no passphrase):
          Enter same passphrase again:
          Your identification has been saved in /Users/<uname>/sample
          Your public key has been saved in /Users/<uname>/sample.pub

          In this example, <uname> is the user name.

        2. Copy the private and public key files to the ssh directory in the SCM Helm chart home directory (cloudmanager).
        3. Update the sshKey section with the private and public key file names:
          sshKey:
                pvtKeyFilename: <privateKeyFilename>
                pubKeyFilename: <publicKeyFilename>

          The variables in the example have the following values:

          • <privateKeyFilename> is the private key file name.
          • <publicKeyFilename> is the public key file name.
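The key-generation and copy steps above can be sketched non-interactively as follows. This is a sketch only: the key file name scm_flux_key is a hypothetical example, and the chart is assumed to be extracted to ./cloudmanager.

```shell
# Create an ed25519 key pair without prompts; -N "" sets an empty
# passphrase, and scm_flux_key is a hypothetical file name.
ssh-keygen -t ed25519 -N "" -f ./scm_flux_key -q

# Copy both key files into the ssh directory of the extracted chart
# (assumed to be ./cloudmanager, as described in the steps above).
mkdir -p cloudmanager/ssh
cp scm_flux_key scm_flux_key.pub cloudmanager/ssh/
```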
      • The ociConfig section with the details of the files required for OCI API authentication to access OCI infrastructure services in an OC3 environment as follows:
        Note: You must configure the caCrtFilename and ociCliRcFilename parameters only when deploying Siebel CRM in OC3.
        ociConfig:
              ociPvtKeyFilename: <ociPrivateKeyFilename>
              caCrtFilename: <caCertificateFileName>
              ociCliRcFilename: <cliRCFileName>

        The variables in the example have the following values:

        • <ociPrivateKeyFilename> is the private key PEM file name. For example, oci_api_key.pem.
        • <caCertificateFileName> is the CA certificate file name. For example, ca.crt.
        • <cliRCFileName> is the OCI CLI RC configuration file name. For example, oci_cli_rc.
      • The instanceMetaData section with the applicable region and compartment OCID values as follows:
        instanceMetaData:
              vaultEnabled: "False"                                   
              region: <region>                                
              compartmentOcid: <compartmentOCID>
              ociDeployment: <deploymentType>

        The variables in the example have the following values:

        • <region> is the canonical region name. For example, us-ashburn-1.
        • <compartmentOCID> is the OCID of the compartment used for Oracle Cloud Infrastructure (OCI) calls.
        • <deploymentType> determines the environment on which you are deploying Siebel CRM. If you are deploying Siebel CRM on:
          • A CNCF certified Kubernetes cluster on premises or in the cloud, set the value of this parameter to "false". This parameter is of string type, so ensure you enclose false in quotes.
          • OC3 in your data center, set the value of this parameter to "oc3".
          • OCI, set the value of this parameter to "public".
      • The userEncryptionKey section: enable and update this section only when the vaultEnabled parameter is set to "False", as follows:
        userEncryptionKey:
              uek: "<encryptionkey>"

        In this example, <encryptionkey> is a key which matches the following expression: ^[a-zA-Z0-9]{56,60}$
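One way to produce a key matching this pattern is to hex-encode random bytes; a minimal sketch, assuming openssl is available on the host:

```shell
# 30 random bytes hex-encoded yield 60 characters from [0-9a-f],
# which satisfies the required pattern ^[a-zA-Z0-9]{56,60}$.
uek=$(openssl rand -hex 30)

# Verify the generated key against the required pattern.
echo "${uek}" | grep -Eq '^[a-zA-Z0-9]{56,60}$' && echo "uek is valid"
```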

      • The service section with the service type that will be used to expose SCM deployment as follows:
        service:
              serviceType: <servicetype>

        In this example, <servicetype> is one of the following: ClusterIP, NodePort, or LoadBalancer. Based on the selected service type, configure the other parameters applicable to it. For example, for the NodePort service type, configure the NodePort section under the service section as follows:

        NodePort:
              name: "scm-node-port" 
              customMetadata: {} 
              customLabels: {} 
              customAnnotations: {} 
              secret:
                    name: "scm-node-port-ssl-secret" 
                    sslCertificatePath: "/etc/ssl/certs/scm.crt" 
                    sslKeyPath: "/etc/ssl/private/scm.key"       
                    customMetadata: {} 
                    customLabels: {}   
                    customAnnotations: {} 
              selfSignedCert:
                    country: "US"                
                    state: "California"          
                    locality: "San Francisco"    
                    organization: "Oracle Corporations"  
                    commonName: "oracle.com"     
                    dnsName: "scm-cluster-ip-service"
        Note: When deploying Siebel CRM on OC3 with LoadBalancer as the serviceType, you must configure the customAnnotations and secret sections as per the instructions in the values.yaml file. An example of these sections for such a deployment is as follows:
        service:
              serviceType: LoadBalancer
        customAnnotations:               
              oci.oraclecloud.com/load-balancer-type: "lb"
              service.beta.kubernetes.io/oci-load-balancer-tls-secret: scm-lb-cert-lb-secret
              service.beta.kubernetes.io/oci-load-balancer-internal: "false"
              service.beta.kubernetes.io/oci-load-balancer-shape: "flexible"
              service.beta.kubernetes.io/oci-load-balancer-shape-flex-min: "10"
              service.beta.kubernetes.io/oci-load-balancer-shape-flex-max: "100"
              service.beta.kubernetes.io/oci-load-balancer-subnet1: ocid1.xxxxx.xxx.xx.xxxxxxxxx................speygundxpjuhu23lorqq
              oci.oraclecloud.com/oci-load-balancer-listener-ssl-config: '{"CipherSuiteName":"oci-default-http2-tls-12-13-ssl-cipher-suite-v1",
              "Protocols":["TLSv1.2","TLSv1.3"]}'
              service.beta.kubernetes.io/oci-load-balancer-ssl-ports: "443"
               
        secret:
              certFileSecretNameLbTlsTermination: scm-lb-cert-lb-secret 
        Note: If you updated the values.yaml file that you pulled from the container registry, you can repackage the updated SCM Helm chart and push it into the container registry as follows:
        tar -zcf cloudmanager_CM_updated_<releaseVersion>.tgz cloudmanager
        helm push cloudmanager_CM_updated_<releaseVersion>.tgz oci://<registry>/<repositoryPath>

        The variables in the example have the following values:

        • <registry> is the container registry basename.
        • <repositoryPath> is the SCM Helm chart (cloudmanager) repository path.
        • <releaseVersion> is the SCM release version.
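The repackaging step in the note above can be tried locally with a stand-in chart directory before pushing to a real registry; the version number 25.1 below is hypothetical:

```shell
# Stand-in for the extracted chart directory (a real one comes from helm pull).
mkdir -p cloudmanager
printf 'image: {}\n' > cloudmanager/values.yaml

# Repackage the chart directory into a versioned archive.
tar -zcf cloudmanager_CM_updated_25.1.tgz cloudmanager

# Confirm the archive contains the updated values.yaml before pushing it.
tar -ztf cloudmanager_CM_updated_25.1.tgz | grep values.yaml
```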

Installing SCM

This section describes the steps to install SCM using Helm on a Kubernetes cluster, whether on premises, in the cloud, or on OC3 in your data center.

To install SCM using Helm:

  1. Go to the SCM Helm chart directory and run the Helm install command as follows:
    cd cloudmanager
    helm install <releaseName> . -n <namespace>

    The variables in the example have the following values:

    • <releaseName> is the SCM Helm chart instance identifier.
    • <namespace> is the name of the namespace to install SCM in.
  2. Verify that the SCM pod is running using the following command:
    kubectl get pods -n <namespace> 
  3. Build the SCM application URL (when the service type is NodePort) as follows:
    1. Get a node IP address:
      kubectl get nodes -o wide
      Note: The SCM application port is mapped to all active nodes, hence any node IP can be used to build the SCM application URL. You can copy the external IP (if available) or the internal IP as per your Kubernetes configuration.
    2. Get the assigned node port number from the service (Port Range 30000 – 32767):
      kubectl get svc/scm-app-service -n <namespace>
    3. Build the SCM application URL using the node IP address and node port as follows:
      https://<nodeIPAddress>:<nodePortNumber> 

      The variables in the example have the following values:

      • <nodeIPAddress> is any active node IP address.
      • <nodePortNumber> is the assigned node port number.
    Note: When the serviceType is set to LoadBalancer, build the SCM application URL as follows:
    1. Get the external IP and port:
      kubectl get svc -n <namespace>
    2. Build the SCM application URL using the external IP and port number as follows:
      https://<externalIP>:<PortNumber>
  4. Access the SCM application URL and verify that the Swagger page loads correctly.
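The URL construction in step 3 can be sketched as follows; the IP address and port below are hypothetical placeholders for the values returned by the kubectl commands:

```shell
# Hypothetical values; in practice these come from
# 'kubectl get nodes -o wide' and 'kubectl get svc' as shown in step 3.
NODE_IP="203.0.113.10"
NODE_PORT="31443"

# Compose the SCM application URL from the node IP and node port.
SCM_URL="https://${NODE_IP}:${NODE_PORT}"
echo "${SCM_URL}"    # prints https://203.0.113.10:31443
```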