Installing SCM using Helm

This topic describes the steps to install SCM on a Kubernetes cluster on premises or in the cloud using Helm.

This topic includes the following sections:

  • Before Installing SCM
  • Installing SCM

Before Installing SCM

You must perform the following preinstallation tasks before installing SCM on a Kubernetes cluster:

  1. Ensure that you have access to the installation directory and the container registry provided in Siebel Installer.
  2. Create an image pull secret: A pod uses a secret to pull an image from the container registry. To use the SCM image and SCM Helm chart from the container registry, create a secret using the kubectl command as follows:
    kubectl -n <namespace> create secret docker-registry <secretName> --docker-server=<registryURL> --docker-username=<userName> --docker-password=<password> --docker-email=<email>

    where:

    • <namespace> is the name of the namespace you want to install SCM in.
    • <secretName> is the name of the secret.
    • <registryURL> is the container registry URL to which the SCM image and SCM Helm chart were pushed by Siebel Installer.
    • <userName> is the container registry user name.
    • <password> is the container registry user password.
    • <email> is the container registry user email.
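    For example, the secret-creation command above might look like this with hypothetical values (namespace siebel, secret scm-registry-secret, registry registry.example.com; substitute your own details):

    ```shell
    # Hypothetical values; replace with the registry details from Siebel Installer.
    kubectl -n siebel create secret docker-registry scm-registry-secret \
      --docker-server=registry.example.com \
      --docker-username=scmuser \
      --docker-password='<password>' \
      --docker-email=scmuser@example.com

    # Confirm that the secret was created:
    kubectl -n siebel get secret scm-registry-secret
    ```

    The secret name must be a valid Kubernetes object name (lowercase alphanumeric characters and hyphens).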
  3. Update the values.yaml file: The SCM Helm package includes a default values.yaml file that determines how SCM is configured. Before installing SCM, you must update the values.yaml file to configure SCM to suit your requirements. To update the values.yaml file:
    1. Open the values.yaml file. You can use the values.yaml file in either:
      • The installation directory on the Linux host machine that was used to run Siebel installer, or
      • The SCM Helm chart in your container registry. To use the values.yaml in the container registry:
        1. Sign in to the container registry as follows:
          helm registry login <registry>

          where <registry> is the basename of the container registry.

        2. Pull the SCM Helm chart from the container registry:
          helm pull oci://<registry>/<repositoryPath> --version <releaseVersion>

          where

          • <registry> is the container registry basename.
          • <repositoryPath> is the SCM Helm chart (cloudmanager) repository path.
          • <releaseVersion> is the SCM release version.
    2. Extract the SCM Helm chart archive as follows:
      tar -zxf cloudmanager_CM_<releaseVersion>.tgz

      where <releaseVersion> is the SCM release build version that you downloaded.
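      For example, the pull and extract sequence above might look like this with a hypothetical registry and release version (substitute your own values):

      ```shell
      # Hypothetical values; use the registry and version from your installation.
      REGISTRY=registry.example.com
      REPO_PATH=siebel/helm/cloudmanager
      RELEASE_VERSION=23.1.0

      helm registry login "$REGISTRY"
      helm pull "oci://${REGISTRY}/${REPO_PATH}" --version "$RELEASE_VERSION"

      # Extract the chart archive (named per the Siebel Installer convention):
      tar -zxf "cloudmanager_CM_${RELEASE_VERSION}.tgz"
      ```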

    3. Update the following sections in the values.yaml file:
      • The image section with the details of the container registry (provided in the Siebel Installer configuration tasks) from which the SCM image is pulled for deployment, as follows:
        image:
              registry: "<registryURL>"               
              repository: "<imageRepository>"     
              tag: "<imageTag>"                   
              imagePullPolicy: IfNotPresent

        where:

        • <registryURL> is the container registry URL that was provided in the installer configuration tasks.
        • <imageRepository> is the container registry prefix that was provided in the installer configuration tasks.
        • <imageTag> is the SCM release version.
        • imagePullPolicy determines when the SCM image is pulled from the container registry. It can take one of the following values: IfNotPresent, Always, or Never.
      • (Optional) The resources section with resource (CPU, memory, and ephemeral storage) allocation for the SCM pod. The default limits and requests values specified in the values.yaml file are sufficient for a Siebel CRM deployment, but you can adjust them to suit the size of your deployment.
      • The storage section with the network file system (NFS) path for SCM and Siebel CRM deployment as follows:
        storage:
              nfsServer: <nfsServer> 
              nfsPath: <nfsPath> 
              storageSize: 200Gi

        where:

        • <nfsServer> is the IP address or fully qualified domain name of the NFS server.
        • <nfsPath> is the export path in the NFS server to access the SCM file system.
      • The imagePullSecrets section with the secret name required to pull the SCM image from the container registry as follows:
        imagePullSecrets:
              name: <secretName>

        where <secretName> is the name of the secret you created in step 2 of this section.

      • The sshKey section with the public and private key file names required to establish a connection between GitLab projects and the Flux CD operator, as follows:
        1. Create an SSH key pair as follows:
          % ssh-keygen
          Generating public/private ed25519 key pair.
          Enter file in which to save the key (/Users/<uname>/.ssh/id_ed25519): /Users/<uname>/sample
          Enter passphrase (empty for no passphrase):
          Enter same passphrase again:
          Your identification has been saved in /Users/<uname>/sample
          Your public key has been saved in /Users/<uname>/sample.pub

          where <uname> is the user name.

        2. Copy the private and public key files to the ssh directory in the SCM Helm chart home directory (cloudmanager).
        3. Update the sshKey section with the private and public key file names:
          sshKey:
                pvtKeyFilename: <privateKeyFilename>
                pubKeyFilename: <publicKeyFilename>

          where:

          • <privateKeyFilename> is the private key file name.
          • <publicKeyFilename> is the public key file name.
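        The key-generation and copy steps above can also be run non-interactively; for example (the file name scm_flux is hypothetical, and the chart is assumed to be extracted to cloudmanager/):

        ```shell
        # Generate an ed25519 key pair with no passphrase (file name is hypothetical).
        ssh-keygen -t ed25519 -N "" -f ./scm_flux

        # Copy both key files into the ssh directory of the SCM Helm chart:
        cp ./scm_flux ./scm_flux.pub cloudmanager/ssh/
        ```

        With these names, pvtKeyFilename would be scm_flux and pubKeyFilename would be scm_flux.pub in the sshKey section.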
      • The instanceMetaData section with the applicable region and compartment OCID values as follows:
        instanceMetaData:
              vaultEnabled: "False"                                   
              region: <region>                                
              compartmentOcid: <compartmentOCID>
              ociDeployment: "False"

        where:

        • <region> is the canonical region name. For example, us-ashburn-1.
        • <compartmentOCID> is the OCID of the compartment used for Oracle Cloud Infrastructure (OCI) calls.
      • (Conditional) The userEncryptionKey section: enable and update this section only when the vaultEnabled parameter is set to "False", as follows:
        userEncryptionKey:
              uek: "<encryptionkey>"

        where <encryptionkey> is a key that matches the regular expression ^[a-zA-Z0-9]{56,60}$ (that is, 56 to 60 alphanumeric characters).
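        One way to generate a conforming key is with openssl (assuming openssl and standard Unix tools are available on the host):

        ```shell
        # Generate a 56-character alphanumeric key matching ^[a-zA-Z0-9]{56,60}$.
        UEK=$(openssl rand -base64 96 | tr -dc 'a-zA-Z0-9' | head -c 56)
        echo "$UEK"

        # Sanity-check the key against the required pattern:
        echo "$UEK" | grep -Eq '^[a-zA-Z0-9]{56,60}$' && echo "key OK"
        ```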

      • The service section with the service type that will be used to expose SCM deployment as follows:
        service:
              serviceType: <servicetype>

        where <servicetype> is one of the following: ClusterIP, NodePort, or LoadBalancer. Based on the selected service type, configure the other parameters that apply to it. For example, for the NodePort service type, configure the NodePort section under the service section as follows:

        NodePort:
              name: "scm-node-port" 
              customMetadata: {} 
              customLabels: {} 
              customAnnotations: {} 
              secret:
                    name: "scm-node-port-ssl-secret" 
                    sslCertificatePath: "/etc/ssl/certs/scm.crt" 
                    sslKeyPath: "/etc/ssl/private/scm.key"       
                    customMetadata: {} 
                    customLabels: {}   
                    customAnnotations: {} 
              selfSignedCert:
                    country: "US"                
                    state: "California"          
                    locality: "San Francisco"    
                    organization: "Oracle Corporations"  
                    commonName: "oracle.com"     
                    dnsName: "scm-cluster-ip-service"
        Note: If you updated the values.yaml file that you pulled from the container registry, repackage the SCM Helm chart and push it back to the container registry as follows:
        tar -zcf cloudmanager_CM_updated_<releaseVersion>.tgz cloudmanager
        helm push cloudmanager_CM_updated_<releaseVersion>.tgz oci://<registry>/<repositoryPath>

        where:

        • <registry> is the container registry basename.
        • <repositoryPath> is the SCM Helm chart (cloudmanager) repository path.
        • <releaseVersion> is the SCM release version.
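        For example, the repackage-and-push sequence might look like this with hypothetical values (substitute your own registry details and release version):

        ```shell
        # Hypothetical values; use your own registry details and release version.
        REGISTRY=registry.example.com
        REPO_PATH=siebel/helm/cloudmanager
        RELEASE_VERSION=23.1.0

        # Repackage the chart directory that contains the updated values.yaml:
        tar -zcf "cloudmanager_CM_updated_${RELEASE_VERSION}.tgz" cloudmanager

        # Push the updated chart back to the container registry:
        helm push "cloudmanager_CM_updated_${RELEASE_VERSION}.tgz" "oci://${REGISTRY}/${REPO_PATH}"
        ```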

Installing SCM

This section describes the steps to install SCM on a Kubernetes cluster on premises or in the cloud using Helm.

To install SCM using Helm:

  1. Go to the SCM Helm chart directory and run the Helm install command as follows:
    cd cloudmanager
    helm install <releaseName> . -n <namespace>

    where:

    • <releaseName> is the SCM Helm chart instance identifier.
    • <namespace> is the name of the namespace to install SCM in.
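    For example, the install command might look like this with a hypothetical release name and namespace (substitute your own):

    ```shell
    # Hypothetical values; choose your own release name and namespace.
    cd cloudmanager
    helm install scm-release . -n siebel

    # Check the release status after installation:
    helm status scm-release -n siebel
    ```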
  2. Verify that the SCM pod is running using the following command:
    kubectl get pods -n <namespace> 
  3. Build the SCM application URL (when the service type is NodePort) as follows:
    1. Get a node IP address:
      kubectl get nodes -o wide
      Note: The SCM application port is mapped to all active nodes, hence any node IP can be used to build the SCM application URL. You can copy the external IP (if available) or the internal IP as per your Kubernetes configuration.
    2. Get the node port number assigned to the service (node port range: 30000–32767):
      kubectl get svc/scm-app-service -n <namespace>
    3. Build the SCM application URL using the node IP address and node port as follows:
      https://<nodeIPAddress>:<nodePortNumber> 

      where:

      • <nodeIPAddress> is any active node IP address.
      • <nodePortNumber> is the assigned node port number.
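      The three sub-steps above can be combined into a short script (assuming the service name scm-app-service shown above and a hypothetical namespace siebel):

      ```shell
      # Hypothetical namespace; adjust for your deployment.
      NS=siebel

      # Pick the first node's internal IP (use an external IP if your cluster exposes one):
      NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

      # Read the node port assigned to the SCM service:
      NODE_PORT=$(kubectl get svc scm-app-service -n "$NS" -o jsonpath='{.spec.ports[0].nodePort}')

      # Build the SCM application URL:
      echo "https://${NODE_IP}:${NODE_PORT}"
      ```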
  4. Access the SCM application URL and verify that the swagger page is loading correctly.