2 Installing and Configuring Oracle Identity Role Intelligence

Installing and configuring Oracle Identity Role Intelligence involves setting up the configuration files, creating the wallet, installing the Helm chart, and starting the data load process.

2.1 About OIRI on Kubernetes

OIRI uses Kubernetes as the Container Orchestration System.

OIRI uses Helm as the package manager for Kubernetes, which is used to install and upgrade OIRI on Kubernetes.

Figure 2-1 shows the deployment architecture of OIRI.

Figure 2-1 Deployment Architecture


OIRI deployment includes the following components:

  • Oracle Identity Governance (OIG): This represents an existing OIG setup. OIG is a prerequisite for setting up OIRI and acts as an Identity Provider (IDP) for OIRI. As a result, any user logging in to OIRI is authenticated against OIG. OIG is also used to load or import data for role mining into OIRI; access to the OIG database is required to import data into OIRI. Data is imported into OIRI through the Data Ingestion Command Line Interface (ding-cli) component.

  • OIRI Command Line Interface (oiri-cli): This component is used to configure and install OIRI. The CLI runs as a pod inside the Kubernetes cluster. All the configuration scripts and the Helm chart exist inside this pod. Command-line utilities, such as kubectl and helm, are also available inside the container. This CLI is also used to create the wallet and keystore. The wallet securely stores the credentials of the OIRI database, OIG database, KeyStore, and OIG service account. The KeyStore contains the Secure Sockets Layer (SSL) and token signing certificates. This VM should also have connectivity to the OIRI and OIG databases.

  • OIRI Cluster: The OIRI Service, OIRI UI, Spark History Server, and front-end load balancer are installed as part of the Helm chart installation from the oiri-cli container. The Spark History Server is not exposed outside the Kubernetes cluster and can be accessed by using kubectl port-forward. See Installing the OIRI Helm Chart. The OIRI Service has connectivity with OIG to authenticate the user logging in to the OIRI UI, and is also used to publish the mined roles back to OIG.

  • Data Ingestion Command Line Interface (ding-cli): This is a secure VM used by the ETL Admin to carry out the data import process. This VM should have connectivity and access to the Kubernetes cluster to trigger the data import jobs. The data import jobs run inside a Spark cluster. This VM should also have connectivity with the OIRI and OIG databases.

  • Spark Cluster: This is an ephemeral Spark cluster. When a data import job is triggered from the ding-cli, the Kubernetes scheduler spins up a driver pod and executor pods. When the data import job is completed, the executor pods are terminated, and the driver pod state changes from Running to Completed. This Spark cluster should have connectivity with the OIRI and OIG databases.

  • Persistent Volume (PV): This is a persistent volume mounted on the Network File System (NFS) server. This is used to store all the configuration files and data that needs to be persisted, such as logs. All the components should have access to the PV.

  • Container Registry: This is the Docker registry, from which the required Docker images are pulled. Optionally, you can also use the .tar files for the images and load the images manually on all the VMs and Kubernetes nodes.
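As an illustration of the Persistent Volume component described above, an NFS-backed PersistentVolume for the OIRI share might look like the following sketch. The server address, path, name, and capacity are example values, not part of the product configuration:

```yaml
# Illustrative only: an NFS-backed PersistentVolume for the OIRI share.
# Server address, path, name, and capacity are placeholder values.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: oiri-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany          # all OIRI components need access to the share
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 100.69.233.106   # NFS server hosting the share
    path: /nfs/oiri
```

In the actual installation, the persistent volumes are created from the values.yaml settings during the Helm chart installation; this fragment only shows the shape of an NFS-backed volume.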

2.2 Prerequisites for Installing OIRI

The prerequisites for installing OIRI on Kubernetes are:

  • Oracle Database version starting from 12c Release 2 (12.2.0.1), on-premises or container-based, is installed and running. Oracle Database versions 18.3 and 19.3 are also supported.

    Note:

    If you have upgraded the OIRI database from 12.1.x to 12.2.x, 18c, or 19c, update the database initialization parameter compatible to a value of '12.2' or higher. If this is not done, you will see ORA-00972: identifier is too long errors when creating some OIRI database objects.
  • Oracle Identity Governance 12c (12.2.1.4.0) is installed and Oracle Identity Governance Bundle Patch 12.2.1.4.210428 is applied.

  • Docker version 19.03.11 or later and a Kubernetes cluster (v1.17 or later) with kubectl are installed. See the Kubernetes documentation for information about installing a Kubernetes cluster.

  • Network File System (NFS) is available. NFS is used to create persistent volumes for use across nodes.

  • Create a user in Oracle Identity Governance (OIG) to log in to OIRI. See Creating a User in Performing Self Service Tasks with Oracle Identity Governance.

  • Authentication configuration is completed to authenticate users from OIG. See Configuring Authentication With Oracle Identity Governance for information about configuring authentication with OIG.

  • The Identity Audit feature is enabled in OIG. See Enabling Identity Audit in Performing Self Service Tasks with Oracle Identity Governance for information about enabling Identity Audit in OIG.
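The version floors above can be checked mechanically before you start. The following sketch is an illustration only (the helper name ver_ge is ours, not part of OIRI); it compares dotted version strings using sort -V, and in practice you would feed it the output of your docker version and kubectl version commands:

```shell
# ver_ge A B -> succeeds if version A >= version B (illustrative helper).
ver_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example checks against the documented minimum versions.
ver_ge "19.03.13" "19.03.11" && echo "Docker version OK"
ver_ge "1.18.4"   "1.17"     && echo "Kubernetes version OK"
ver_ge "12.2.0.1" "12.2.0.1" && echo "Oracle Database version OK"
```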

2.3 System Requirements and Certification

Ensure that your environment meets the system requirements such as hardware and software, minimum disk space, memory, required system libraries, packages, or patches before performing any installation.

The minimum system requirements for installing OIRI are:

  • For installing OIRI on a standalone host:
    • 16 GB of RAM
    • Disk space of 50 GB
    • 2 CPU
  • For installing OIRI on a Kubernetes cluster:
    • Number of nodes : 3
    • 16 GB of RAM per node
    • 2 CPU per node (with virtualization support, for example Intel VT)
    • Disk space of 150 GB

The certification document covers supported installation types, platforms, operating systems, databases, JDKs, and third-party products:

http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html

2.4 Configuring Authentication With Oracle Identity Governance

Oracle Identity Governance (OIG) manages the OIRI user and user access to the OIRI application.

To configure authentication to OIRI with OIG:

  1. Create the user, for example janedoe, to log in to OIRI.
  2. Create the OIRI role engineer role in OIG. To do so, create a role named OrclOIRIRoleEngineer, and assign it to the application user, such as janedoe. Only a user with the OrclOIRIRoleEngineer role can log in to the OIRI application. See Creating Roles in Performing Self Service Tasks with Oracle Identity Governance.
  3. Create a user, for example OIRIServiceAccountUser, in OIG to use as the service principal in OIRI for back-channel authentication and the role publishing task. This account serves the following purposes:
    • On startup, OIRI service authenticates with OIG by using the service account user, such as OIRIServiceAccountUser.

    • The OIRI application uses the service account to authenticate the application user with OIG during the application user login. For authenticating the application user, such as janedoe, with OIG, the service account user, such as OIRIServiceAccountUser, must have an admin role with User - View/Search capabilities. This is required because the service account user has to search for the application user in OIG to authenticate the user.

    • OIRI uses the service account user to publish roles to OIG. For publishing roles to OIG, the service account user, such as OIRIServiceAccountUser, must have an admin role with the following capabilities:

      • User - View / Search
      • Role - Create
      • Access Policy - Create

      See Creating an Admin Role in Performing Self Service Tasks with Oracle Identity Governance for information about creating an admin role in OIG.

      Note:

      • The role with the above capabilities must have Scope of Control and Organization set to Top. This is required for creating access policies because the provisioned applications might belong to different organizations.

      • The OIRI service account password in OIG expires per the password policy. To update the service account password in OIRI wallet when the OIRI service account password is updated in OIG, perform step 2 of Verifying and Updating the Wallet by using OIGSA mode. After the service account password is updated in the OIRI wallet, restart the OIRI service before publishing roles to OIG.

2.5 Loading the Container Images

The OIRI service comprises the following four container images:

  • oiri: OIRI service
  • oiri-cli: OIRI command line interface
  • oiri-ding: For data import
  • oiri-ui: Identity Role Intelligence user interface

You can load the images by referring to the following:

2.5.1 Using the Container Images from the Container Registry

You can download the container images from the OIRI repository, which is available inside middleware/ at container-registry.oracle.com.

To pull the image:

  1. From your container environment, log in to the Oracle Container Registry, and enter your Oracle SSO username and password when prompted:
    $ docker|podman login container-registry.oracle.com

    Prompt:

    Username: <USERNAME>
    Password: <PASSWORD>
  2. Pull the oiri-cli image by running the following command:
    $ docker|podman pull container-registry.oracle.com/middleware/oiri-cli:latest

    Note:

    To download the latest patch set, pull the latest Critical Patch Update (CPU) images by running the following commands:
    $ docker|podman pull container-registry.oracle.com/middleware/oiri-cli_cpu:<TAG>
    $ docker|podman pull container-registry.oracle.com/middleware/oiri-ding_cpu:<TAG>
    $ docker|podman pull container-registry.oracle.com/middleware/oiri-ui_cpu:<TAG>
    $ docker|podman pull container-registry.oracle.com/middleware/oiri_cpu:<TAG>

    For example:

    $ docker|podman pull container-registry.oracle.com/middleware/oiri-cli_cpu:12.2.1.4.230310
    $ docker|podman pull container-registry.oracle.com/middleware/oiri-ding_cpu:12.2.1.4.230310
    $ docker|podman pull container-registry.oracle.com/middleware/oiri-ui_cpu:12.2.1.4.230310
    $ docker|podman pull container-registry.oracle.com/middleware/oiri_cpu:12.2.1.4.230310

    Continue with the steps to install and configure OIRI, as described in Setting Up the Configuration Files.
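The four CPU image pulls shown above can be generated in a single loop. This is a sketch, not part of the product: the registry path comes from the steps above, podman is assumed (docker works the same way), and the tag is the example tag, so substitute the CPU tag you actually need. Pipe the output to sh (after logging in) to run the pulls:

```shell
# Print the pull commands for all four OIRI CPU images.
REGISTRY="container-registry.oracle.com/middleware"
TAG="12.2.1.4.230310"   # example tag from the step above; use your CPU tag
for image in oiri-cli_cpu oiri-ding_cpu oiri-ui_cpu oiri_cpu; do
  echo "podman pull $REGISTRY/$image:$TAG"
done
```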

2.6 Setting Up the Configuration Files

To set up the files required for configuring data import (or data ingestion) and Helm chart:

  1. Create the following directories on NFS:

    The Kubernetes Cluster Administrator performs the following steps:

    $ mkdir <OIRI_SHARE>
    $ mkdir <OIRI_DING_SHARE>
    $ mkdir <OIRI_WORK_SHARE>

    For example:

    $ mkdir /nfs/oiri
    $ mkdir /nfs/ding
    $ mkdir /nfs/k8s

    Note:

    Create the directories as your OIRI user rather than as root. If you create them as root, you will encounter permission errors when running setupConfFiles.sh.
  2. Ensure write permissions on the directories created in step 1 by running the following command:

    The Kubernetes Cluster Administrator performs the following steps:

    $ chmod -R 777 /nfs/ding /nfs/oiri /nfs/k8s
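Before proceeding, you can confirm the permissions took effect with a quick write test on each share. The sketch below uses temporary demo directories rather than the real NFS paths, so it is safe to try anywhere; in a real setup, point dirs at /nfs/oiri /nfs/ding /nfs/k8s:

```shell
# Write-test each share: create and then delete a marker file.
# The /tmp paths below are stand-ins for /nfs/oiri /nfs/ding /nfs/k8s.
dirs="/tmp/oiri-demo/oiri /tmp/oiri-demo/ding /tmp/oiri-demo/k8s"
for d in $dirs; do
  mkdir -p "$d"
  if touch "$d/.write_test" 2>/dev/null; then
    rm -f "$d/.write_test"
    echo "writable: $d"
  else
    echo "NOT writable: $d" >&2
  fi
done
```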
  3. Set up the Kube config. To do so:

    The Kubernetes Cluster Administrator performs the following steps:

    1. Create namespaces for OIRI and DING.
      $ kubectl create namespace oirins
      namespace/oirins created
      $ kubectl create namespace dingns
      namespace/dingns created
    2. Create oiri-service-account.yaml with the following content. Replace <OIRINS> with the OIRI namespace, and <DINGNS> with the DING namespace.
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: oiri-service-account
        namespace: <OIRINS>
      ---
      apiVersion: v1
      kind: Secret
      type: kubernetes.io/service-account-token
      metadata:
        name: oiri-service-account-secret
        namespace: <OIRINS>
        annotations:
          kubernetes.io/service-account.name: "oiri-service-account"
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: oiri-ns-role
        namespace: <OIRINS>
      rules:
      - apiGroups: ["*"]
        resources: ["*"]
        verbs: ["*"]
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: ding-ns-role
        namespace: <DINGNS>
      rules:
      - apiGroups: ["*"]
        resources: ["*"]
        verbs: ["*"]
      ---
      kind: ClusterRole
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: oiri-ingress-nginx-clusterrole
      rules:
      - apiGroups: [""]
        resources: ["configmaps", "endpoints", "nodes", "pods", "secrets"]
        verbs: ["watch", "list"]
      - apiGroups: [""]
        resourceNames: ["<OIRINS>"]
        resources: ["namespaces"]
        verbs: ["get"]
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get"]
      - apiGroups: [""]
        resources: ["services"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["create", "patch"]
      - apiGroups: ["extensions"]
        resources: ["ingresses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: ["extensions"]
        resources: ["ingresses/status"]
        verbs: ["update"]
      - apiGroups: ["networking.k8s.io"]
        resources: ["ingresses/status"]
        verbs: ["update"]
      - apiGroups: ["networking.k8s.io"]
        resources: ["ingresses", "ingressclasses"]
        verbs: ["create", "delete", "get", "list", "watch"]
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: oiri-ingress-nginx-clusterrolebinding-<OIRINS>
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: oiri-ingress-nginx-clusterrole
      subjects:
      - namespace: <OIRINS>
        kind: ServiceAccount
        name: oiri-service-account
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: oiri-clusterrolebinding-<OIRINS>
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: system:persistent-volume-provisioner
      subjects:
      - namespace: <OIRINS>
        kind: ServiceAccount
        name: oiri-service-account
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: oiri-rolebinding
        namespace: <OIRINS>
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: oiri-ns-role
      subjects:
      - namespace: <OIRINS>
        kind: ServiceAccount
        name: oiri-service-account
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: ding-rolebinding
        namespace: <DINGNS>
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: ding-ns-role
      subjects:
      - namespace: <OIRINS>
        kind: ServiceAccount
        name: oiri-service-account
    3. Run the following kubectl commands. Replace <OIRINS> with the OIRI namespace where appropriate.
      $ kubectl apply -f oiri-service-account.yaml
      $ TOKEN=`kubectl -n <OIRINS> get secret oiri-service-account-secret -o jsonpath='{.data.token}'| base64 --decode`
      $ kubectl -n <OIRINS> get secret oiri-service-account-secret -o jsonpath='{.data.ca\.crt}'| base64 --decode > ca.crt
      $ K8SURL=`grep server: $KUBECONFIG | sed 's/server://;s/ //g'`

      Note:

      The command to get K8SURL works only if you have a single cluster configured. Please make sure that the URL returned is the one where you want to install OIRI.
    4. Share ca.crt and the TOKEN with the OIRI Installation Administrator by copying ca.crt to the Kubernetes directory, and providing the TOKEN created in step 3.c.
      $ cp ca.crt /nfs/k8s
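The K8SURL extraction in step 3.c can be sanity-checked offline. This sketch runs the same grep/sed pipeline against a throwaway sample kubeconfig (the server address is made up), which also illustrates why the one-liner only works with a single configured cluster:

```shell
# Verify what the K8SURL one-liner extracts, using a sample kubeconfig.
KUBECONFIG=/tmp/sample_kubeconfig
cat > "$KUBECONFIG" <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://10.0.0.5:6443
  name: oiri-cluster
EOF
# Same extraction as step 3.c: strip the "server:" key and all spaces.
K8SURL=$(grep server: "$KUBECONFIG" | sed 's/server://;s/ //g')
echo "$K8SURL"
```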
  4. Configure and start the OIRI CLI

    The OIRI Installation Administrator performs the following steps:

    1. The OIRI Installation Administrator sets up environment variables for the OIRI namespace, and a working directory.
      OIRINS=oiri
      WORKDIR=/work/oiri/
      TOKEN=<Token Shared by the Kubernetes Cluster Admin>
      K8SURL=<Kubernetes API Server URL shared by the Kubernetes Cluster Admin>
      $ kubectl config --kubeconfig=$WORKDIR/oiri_config set-cluster oiri-cluster --server=$K8SURL --certificate-authority=$WORKDIR/ca.crt --embed-certs=true
      $ kubectl config --kubeconfig=$WORKDIR/oiri_config set-credentials oiri-service-account --token=$TOKEN
      $ kubectl config --kubeconfig=$WORKDIR/oiri_config set-context oiri --user=oiri-service-account --cluster=oiri-cluster
      $ kubectl config --kubeconfig=$WORKDIR/oiri_config use-context oiri

      These commands generate a file called oiri_config in the <WORKDIR> location. This file contains the Kubernetes cluster details.

    2. The OIRI Installation Administrator creates a container-registry secret.

      If you are using a container registry and want to pull the container images on demand, you must create a secret that contains the login details of the container registry. This step is not required if you have staged the container images locally.

      To create a container registry secret, use the following command.

      $ kubectl create secret -n <NAMESPACE> docker-registry regcred --docker-server=<REGISTRY_ADDRESS> --docker-username=<USERNAME> --docker-password=<PASSWORD>

      where:

      • NAMESPACE is the OIRI or DING namespace.
      • REGISTRY_ADDRESS is the location of the registry. For example: container-registry.oracle.com.
      • USERNAME is the user name that you use to log in to the registry.
      • PASSWORD is the registry user password.
      For example:
      $ kubectl create secret \
      -n oiri docker-registry regcred \
      --docker-server=container-registry.oracle.com \
      --docker-username=myemail@email.com \
      --docker-password=<password>
      $ kubectl create secret \
      -n ding docker-registry regcred \
      --docker-server=container-registry.oracle.com \
      --docker-username=myemail@email.com \
      --docker-password=<password>
    3. Create a file called oiri-cli.yaml with the following content:
      apiVersion: v1
      kind: Pod
      metadata:
        name: oiri-cli
        namespace: <OIRINS>
        labels:
          app: oiricli
      spec:
        restartPolicy: OnFailure
        volumes:
          - name: oiripv
            nfs:
              server: <PVSERVER>
              path: <OIRI_SHARE>
          - name: dingpv
            nfs:
              server: <PVSERVER>
              path: <OIRI_DING_SHARE>
          - name: workpv
            nfs:
              server: <PVSERVER>
              path: <OIRI_WORK_SHARE>
        containers:
        - name: oiricli
          image: <OIRI_CLI_IMAGE>:<OIRICLI_VER>
          volumeMounts:
            - name: oiripv
              mountPath: /app/oiri
            - name: dingpv
              mountPath: /app
            - name: workpv
              mountPath: /app/k8s
          command: ["/bin/bash", "-ec", "tail -f /dev/null"]
        imagePullSecrets:
          - name: regcred

      where:

      • OIRINS is the name of the namespace you are using to hold the OIRI objects.
      • PVSERVER is the IP address of the NFS server hosting the persistent volumes.
      • OIRI_SHARE is the NFS mount location for the OIRI persistent volume.
      • OIRI_DING_SHARE is the NFS mount location for the OIRI Ding persistent volume.
      • OIRI_WORK_SHARE is the NFS mount of the OIRI Work persistent volume.
      • OIRI_CLI_IMAGE is the name of the OIRI CLI image file. If you are using a container registry, the name is prefixed with the container registry name. For example: container-registry.oracle.com/idm/oiri-cli.
      • OIRICLI_VER is the version of the image you want to use. For example: 12.2.1.4.latest.
      • imagePullSecrets is required only if you are using a container registry; regcred is the name of the Kubernetes secret you created with the registry credentials.

      For example:

      apiVersion: v1
      kind: Pod
      metadata:
        name: oiri-cli
        namespace: oiri
        labels:
          app: oiricli
      spec:
        restartPolicy: OnFailure
        volumes:
          - name: oiripv
            nfs:
              server: 100.69.233.106
              path: /nfs/oiri
          - name: dingpv
            nfs:
              server: 100.69.233.106
              path: /nfs/ding
          - name: workpv
            nfs:
              server: 100.69.233.106
              path: /nfs/k8s
        containers:
        - name: oiricli
          image: container-registry.oracle.com/idm/oiri-cli:12.2.1.4.02106
          volumeMounts:
            - name: oiripv
              mountPath: /app/oiri
            - name: dingpv
              mountPath: /app
            - name: workpv
              mountPath: /app/k8s
          command: ["/bin/bash", "-ec", "tail -f /dev/null"]
        imagePullSecrets:
          - name: regcred
    4. Start the Administration CLI pod using the following command.
      $ kubectl apply -f oiri-cli.yaml

      Note:

      When examples ask you to run a command from within the OIRI-CLI, you should connect to the running pod as described below, and then run the commands as specified.
      $ kubectl exec -n oiri -ti oiri-cli -- /bin/bash
    5. Copy files to the CLI pod.

      Copy the ca.crt and oiri_config files to the OIRI-CLI pod, using the following commands.

      $ OIRINS=oiri
      $ WORKDIR=/work/oiri
      $ kubectl cp $WORKDIR/ca.crt $OIRINS/oiri-cli:/app/k8s
      $ kubectl cp $WORKDIR/oiri_config $OIRINS/oiri-cli:/app/k8s/config

      Connect to the oiri-cli pod and set the file permissions.

       $ kubectl exec -n oiri -ti oiri-cli -- /bin/bash
      $ chmod 400 /app/k8s/config
  5. Set up configuration files by running the following command:
    1. Connect to the oiri-cli pod.
      $ kubectl exec -n oiri -ti oiri-cli -- /bin/bash
    2. Set up the configuration files by using the following command:
      [oiri@1234 scripts]$ ./setupConfFiles.sh -m prod \
        --oigdbhost {OIG_DB_HOST} \
        --oigdbport {OIG_DB_PORT} \
        --oigdbsname {OIG_DB_SERVICE_NAME} \
        --oiridbhost {OIRI_DB_HOST} \
        --oiridbport {OIRI_DB_PORT} \
        --oiridbsname {OIRI_DB_SERVICE} \
        --sparkmode {SPARK_MODE} \
        --dingnamespace {DING_NAMESPACE} \
        --dingimage {DING_IMAGE} \
        --imagepullsecret {IMAGE_PULL_SECRET} \
        --k8scertificatefilename {KUBERNETES_CERTIFICATE_FILE_NAME} \
        --sparkk8smasterurl {KUBERNETES_MASTER_URL} \
        --oigserverurl {OIG_SERVER_URL}

      For example:

      [oiri@1234 scripts]$ ./setupConfFiles.sh -m prod \
        --oigdbhost oigdbhost1.example.com \
        --oigdbport 1234 \
        --oigdbsname oimdb.example.com \
        --oiridbhost OIRI_DB_HOST_IP_ADDRESS \
        --oiridbport 1521 \
        --oiridbsname oiripdb \
        --sparkmode k8s \
        --dingnamespace dingns \
        --dingimage oiri-ding-12.2.1.4:latest \
        --imagepullsecret regcred \
        --k8scertificatefilename ca.crt \
        --sparkk8smasterurl k8s://https://IP_ADDRESS:PORT \
        --oigserverurl http://oigdbhost1.example.com:14000

      Note:

      The ./setupConfFiles.sh command provided in this step is a sample. For information about the other parameters that you can pass with this command, see Parameters Required for Source Configuration and Additional Parameters Required for Data Import.

      The output is:

      INFO: OIG DB as source for ETL is true
      INFO: Setting up /app/data/conf/config.yaml
      INFO: Setting up /app/data/conf/data-ingestion-config.yaml
      INFO: Setting up /app/data/conf/custom-attributes.yaml
      INFO: Setting up /app/oiri/data/conf/application.yaml
      INFO: Setting up /app/oiri/data/conf/authenticationConf.yaml
      INFO: Setting up /app/data/conf/dbconfig.yaml

      Note:

      When running the ./setupConfFiles.sh command with an OIRI DBCS setup, specify the PDB service name instead of the CDB service name for the --oiridbsname parameter.

  6. Verify that the configuration files have been generated by running the following commands:

    Command:

    [oiri@1234 scripts]$ ls /app/data/conf/

    Output:

    config.yaml custom-attributes.yaml data-ingestion-config.yaml dbconfig.yaml

    Command:

    [oiri@1234 scripts]$ ls /app/oiri/data/conf

    Output:

    application.yaml authenticationConf.yaml
  7. Optionally, you can run the following command to update the configuration files:
    $ kubectl exec -n oiri -ti oiri-cli -- /bin/bash
     
    [oiri@1234 scripts]$ ./updateConfig.sh --parameter_name_1 parameter_value_1 ...... --parameter_name_n parameter_value_n
    

    For example, if you want to update the OIRI database host to newhost, then run the following command:

    [oiri@1234 scripts]$ ./updateConfig.sh --oiridbhost newhost

  8. Set up the values.yaml file to be used for Helm chart by running the following command:

    Note:

    See Helm Chart Configuration Values for information about the parameters required for setting up the values.yaml file.

    $ kubectl exec -n oiri -ti oiri-cli -- /bin/bash
     
    [oiri@1234 scripts]$ ./setupValuesYaml.sh \
       --oiriapiimage {OIRI_API_IMAGE} \
       --oirinfsserver {OIRI_NFS_SERVER} \
       --oirinfsstoragepath {OIRI_NFS_PATH} \
       --oirinfsstoragecapacity {OIRI_NFS_STORAGE_CAPACITY} \
       --oiriuiimage {OIRI_UI_IMAGE} \
       --dingimage {DING_IMAGE} \
       --oirinamespace {OIRI_NAMESPACE} \
       --dingnamespace {DING_NAMESPACE} \
       --dingnfsserver {OIRI_NFS_SERVER} \
       --dingnfsstoragepath {DING_NFS_STORAGE_PATH} \
       --dingnfsstoragecapacity {DING_NFS_STORAGE_CAPACITY} \
       --ingresshostname {INGRESS_HOSTNAME} \
       --sslsecretname {SSL_SECRET_NAME}

    For example:

    [oiri@1234 scripts]$ ./setupValuesYaml.sh \
       --oiriapiimage oiri/oiri:latest \
       --oirinfsserver oirihost.example.com \
       --oirinfsstoragepath /nfs/oiri \
       --oirinfsstoragecapacity 10Gi \
       --oiriuiimage oiri/oiri-ui:latest \
       --dingimage oiri/oiri-ding:latest \
       --oirinamespace oirins \
       --dingnamespace dingns \
       --dingnfsserver oirihost.example.com \
       --dingnfsstoragepath /nfs/ding \
       --dingnfsstoragecapacity 10Gi \
       --ingresshostname oirihost.example.com \
       --sslsecretname "oiri-tls-cert"
  9. Verify that values.yaml has been generated by running the following command:
    $ ls /app/k8s/
    

    The output is:

    values.yaml
  10. Optionally, run the following command to update values for Helm:
    $ kubectl exec -n oiri -ti oiri-cli -- /bin/bash
     
    $ ./updateValuesYaml.sh --parameter_name_1 parameter_value_1 ...... --parameter_name_n parameter_value_n

    For example, if you want to update oiriapiimage, then run the following command:

    $ ./updateValuesYaml.sh --oiriapiimage oiri-12.2.1.4:latest

2.7 Parameters Required for Source Configuration

Table 2-1 lists the parameters required for OIRI database, OIG database, OIG server, and ETL source configurations.

Table 2-1 Source Configuration Parameters

Each parameter is listed with its argument and argument shorthand, followed by its description, whether it is mandatory, and its default value.

  • OIG DB Host (--oigdbhost / -oigdbh)
    Host name of the OIG database. This value is required for specifying the OIG database as the source for ETL.
    Mandatory: No. Default: None.

  • OIG DB Port (--oigdbport / -oigdbp)
    Port number of the OIG database. This value is required for specifying the OIG database as the source for ETL.
    Mandatory: No. Default: None.

  • OIG DB Service Name (--oigdbsname / -oigdbs)
    Service name of the OIG database. This value is required for specifying the OIG database as the source for ETL.
    Mandatory: No. Default: None.

  • OIRI DB Host (--oiridbhost / -oiridbh)
    Host name of the OIRI database.
    Mandatory: Yes. Default: None.

  • OIRI DB Port (--oiridbport / -oiridbp)
    Port number of the OIRI database.
    Mandatory: Yes. Default: None.

  • OIRI DB Service (--oiridbsname / -oiridbs)
    Service name of the OIRI database. If you are using an OIRI DBCS setup, specify the PDB service name.
    Mandatory: Yes. Default: None.

  • OIG DB as Source for ETL (--useoigdbforetl / -uoigdb)
    Set this to true to enable the OIG database as the source for ETL.
    Mandatory: No. Default: true.

  • Flat File as Source for ETL (--useflatfileforetl / -uff)
    Set this to true to enable a flat file as the source for ETL.
    Mandatory: No. Default: false.

  • OIG Server URL (--oigserverurl / -oigsu)
    The URL of the OIG server. If the OIG service is in the same Kubernetes cluster as OIRI, this parameter typically takes the format http://<OIM Service Name>.<Namespace>.svc.cluster.local:14000.
    Mandatory: Yes. Default: None.

  • OIG Connection Timeout (--oigconnectiontimeout / -oigct)
    Connect timeout interval, in milliseconds.
    Mandatory: No. Default: 10000.

  • OIG Read Timeout (--oigreadtimeout / -oigrt)
    Read timeout interval, in milliseconds.
    Mandatory: No. Default: 10000.

  • OIG KeepAlive Timeout (--oigkeepalivetimeout / -oigkat)
    Timeout used by the keep-alive strategy. The strategy first tries to apply the host's Keep-Alive policy stated in the response header. If that information is not present, connections are kept alive for the period specified by --oigkeepalivetimeout.
    Mandatory: No. Default: 10.

  • OIG Connection Pool Maximum Number (--oigconnectionpoolmax / -oigcpmx)
    The total number of connections in the OIRI database connection pool.
    Mandatory: No. Default: 15.

  • OIG Connections per Route (--oigconnectionpoolmaxroute / -oigcpmr)
    The maximum number of connections per (any) route.
    Mandatory: No. Default: 15.

  • OIG Proxy URI (--oigproxyuri / -oigpuri)
    OIG proxy URI.
    Mandatory: No.

  • OIG Proxy Username (--oigproxyusername / -oigpu)
    OIG proxy user name.
    Mandatory: No.

  • OIG Proxy Password (--oigproxypassword / -oigpp)
    OIG proxy password.
    Mandatory: No.

  • Key Store Name (--keystorename / -ksn)
    Key store name.
    Mandatory: No.

2.8 Additional Parameters Required for Data Import

Table 2-2 lists the additional parameters required for configuring data import.

Table 2-2 Additional Parameters for Data Import

Parameter Description Mandatory Default Value Argument Argument Shorthand

Spark Event Logs Enabled

This flag enables the event logs that are used by the Spark history server to show job history. The allowed values for this flag are true/false. If set to false, no event logs are generated and you will not be able to see the job history on Spark history server.

No

true

--sparkeventlogsenabled

-sele

Spark Mode

The supported values are local and k8s. If the value of this parameter is local, then the data import is run inside the ding-cli container. Local mode is recommended when you do not want to run the data import in a distributed manner. This can be ideal for small data sets. This mode should not be used for large data sets and when you want to do horizontal scaling. Oracle recommends using k8s mode for large data sets.

No

local

--sparkmode

-sm

Spark K8S Master URL

This must be a URL with the format k8s://<API_SERVER_HOST>:<k8s_API_SERVER_PORT>. You must always specify the port, even if it is the HTTPS port 443. You can find the values of <API_SERVER_HOST> and <k8s_API_SERVER_PORT> in Kube config.

Yes, if the value of the Spark Mode parameter is k8s. If the value is local, then it is not mandatory.

None

--sparkk8smasterurl

-skmu

Ding Namespace

This is the value of the namespace in which you want to start the Spark driver and executor pods for ETL

No

Ding

--dingnamespace

-dns

Ding Image

This is the name of the ding image to be used for spinning up the Spark driver and executor pods. This image contains the logic to run ETL.

Yes, if the value of the Spark Mode parameter is k8s. If the value is local, then it is not mandatory.

None

--dingimage

-di

Number of Executors

This is the number of executor instances to be run in the Kubernetes cluster. These executors are terminated as soon as the ETL jobs are completed.

No

3

--numberofexecutors

-noe

Image Pull Secret

This is the Kubernetes secret name to pull the ding image from the registry. This is required only when using the Docker images from the container registry.

No

None

--imagepullsecret

-ips

Kubernetes Certificate File Name

This is the name of the Kubernetes certificate file to be used for securely communicating with the Kubernetes API server.

Yes, if the value of the Spark Mode parameter is k8s. If the value is local, then it is not mandatory.

None

--k8scertificatefilename

-kcfn

Driver Request Cores

This is to specify the CPU request for the driver pod. The values of this parameter conform to the Kubernetes convention. For information about the meaning of CPU, see Meaning of CPU in Kubernetes documentation.

Example values can be 0.1, 500m, 1.5, or 5, with the definition of CPU units documented in CPU units of Kubernetes documentation.

This takes precedence over spark.driver.cores for specifying the driver pod CPU request, if set.

No

0.5

--driverrequestcores

-drc

Driver Limit Cores

This is to specify a hard CPU limit for the driver pod.

See Resource requests and limits of Pod and Container for information about CPU limit.

No

1

--driverlimitcores

-dlc

Executor Request Cores

This is to specify the CPU request for each executor pod. Values conform to the Kubernetes convention. Example values can be 0.1, 500m, 1.5, and 5, with the definition of CPU units in the Kubernetes documentation.

No

0.5

--executorrequestcores

-erc

Executor Limit Cores

This is to specify a hard CPU limit for each executor pod launched for the Spark application.

No

0.5

--executorlimitcores

-elc

Driver Memory

This is the amount of memory to use for the driver process where SparkContext is initialized, in the same format as JVM memory strings with a size unit suffix ("k", "m", "g" or "t"), for example, 512m, 2g.

No

1g

--drivermemory

-dm

Executor Memory

This is the amount of memory to use per executor process, in the same format as JVM memory strings with a size unit suffix ("k", "m", "g" or "t"), for example 512m, 2g.

No

1g

--executormemory

-em

Driver Memory Overhead

This is the amount of non-heap memory to be allocated per driver process in cluster mode, in MiB unless otherwise specified. This is memory that accounts for VM overheads, interned strings, other native overheads, and so on. This tends to grow with the container size (typically 6 to 10 percent).

No

256m

--drivermemoryoverhead

-dmo

Executor Memory Overhead

This is the amount of additional memory to be allocated per executor process in cluster mode, in MiB unless otherwise specified. This is memory that accounts for VM overheads, interned strings, other native overheads, and so on. This tends to grow with the executor size (typically 6 to 10 percent).

No

256m

--executormemoryoverhead

-emo
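As a rough illustration of the "6 to 10 percent" guidance above, the following sketch derives an overhead value from a given executor memory setting. The 10 percent factor and the 256 MiB floor are assumptions chosen to match the documented default; they are not values computed by OIRI itself.

```shell
# Sketch: deriving a memory overhead value from the "6 to 10 percent of the
# container size" guidance. The 10 percent factor and the 256 MiB floor are
# assumptions chosen to match the documented default; tune for your workload.
executor_memory_mb=1024                     # e.g. an executor memory of 1g

overhead_mb=$(( executor_memory_mb / 10 ))  # 10 percent of the process memory
if [ "$overhead_mb" -lt 256 ]; then
  overhead_mb=256                           # never go below the 256m default
fi
echo "executor memory overhead: ${overhead_mb}m"
```

The resulting value is what you would pass as the Executor Memory Overhead argument.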

2.9 Parameters Required for Authentication Configuration

Table 2-3 lists the parameters required for authentication configuration.

Table 2-3 Authentication Configuration Parameters

Parameter Description Mandatory Default Value Argument Argument Shorthand

Authentication Provider

The authentication provider for authenticating to OIRI.

No

OIG

--authprovider

-ap

Access Token Issuer

The OIG access token issuer.

No

www.example.com

--oigaccesstokenissuer

-oigati

Cookie Domain

The domain attribute specifies the hosts that are allowed to receive the cookie. If unspecified, it defaults to the same host that set the cookie, excluding subdomains.

No

None

--cookiedomain

-cd

OIRI Access Token Issuer

The OIRI access token issuer.

No

www.example.com

--accesstokenissuer

-ati

Cookie Secure Flag

If you are using a non-SSL setup, then set this parameter to false.

No

true

--cookiesecureflag

-csf

Cookie Same Site

Whether or not the cookie should be restricted to the same-site context.

No

Strict

--cookiesamesite

-css

OIRI Access Token Audience

The OIRI access token audience.

No

www.example.com

--accesstokenaudience

-ata

OIRI Access Token Expiration Time in minutes

The OIRI access token expiration in minutes.

No

20

--accesstokenexpirationtime

-atet

OIRI Access Token allowed clock skew

The OIRI access token allowed clock skew.

No

30

--accesstokenallowedclockskew

-atacs

Auth Roles

A user with the role specified as the value of this parameter can log in to OIRI.

No

OrclOIRIRoleEngineer

--authroles

-ar

Idle Session Timeout

The session timeout in minutes if the OIRI application is idle.

No

15

--idlesessiontimeout

-ist

Session Timeout

The OIRI session timeout in minutes.

No

240

--sessiontimeout

-st
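To illustrate how the expiration time and allowed clock skew interact, the following sketch computes the latest time at which a token would typically still be accepted. It assumes the skew value is in seconds; the table states the unit (minutes) only for the expiration time.

```shell
# Sketch: the latest time a token would typically still be accepted, given the
# default expiration (20 minutes) and allowed clock skew (30). The skew unit is
# assumed to be seconds; the table states minutes only for expiration.
issued_at=0                  # issue time in seconds (placeholder epoch)
expiration_minutes=20        # OIRI Access Token Expiration Time default
skew_seconds=30              # OIRI Access Token allowed clock skew default

latest_accepted=$(( issued_at + expiration_minutes * 60 + skew_seconds ))
echo "accepted until $latest_accepted seconds after issue"
```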

2.10 Entity Parameters for Data Import

Table 2-4 lists the user entity parameters that you can update by running the updateDataIngestionConfig.sh command.

Note:

To view all the supported parameters for the updateDataIngestionConfig.sh script, run the following command from the ding-cli pod:

$ ./updateDataIngestionConfig.sh --help

Or:

$ ./updateDataIngestionConfig.sh -h

Table 2-4 User Entity Parameters for Data Import

Parameter Description Default Value Argument Argument Shorthand

Enabled (true/false)

Determines whether the entity is enabled or disabled during data import.

TRUE

--entityusersenabled

-eue

Sync Mode (full/incremental)

For Day 0 data import, use full mode. For Day n data import, use incremental mode. In full mode, all the data is loaded in the OIRI database. In incremental mode, only updated data from the source is loaded in the OIRI database.

full

--entityuserssyncmode

-eusm

Lower Bound

The minimum value for the partitionColumn parameter that is used to determine partition stride.

0

--entityuserslowerbound

-eulb

Upper Bound

The maximum value for the partitionColumn parameter that is used to determine partition stride.

10000

--entityusersupperbound

-euub

Number of Partitions

The number of partitions. This, along with lowerBound (inclusive) and upperBound (exclusive), forms the partition strides for the generated WHERE clause expressions that are used to split the partitionColumn evenly.

3

--entityusersnumberofpartitions

-eunop
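The following sketch shows how the default bounds and partition count above form partition strides, following the Spark JDBC partitioning convention; the exact WHERE clause text that OIRI generates may differ.

```shell
# Sketch: how lowerBound (inclusive), upperBound (exclusive), and the number of
# partitions form partition strides. This follows the Spark JDBC partitioning
# convention; the exact WHERE clause text OIRI generates may differ.
lower=0          # Lower Bound default
upper=10000      # Upper Bound default
partitions=3     # Number of Partitions default

stride=$(( (upper - lower) / partitions ))
echo "stride=$stride"

start=$lower
i=1
while [ "$i" -le "$partitions" ]; do
  if [ "$i" -eq "$partitions" ]; then
    echo "partition $i: partitionColumn >= $start"   # last stride is open-ended
  else
    echo "partition $i: $start <= partitionColumn < $(( start + stride ))"
  fi
  start=$(( start + stride ))
  i=$(( i + 1 ))
done
```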

Table 2-5 lists the application entity parameters for data import that you can update by running the updateDataIngestionConfig.sh command.

Table 2-5 Application Entity Parameters for Data Import

Parameter Description Default Value Argument Argument Shorthand

Enabled (true/false)

Determines whether the application entity is enabled or disabled during data import.

TRUE

--entityapplicationsenabled

-eae

Sync Mode (full/incremental)

For Day 0 data import, use full mode. For Day n data import, use incremental mode. In full mode, all the data is loaded in the OIRI database. In incremental mode, only updated data from the source is loaded in the OIRI database.

full

--entityapplicationssyncmode

-easm

Lower Bound

The minimum value for the partitionColumn parameter that is used to determine partition stride.

0

--entityapplicationslowerbound

-ealb

Upper Bound

The maximum value for the partitionColumn that is used to determine partition stride.

10000

--entityapplicationsupperbound

-eaub

Number of Partitions

The number of partitions. This, along with lowerBound (inclusive) and upperBound (exclusive), forms the partition strides for the generated WHERE clause expressions that are used to split the partitionColumn evenly.

3

--entityapplicationsnumberofpartitions

-eanop

Table 2-6 lists the entitlement entity parameters for data import that you can update by running the updateDataIngestionConfig.sh command.

Table 2-6 Entitlement Entity Parameters for Data Import

Parameter Description Default Value Argument Argument Shorthand

Enabled (true/false)

Determines whether the entity is enabled or disabled during data import.

TRUE

--entityentitlementsenabled

-eee

Sync Mode (full/incremental)

For Day 0 data import, use full mode. For Day n data import, use incremental mode. In full mode, all the data is loaded in the OIRI database. In incremental mode, only updated data from the source is loaded in the OIRI database.

full

--entityentitlementssyncmode

-eesm

Lower Bound

The minimum value for the partitionColumn that is used to determine partition stride.

0

--entityentitlementslowerbound

-eelb

Upper Bound

The maximum value for the partitionColumn that is used to determine partition stride.

10000

--entityentitlementsupperbound

-eeub

Number of Partitions

The number of partitions. This, along with lowerBound (inclusive) and upperBound (exclusive), forms the partition strides for the generated WHERE clause expressions that are used to split the partitionColumn evenly.

3

--entityentitlementsnumberofpartitions

-eenop

Table 2-7 lists the assigned entitlement parameters for data import that you can update by running the updateDataIngestionConfig.sh command.

Table 2-7 Assigned Entitlement Parameters for Data Import

Parameter Description Default Value Argument Argument Shorthand

Enabled (true/false)

Determines whether the entity is enabled or disabled during data import.

TRUE

--entityassignedentitlementsenabled

-eaee

Sync Mode (full/incremental)

For Day 0 data import, use full mode. For Day n data import, use incremental mode. In full mode, all the data is loaded in the OIRI database. In incremental mode, only updated data from the source is loaded in the OIRI database.

full

--entityassignedentitlementssyncmode

-eaesm

Lower Bound

The minimum value for partitionColumn that is used to determine partition stride.

0

--entityassignedentitlementslowerbound

-eaelb

Upper Bound

The maximum value for partitionColumn that is used to determine partition stride.

10000

--entityassignedentitlementsupperbound

-eaeub

Number of Partitions

The number of partitions. This, along with lowerBound (inclusive) and upperBound (exclusive), forms the partition strides for the generated WHERE clause expressions that are used to split the partitionColumn evenly.

3

--entityassignedentitlementsnumberofpartitions

-eaenop

Table 2-8 lists the role entity parameters for data import that you can update by running the updateDataIngestionConfig.sh command.

Table 2-8 Role Entity Parameters for Data Import

Parameter Description Default Value Argument Argument Shorthand

Enabled (true/false)

Determines whether the entity is enabled or disabled during data import.

TRUE

--entityrolesenabled

-ere

Sync Mode (full/incremental)

For Day 0 data import, use full mode. For Day n data import, use incremental mode. In full mode, all the data is loaded in the OIRI database. In incremental mode, only updated data from the source is loaded in the OIRI database.

full

--entityrolessyncmode

-ersm

Lower Bound

The minimum value for partitionColumn that is used to determine partition stride.

0

--entityroleslowerbound

-erlb

Upper Bound

The maximum value for partitionColumn that is used to determine partition stride.

10000

--entityrolesupperbound

-erub

Number of Partitions

The number of partitions. This, along with lowerBound (inclusive) and upperBound (exclusive), forms the partition strides for the generated WHERE clause expressions that are used to split the partitionColumn evenly.

3

--entityrolesnumberofpartitions

-ernop

Table 2-9 lists the role hierarchy entity parameters for data import that you can update by running the updateDataIngestionConfig.sh command.

Table 2-9 Role Hierarchy Entity Parameters for Data Import

Parameter Description Default Value Argument Argument Shorthand

Enabled (true/false)

Determines whether the entity is enabled or disabled during data import.

TRUE

--entityrolehierarchyenabled

-erhe

Sync Mode (full/incremental)

For Day 0 data import, use full mode. For Day n data import, use incremental mode. In full mode, all the data is loaded in the OIRI database. In incremental mode, only updated data from the source is loaded in the OIRI database.

full

--entityrolehierarchysyncmode

-erhsm

Number of Partitions

The number of partitions. This, along with lowerBound (inclusive) and upperBound (exclusive), forms the partition strides for the generated WHERE clause expressions that are used to split the partitionColumn evenly.

3

--entityrolehierarchynumberofpartitions

-erhnop

Table 2-10 lists the role user membership entity parameters for data import that you can update by running the updateDataIngestionConfig.sh command.

Table 2-10 Role User Membership Entity Parameters for Data Import

Parameter Description Default Value Argument Argument Shorthand

Enabled (true/false)

Determines whether the entity is enabled or disabled during data import.

TRUE

--entityroleusermembershipsenabled

-erume

Sync Mode (full/incremental)

For Day 0 data import, use full mode. For Day n data import, use incremental mode. In full mode, all the data is loaded in the OIRI database. In incremental mode, only updated data from the source is loaded in the OIRI database.

full

--entityroleusermembershipssyncmode

-erumsm

Lower Bound

The minimum value for partitionColumn that is used to determine partition stride.

0

--entityroleusermembershipslowerbound

-erumlb

Upper Bound

The maximum value for partitionColumn that is used to determine partition stride.

10000

--entityroleusermembershipsupperbound

-erumub

Number of Partitions

The number of partitions. This, along with lowerBound (inclusive) and upperBound (exclusive), forms the partition strides for the generated WHERE clause expressions that are used to split the partitionColumn evenly.

3

--entityroleusermembershipsnumberofpartitions

-erumnop

Table 2-11 lists the role entitlement composition entity parameters for data import that you can update by running the updateDataIngestionConfig.sh command.

Table 2-11 Role Entitlement Composition Entity Parameters for Data Import

Parameter Description Default Value Argument Argument Shorthand

Enabled (true/false)

Determines whether the entity is enabled or disabled during data import.

TRUE

--entityroleentitlementcompositionsenabled

-erece

Sync Mode (full/incremental)

For Day 0 data import, use full mode. For Day n data import, use incremental mode. In full mode, all the data is loaded in the OIRI database. In incremental mode, only updated data from the source is loaded in the OIRI database.

full

--entityroleentitlementcompositionssyncmode

-erecsm

Lower Bound

The minimum value for partitionColumn that is used to determine partition stride.

0

--entityroleentitlementcompositionslowerbound

-ereclb

Upper Bound

The maximum value for partitionColumn that is used to determine partition stride.

10000

--entityroleentitlementcompositionsupperbound

-erecub

Number of Partitions

The number of partitions. This, along with lowerBound (inclusive) and upperBound (exclusive), forms the partition strides for the generated WHERE clause expressions that are used to split the partitionColumn evenly.

3

--entityroleentitlementcompositionsnumberofpartitions

-erecnop

Table 2-12 lists the account entity parameters for data import that you can update by running the updateDataIngestionConfig.sh command.

Table 2-12 Account Entity Parameters for Data Import

Parameters Description Default Value Argument Argument Shorthand

Enabled (true/false)

Determines whether the entity is enabled or disabled during data import.

TRUE

--entityaccountsenabled

-eace

Sync Mode (full/incremental)

For Day 0 data import, use full mode. For Day n data import, use incremental mode. In full mode, all the data is loaded in the OIRI database. In incremental mode, only updated data from the source is loaded in the OIRI database.

full

--entityaccountssyncmode

-eacsm

Lower Bound

The minimum value for partitionColumn that is used to determine partition stride.

0

--entityaccountslowerbound

-eaclb

Upper Bound

The maximum value for partitionColumn that is used to determine partition stride.

10000

--entityaccountsupperbound

-eacub

Number of Partitions

The number of partitions. This, along with lowerBound (inclusive) and upperBound (exclusive), forms the partition strides for the generated WHERE clause expressions that are used to split the partitionColumn evenly.

3

--entityaccountsnumberofpartitions

-eacnop

2.11 Flat File Parameters for Data Import

Table 2-13 lists the flat file parameters for data import.

Note:

To view all the parameters for the updateDataIngestionConfig.sh script that you can modify, run the following command from the ding-cli pod:

$ ./updateDataIngestionConfig.sh --help

Or:

$ ./updateDataIngestionConfig.sh -h

Table 2-13 Flat File Parameters for Data Import

Parameter Description Default Value Argument Argument Shorthand

Flat File Enabled

Determines whether data import is performed against flat files. The value can be true or false.

false

--useflatfileforetl

-uff

Flat File Format

The format of the flat file, which is CSV.

csv

--flatfileformat

-fff

Flat File Data Separator

The data separator in the rows of the flat files, which can be comma (,), colon (:), or vertical bar (|).

,

--flatfileseparator

-ffs

Flat File Time Stamp Format

The timestamp format in the flat files.

yyyy-MM-dd

--flatfiletimestamp

-fftsf
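A minimal flat file matching the defaults above (CSV format, comma separator, yyyy-MM-dd timestamps) might look as follows. The column names are hypothetical; consult the OIRI flat file schema for the actual layout expected by data import.

```shell
# Sketch: a minimal flat file matching the defaults above (csv format, comma
# separator, yyyy-MM-dd timestamps). The column names are hypothetical; consult
# the OIRI flat file schema for the actual layout expected by data import.
cat > /tmp/users_sample.csv <<'EOF'
user_id,user_name,create_date
1,jdoe,2021-01-11
2,asmith,2021-02-05
EOF

head -1 /tmp/users_sample.csv   # shows the comma-separated header row
```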

2.12 Helm Chart Configuration Values

Table 2-14 lists the parameters required for setting up the Values.yaml file to be used for Helm chart.

Table 2-14 Helm Chart Configuration Parameters

Parameter Name Description Mandatory Default Value Argument Argument Shorthand

OIRI Namespace

The name of the Kubernetes namespace on which you want to install OIRI. This namespace contains the installation of OIRI API pods and OIRI UI pods.

No

oiri

--oirinamespace

-ons

OIRI Replicas

The number of OIRI API pods to be run in the OIRI namespace.

No

1

--oirireplicas

-or

OIRI API Image

Name of the OIRI API Image. For example:

oiri-12.2.1.4:<TAG>

Yes

None

--oiriapiimage

-oai

OIRI NFS Server

NFS Server to be used for OIRI. This must be available across the Kubernetes nodes.

Yes

None

--oirinfsserver

-onfs

OIRI NFS Storage Path

The path on the NFS server that can be accessed by OIRI API and UI Pods, for example, /nfs/oiri.

Yes

None

--oirinfsstoragepath

-onfsp

OIRI NFS Storage Capacity

The capacity of the NFS Server. See the Kubernetes Resource Model for information about the units expected by capacity, for example, 10Gi.

Yes

None

--oirinfsstoragecapacity

-onfsc

OIRI UI Image

Name of the OIRI UI Image. For example:

oiri-ui-12.2.1.4:<TAG>

Yes

None

--oiriuiimage

-oui

OIRI UI Replicas

Number of OIRI UI pods to be run in the OIRI Namespace.

No

1

--oiriuireplicas

-our

DING Namespace

Name of the Kubernetes namespace on which you want to install the Spark Kubernetes history server. This namespace contains the installation of Spark history server and Spark cluster, including drivers and executors, for ETL.

No

ding

--dingnamespace

-dns

Spark History Server Replicas

Number of Spark history server pods to be run in the DING namespace.

No

1

--sparkhistoryserverreplicas

-shsr

DING NFS Server

NFS server to be used for DING. This must be available across the Kubernetes nodes.

Yes

None

--dingnfsserver

-dnfs

DING NFS Storage Path

The path on the NFS server that can be accessed by the Spark history server, driver, and executors in the spark cluster. For example:

/nfs/ding/

Yes

None

--dingnfsstoragepath

-dnfsp

DING NFS Storage Capacity

The capacity of the NFS Server. See the Kubernetes Resource Model for information about the units expected by capacity, for example, 10Gi.

Yes

None

--dingnfsstoragecapacity

-dnfsc

DING Image

Name of the data ingestion image to be used by the Spark history server, executor, and driver pods. For example:

oiri-ding-12.2.1.4:<TAG>

Yes

None

--dingimage

-di

Image Pull Secret

Name of the Kubernetes secret to pull the image from the registry.

No

regcred

--imagepullsecret

-ips

Ingress Enabled

Whether ingress is enabled. The default value, true, creates an ingress resource and an ingress controller. Setting this value to false prevents creation of an ingress controller.

No

true

--ingressenabled

-ie

Ingress Class Name

The ingress class name. The default value is nginx. If you want to use an existing ingress controller, then set this parameter to the class name managed by that controller.

No

nginx

--ingressclass

-ic

Ingress Host Name

Ingress host name

Yes

None

--ingresshostname

-ih

Install Service Account Name

Service Account Name that is used to create the Kubernetes configuration when installing OIRI.

No

oiri-service-account

--installserviceaccount

-isa

Nginx-Ingress Type

The type of ingress you want to create to access the OIRI API and OIRI UI. This can be NodePort or LoadBalancer. This release of OIRI supports only the NodePort ingress type.

No

NodePort

--ingresstype

-it

Nginx-Ingress NodePort

The port number of the ingress. Make sure the port provided is available and can be used.

No

30305

--ingressnodeport

-inp

Nginx-Ingress SSL enabled

Set this parameter to configure SSL.

Yes

true

--sslenabled

-ssle

Nginx-Ingress TLS secret

This is the TLS secret in the default namespace. This is required when SSL is enabled. This should match the name you provide while creating a TLS secret using kubectl in step 2b of Installing the OIRI Helm Chart.

No (required only if SSL is enabled)

None

--sslsecretname

-sslsn

Nginx-Ingress Replica Count

Replica count for nginx controller.

No

1

--nginxreplicas

-nr
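As an illustration, the mandatory parameters in Table 2-14 could be assembled into a single invocation. The image tags, NFS host, and ingress host name below are placeholders, and it is assumed that these arguments are passed to the setupValuesYaml.sh script referenced later in this chapter.

```shell
# Sketch: assembling the mandatory Table 2-14 parameters into one invocation.
# Image tags, the NFS host, and the ingress host name are placeholders, and it
# is assumed these arguments are accepted by the setupValuesYaml.sh script.
args="--oiriapiimage oiri-12.2.1.4:latest"
args="$args --oiriuiimage oiri-ui-12.2.1.4:latest"
args="$args --dingimage oiri-ding-12.2.1.4:latest"
args="$args --oirinfsserver nfs.example.com"
args="$args --oirinfsstoragepath /nfs/oiri --oirinfsstoragecapacity 10Gi"
args="$args --dingnfsserver nfs.example.com"
args="$args --dingnfsstoragepath /nfs/ding --dingnfsstoragecapacity 10Gi"
args="$args --ingresshostname oiri.example.com --sslenabled true"
echo "./setupValuesYaml.sh $args"
```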

2.13 Creating the Wallets

To create the OIRI and DING wallets:
  1. Connect to the oiri-cli pod.
    $ kubectl exec -n oiri -ti oiri-cli -- /bin/bash
    
  2. Generate a keystore by running the following command:
    [oiri@1234 scripts]$ keytool -genkeypair \
      -alias <OIRI_JWT_KEY_ALIAS> \
      -keypass <OIRI_KEYSTORE_PASSWORD> \
      -keyalg RSA \
      -keystore /app/oiri/data/keystore/keystore.jks \
      -storetype pkcs12 \
      -storepass <OIRI_KEYSTORE_PASSWORD>

    Note:

    The keypass and storepass passwords must be the same.

    The following is a sample command:

    $ keytool -genkeypair -alias oiri -keypass <PASSWORD> -keyalg RSA -keystore /app/oiri/data/keystore/keystore.jks -storepass <PASSWORD> -storetype pkcs12

    The output is:

    What is your first and last name?
      [Unknown]:
    What is the name of your organizational unit?
      [Unknown]:
    What is the name of your organization?
      [Unknown]:
    What is the name of your City or Locality?
      [Unknown]:
    What is the name of your State or Province?
      [Unknown]:
    What is the two-letter country code for this unit?
      [Unknown]:
    Is CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct?
      [no]:  yes
  3. Exit the pod.
  4. Import OIG certificate in the keystore. To do so:
    1. Export OIG certificate for signature verification by running the following command:
      $ keytool -export -rfc -alias xell -file xell.pem -keystore default-keystore.jks

      The default-keystore.jks is located at DOMAIN_HOME/config/fmwconfig. The certificate you are exporting here protects the OIG REST API. It is not the same as the OIG server certificate.

    2. Copy the xell.pem file exported from the OIG keystore to the /nfs/oiri/data/keystore/ directory.
    3. Import the certificate into OIRI keystore by running the following command from the oiri-cli pod:
      $ kubectl exec -n oiri -ti oiri-cli -- /bin/bash
      [oiri@1234 scripts]$ keytool -import \
        -alias xell \
        -file /app/oiri/data/keystore/xell.pem \
        -keystore /app/oiri/data/keystore/keystore.jks
  5. To integrate OIRI with OIG in SSL mode, import OIG SSL certificate chain into OIRI. To do so:
    1. Download the OIG SSL certificate chain from OIG server by running the following command:

      $ echo -n | openssl s_client -connect ${host}:${port} | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > oigsslcert.cer

      For example:

      $ echo -n | openssl s_client -connect oim.example.com:123 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > oigsslcert.cer
    2. Copy the certificate file downloaded from the OIG keystore to the /nfs/oiri/data/keystore/ directory.
    3. Import the certificate chain into OIRI keystore by running the following command from the oiri-cli pod:

      $ kubectl exec -n oiri -ti oiri-cli -- /bin/bash
      [oiri@1234 scripts]$ keytool -import -alias oigsslcert -file oigsslcert.cer -keystore /app/oiri/data/keystore/keystore.jks

      When prompted, enter the same keystore password that you provided in step 2.

  6. To create the wallets, connect to the oiri-cli pod, and run the following command:
    [oiri@1234 scripts]$ oiri-cli --config=/app/data/conf/config.yaml wallet create

    Enter the following information when prompted:

    • OIRI database username prefix and password
    • OIG database username and password
    • OIG service account username and password
    • OIRI keystore password
    • OIRI JWT key alias and password

    You can either provide all the parameter values at the prompts or pass all of them on the command line, as follows:

    [oiri@1234 scripts]$ oiri-cli --config=/app/data/conf/config.yaml wallet create --oiridbuprefix OIRI_DB_PREFIX --oiridbp OIRI_DB_PASSWORD --oigdbu OIG_DB_USERNAME --oigdbp OIG_DB_PASSWORD --oigsau OIG_SERVICE_ACCOUNT_USERNAME --oigsap OIG_SERVICE_ACCOUNT_PASSWORD --oiriksp OIRI_KEYSTORE_PASSWORD --oirijka OIRI_JWT_KEY_ALIAS --oirijkp OIRI_JWT_KEY_PASSWORD

    The output is as shown:

    Setting up wallet in [/app/data/wallet]
    DING Wallet created.
    Setting up wallet in [/app/oiri/data/wallet]
    OIRI Wallet created.
  7. Verify that the OIRI and Ding wallets have been created by running the following commands:

    Command:

    [oiri@1234 scripts]$ ls /app/data/wallet

    Output:

    cwallet.sso cwallet.sso.lck

    Command:

    $ ls /app/oiri/data/wallet

    Output:

    cwallet.sso cwallet.sso.lck

2.14 Creating and Seeding the OIRI Database Schema

To create and seed the OIRI database schema:
  1. Connect to the oiri-cli container.
    $ kubectl exec -n oiri -ti oiri-cli -- /bin/bash
  2. Create the database user schema by running the following command:
    [oiri@1234 scripts]$ oiri-cli --config=/app/data/conf/config.yaml schema create /app/data/conf/dbconfig.yaml

    After you provide the SYS password when prompted, the output is:

    Creating the schema ci_oiri
    CREATING OIRI SCHEMA ............
    ===================================================
    DB USER ci_oiri has been successfully created
  3. Seed the schema by running the following command:
    $ oiri-cli --config=/app/data/conf/config.yaml schema migrate /app/data/conf/dbconfig.yaml

    The output is:

    Migrating the OIRI schema
    Migrating OIRI SCHEMA ............
    ===================================================
    ............
    OIRI Schema has been successfully migrated

    Note:

    The Schema Create command creates the permanent tablespace and temporary tablespace by using the tablespaceConfiguration parameter in the dbconfig.yaml file. By default, only one DATAFILE for the permanent tablespace and one TEMPFILE for the temporary tablespace are created. Because there is a limit on the file size, perform regular checks on the database tablespaces, and add additional datafiles when required.

2.15 Verifying and Updating the Wallet

To verify the wallet and update the credentials in the wallet:
  1. Verify the wallets by running the following command:

    Note:

    This command verifies the wallet locations, OIRI database connection, OIG database connection, keystore entries, and OIG server connection by using the service account.

    $ ./verifyWallet.sh

    The output is:

    Verifying Wallets. Wallet locations and entries will be validated
    DING Wallet is Valid.
    OIRI Wallet is Valid.
    OIRI DB Connection is Valid.
    OIG DB Connection is Valid.
    KeyStore location and entries are Valid.
    OIG Server Connection is Valid.
    SUCCESS: Wallet locations and entries are valid.
  2. Optionally, run the following command to update credentials in the wallet:
    $ oiri-cli --config=/app/data/conf/config.yaml wallet update

    The output is as shown with sample values:

    Please enter the DB name, credentials of which need to be updated. Supported values are OIGSA/OIGDB/OIRIDB/OIRIKS/OIRIJWT: OIRIDB
    Please enter OIRI DB UserName: oiri_core
    Please enter OIRI DB password: <OIRI_DB_PASSWORD>
    Updating OIRI DB Credentials in OIRI wallet
    Updating DB wallet in [/app/oiri/data/dbwallet]
    OIRI Wallet updated.
    Updating OIRI DB Credentials in DING wallet
    Updating DB wallet in [/app/data/dbwallet]
    DING Wallet updated.

    The supported modes prompted in the output are:

    • OIGSA: Use this mode to update the OIG service account username and password.

    • OIGDB: Use this mode to update the OIG database username and password.

    • OIRIDB: Use this mode to update the OIRI database schema prefix and password.

    • OIRIKS: Use this mode to update the OIRI keystore password.

    • OIRIJWT: Use this mode to update the OIRI JWT key alias and password.

2.16 Installing the OIRI Helm Chart

To create the OIRI Helm chart:
  1. Create Image Pull Secrets for the oiri and ding namespaces created in Step 1.

    Command:

    $ kubectl create secret docker-registry regcred --docker-server=<registry_server_url> --docker-username=<registry_user> --docker-password=<registry_password> -n <oirins>
    
    $ kubectl create secret docker-registry regcred --docker-server=<registry_server_url> --docker-username=<registry_user> --docker-password=<registry_password> -n <dingns>
  2. Optionally, perform the following steps (2a and 2b) if you want to enable SSL from a Docker container host machine that is outside the oiri-cli container:

    Note:

    Skip this step if you have specified false as the value of --sslenabled while running the setupValuesYaml.sh script.

    1. Create a certificate. You can skip this step if you already have a key and a certificate.
      $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=<HOSTNAME>"
      

      For example:

      $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=oiri.example.com"

      The output is:

      Generating a 2048 bit RSA private key
      ..+++
      ............................+++
      writing new private key to 'tls.key'
      -----
    2. Create the TLS secret by running the following command:
      $ kubectl create secret tls oiri-tls-cert --key="tls.key" --cert="tls.crt"

      The output is:

      secret/oiri-tls-cert created
  3. Install the chart by running the following command:
    $ kubectl exec -n oiri -ti oiri-cli -- /bin/bash
    [oiri@1234 scripts] helm install oiri /helm/oiri -f /app/k8s/values.yaml -n <oirinamespace>

    The output is:

    NAME: oiri
    LAST DEPLOYED: Mon Jan 11 15:14:22 2021
    NAMESPACE: default
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    Please be patient while the chart installs. Pod may not be in running status.
     
    To check the status of the pod, run following command.
    Pods READY state must be 1/1 and status RUNNING
     
        kubectl get pods --namespace oiri
     
        kubectl get pods --namespace ding
     
    Access OIRI Service by using following URL in your browser.
     
        https://IP_ADDRESS:PORT/
     
    Access OIRI UI by using following URL in your browser.
     
        https://IP_ADDRESS:PORT/oiri/ui/v1/console
     
    Admins can access DING History Server by port forwarding the ding-history pod through kubectl.
     
      kubectl port-forward <pod_name> <desired_port>:18080 -n ding
     
    Inside the DING-CLI, use following commands to start data ingestion
     
        ding-cli --config=/app/data/conf/config.yaml data-ingestion start /app/data/conf/data-ingestion-config.yaml

    Note:

    The log files for installation and configuration are found in the following locations:

    • For oiri-cli: The following log files are in the /nfs/oiri/data/logs/ directory:

      • oiri-service-audit.log: This file contains the audit information of the OIRI API Server.
      • oiri-service.log: This file contains the OIRI API Server logs. The logs are enabled in WARN mode.
    • For oiri-ding: The following log files are in the /nfs/ding/data/logs/ directory:

      • oiri-ding-access-xxx.log: This file contains the access information of the data ingestion container.
      • oiri-ding-cli-xxx.log: This file contains the logs of the data ingestion CLI.

If you want to upgrade the Helm chart after you have updated the values in the values.yaml file, then run the updateValuesYaml.sh script from the oiri-cli container, as described in Upgrade the OIRI Image in Deploy Oracle Identity Role Intelligence on Kubernetes.

If you want to change the data load configuration before running the data load process, then see Importing Entity Data to OIRI Database.

2.17 Uninstalling the OIRI Helm Chart (Optional)

While installing the OIRI Helm chart, if you encounter any issue, fix the issue, uninstall the OIRI Helm chart, and then reinstall it. If you do not uninstall the OIRI Helm chart first, the reinstallation fails with errors.

To uninstall the OIRI Helm chart, run the following command:

$ helm uninstall oiri -n <oirinamespace>

The output is:

release "oiri" uninstalled

2.18 Starting the Data Load Process

To start the data load process:
  1. Create the ding-cli.yaml file with the following content.
    apiVersion: v1
    kind: Pod
    metadata:
      name: oiri-ding-cli
      namespace: <DINGNS>
      labels:
        app: dingcli
    spec:  
      serviceAccount: ding-sa
      restartPolicy: OnFailure
      volumes:
        - name: oiripv
          nfs:
            server: <PVSERVER>
            path: <OIRI_SHARE>
        - name: dingpv
          nfs:
            server: <PVSERVER>
            path: <OIRI_DING_SHARE>
        - name: workpv
          nfs:
            server: <PVSERVER>
            path: <OIRI_WORK_SHARE>
      containers:
      - name: oiricli
        image: <OIRI_DING_IMAGE>:<OIRIDING_VER>
        volumeMounts:
          - name: oiripv
            mountPath: /app/oiri
          - name: dingpv
            mountPath: /app
          - name: workpv
            mountPath: /app/k8s
        command: ["/bin/bash","-ec", "tail -f /dev/null"]
      imagePullSecrets:
        - name: regcred

    where:

    • DINGNS is the name of the namespace you are using to hold the DING objects.
    • PVSERVER is the IP address of the NFS server hosting the persistent volumes.
    • OIRI_SHARE is the NFS mount location for the OIRI persistent volume.
    • OIRI_DING_SHARE is the NFS mount location for the OIRI DING persistent volume.
    • OIRI_WORK_SHARE is the NFS mount location for the OIRI work persistent volume.
    • OIRI_DING_IMAGE is the name of the OIRI DING image file. If you are using a container registry, the name is prefixed with the container registry name. For example: iad.ocir.io/mytenancy/idm/oiri-ding.
    • OIRIDING_VER is the version of the image you want to use. For example: 12.2.1.4.02106.
    • imagePullSecrets is required only if you are using a container registry, and regcred is the name of the Kubernetes secret you created with the registry credentials.

    For example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: oiri-ding-cli
      namespace: ding
      labels:
        app: dingcli
    spec:
      serviceAccount: ding-sa
      restartPolicy: OnFailure
      volumes:
        - name: oiripv
          nfs:
            server: 100.69.233.106
            path: /nfs/oiri
        - name: dingpv
          nfs:
            server: 100.69.233.106
            path: /nfs/ding
        - name: workpv
          nfs:
            server: 100.69.233.106
            path: /nfs/k8s
      containers:
      - name: oiricli
        image: iad.ocir.io/mytenancy/idm/oiri-ding:12.2.1.4.02106
        volumeMounts:
          - name: oiripv
            mountPath: /app/oiri
          - name: dingpv
            mountPath: /app
          - name: workpv
            mountPath: /app/k8s
        command: ["/bin/bash", "-ec", "tail -f /dev/null"]
      imagePullSecrets:
        - name: regcred
  2. Start the DING Administration CLI using the following command.
    $ kubectl apply -f ding-cli.yaml
  3. Connect to the DING pod.
    $ kubectl exec -n ding -ti oiri-ding-cli -- /bin/bash
  4. Copy the certificate to the DING pod using the command:
    $ kubectl cp <WORKDIR>/ca.crt <DINGNS>/oiri-ding-cli:/app/ca.crt

    For example:

    $ kubectl cp $WORKDIR/ca.crt ding/oiri-ding-cli:/app/ca.crt
  5. Verify the data load configuration by running the following command:
    $ ding-cli --config=/app/data/conf/config.yaml data-ingestion verify /app/data/conf/data-ingestion-config.yaml

    Note:

    The data-ingestion verify command works with the service URL specified in the data-ingestion-config.yaml file, but throws the following error if an SID is specified:

    oracle.net.ns.NetException: Listener refused the connection with the following error:
    ORA-12514, TNS:listener does not currently know of service requested in connect descriptor
    
    at oracle.net.ns.NSProtocolNIO.negotiateConnection(NSProtocolNIO.java:284)
    at oracle.net.ns.NSProtocol.connect(NSProtocol.java:340)
    at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1596)
    at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:588)

    This is because OIRI supports a service name, rather than an SID, for connecting to the database. If you want to use an SID in your environment, edit the data-ingestion-config.yaml file, and change the URL to the following format:

    url: jdbc:oracle:thin:@DBHOSTNAME:DBHOSTPORT:DBSID
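    For comparison, the two JDBC URL forms differ only in the separator before the final element: a slash for a service name and a colon for an SID. The host, port, service name, and SID values in this sketch are placeholders, not values from your environment:

    ```yaml
    # Service name format (supported by default): slash before the service name.
    url: jdbc:oracle:thin:@dbhost.example.com:1521/oiripdb.example.com

    # SID format: colon before the SID.
    url: jdbc:oracle:thin:@dbhost.example.com:1521:ORCL
    ```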
  6. If you want to load custom attributes as part of the Day 0 data load, then configure the schema definition for the custom attributes.

    See Importing Custom Attributes for information about configuring data import for custom attributes.

  7. If you want to update the existing data load configuration, you can use the following command:
    $ kubectl exec -n ding -ti oiri-ding-cli -- /bin/bash
     
    $ ./updateDataIngestionConfig.sh --parameter_name_1 parameter_value_1 --parameter_name_2 parameter_value_2 ...... --parameter_name_n parameter_value_n

    For example, if you want to update useflatfileforetl to true and useoigdbforetl to false, then run the following command:

    $ ./updateDataIngestionConfig.sh --useoigdbforetl false --useflatfileforetl true

    Note:

    To view all the parameters for the updateDataIngestionConfig.sh script, run the following command:

    ./updateDataIngestionConfig.sh --help

    Or:

    ./updateDataIngestionConfig.sh -h

    See Entity Parameters for Data Import for information about the entity parameters that you can update by running the updateDataIngestionConfig.sh script.

    See Flat File Parameters for Data Import for information about the flat file parameters that you can update by running the updateDataIngestionConfig.sh script.

2.19 Upgrading the Container Image

To upgrade the OIRI image to a newer version, complete the steps detailed in this section:
  1. Update the oiri-cli.yaml and ding-cli.yaml files with the new image names, and then apply them.
    $ kubectl apply -f oiri-cli.yaml
    $ kubectl apply -f ding-cli.yaml
  2. Connect to the oiri-cli pod.
    $ kubectl exec -n oiri -ti oiri-cli -- /bin/bash
  3. Update the images.
    $ ./updateValuesYaml.sh \
    --oiriapiimage {OIRI_NEW_IMAGE} \
    --oiriuiimage {OIRI_UI_NEW_IMAGE} \
    --dingimage {DING_NEW_IMAGE}
    $ ./updateConfig.sh \
    --dingimage {DING_NEW_IMAGE}
  4. Upgrade the Helm Chart.
    $ helm upgrade oiri /helm/oiri -f /app/k8s/values.yaml -n oiri
  5. If the OIRI schema has changed, migrate the schema by running the following command:
    $ oiri-cli --config=/app/data/conf/config.yaml schema migrate /app/data/conf/dbconfig.yaml
  6. If you are upgrading from the April 2021 release, perform the following one-time step to create the registry credential secret in the OIRI and DING namespaces.
    $ kubectl create secret docker-registry regcred --docker-server=<registry_server_url> --docker-username=<registry_user> --docker-password=<registry_password> -n <oirins>
    $ kubectl create secret docker-registry regcred --docker-server=<registry_server_url> --docker-username=<registry_user> --docker-password=<registry_password> -n <dingns>