20 Installing and Configuring Oracle Identity Role Intelligence

Oracle Identity Role Intelligence (OIRI) authenticates using users and groups defined in Oracle Identity Governance. Therefore, you have to install and configure Oracle Identity Governance first.

See Configuring Oracle Identity Governance Using WDT. Unlike the traditional Oracle Identity and Access Management products, Oracle Identity Role Intelligence is deployed as a series of microservices.

In this release, Oracle uses a standalone container image to install and configure OIRI. The container image is started manually in the Kubernetes cluster.

This chapter includes the following topics:

About Oracle Identity Role Intelligence

OIRI is used by system administrators to perform role mining operations.

It consists of the following components:
  • Oracle Identity Role Intelligence Management Container (OIRI-CLI)
  • Oracle Identity Role Intelligence (OIRI) Microservice
  • Oracle Identity Role Intelligence User Interface (OIRI-UI) Microservice
  • Oracle Identity Role Intelligence Data Ingester (OIRI-DING) Microservice

OIRI uses a dedicated database because it processes large amounts of data in a way that differs from the OAM/OIG data stores. It uses a data warehouse model rather than an online transaction processing (OLTP) model.

Variables Used in this Chapter

The later sections of this chapter provide instructions to create a number of files. These sample files contain variables which you need to substitute with values applicable to your deployment.

Variables are formatted as <VARIABLE_NAME>. The following table provides the values you should set for each of these variables.
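If you prefer to script these substitutions rather than edit each file by hand, a simple sed command can replace the placeholders. The following is a sketch only; the file names and the two variables shown are illustrative, and you would extend the pattern with the remaining variables from the table:

```shell
# Replace <VARIABLE_NAME> placeholders in a sample file with your
# deployment-specific values. File names and values here are examples only.
sed -e 's|<OIRINS>|oirins|g' \
    -e 's|<DINGNS>|dingns|g' \
    template.yaml > output.yaml
```

Using `|` as the sed delimiter avoids conflicts with the `/` characters that appear in paths and registry addresses.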

Table 20-1 The Variables to be Changed

Variable Sample Value Description

<REGISTRY_ADDRESS>

iad.ocir.io/<mytenancy>

The location of the registry.

<REGISTRY_SECRET_NAME>

regcred

The name of the Kubernetes secret containing the container registry credentials. Required only if you are pulling images directly from a container registry. See Creating Container Registry Secrets.

<REG_USER>

mytenancy/oracleidentitycloudservice/myemail@email.com

The name of the user you use to log in to the registry.

<REG_PWD>

<password>

The registry user password.

<OIRI_CLI_REPOSITORY>

oracle/oiri-cli

local/oracle/oiri-cli

container-registry.oracle.com/middleware/oiri-cli_cpu

<REGISTRY_ADDRESS>/oracle/oiri-cli

The name of the OIRI CLI software repository.

If you have downloaded and staged a container image, this value will be: oracle/oiri-cli. If you use OLCNE, the value will be local/oracle/oiri-cli.

If you use the Oracle container registry, the value will be container-registry.oracle.com/middleware/oiri-cli_cpu.

If you use a container registry, the value will be the name of the registry with the product name: <REGISTRY_ADDRESS>/oracle/oiri-cli.

<OIRICLI_VER>

12.2.1.4.220429

The version of the image you want to use.

<OIRI_REPOSITORY>

oracle/oiri

local/oracle/oiri

container-registry.oracle.com/middleware/oiri_cpu

<REGISTRY_ADDRESS>/oracle/oiri

The name of the OIRI software repository.

If you have downloaded and staged a container image, this value will be: oracle/oiri. If you use OLCNE, the value will be local/oracle/oiri.

If you use the Oracle container registry, the value will be container-registry.oracle.com/middleware/oiri_cpu.

If you use a container registry, the value will be the name of the registry with the product name: <REGISTRY_ADDRESS>/oracle/oiri.

<OIRI_VER>

12.2.1.4.02106

The version of the OIRI image you want to use.

<OIRI_DING_REPOSITORY>

oracle/oiri-ding

local/oracle/oiri-ding

container-registry.oracle.com/middleware/oiri-ding_cpu

<REGISTRY_ADDRESS>/oracle/oiri-ding

The name of the OIRI DING software repository.

If you have downloaded and staged a container image, this value will be: oracle/oiri-ding. If you use OLCNE, the value will be local/oracle/oiri-ding.

If you use the Oracle container registry, the value will be container-registry.oracle.com/middleware/oiri-ding_cpu.

If you use a container registry, the value will be the name of the registry with the product name: <REGISTRY_ADDRESS>/oracle/oiri-ding.

<OIRIDING_VER>

12.2.1.4.02106

The version of the image you want to use.

<OIRI_UI_REPOSITORY>

oracle/oiri-ui

local/oracle/oiri-ui

container-registry.oracle.com/middleware/oiri-ui_cpu

<REGISTRY_ADDRESS>/oracle/oiri-ui

The name of the OIRI UI software repository.

If you have downloaded and staged a container image, this value will be: oracle/oiri-ui. If you use OLCNE, the value will be local/oracle/oiri-ui.

If you use the Oracle container registry, the value will be container-registry.oracle.com/middleware/oiri-ui_cpu.

If you use a container registry, the value will be the name of the registry with the product name: <REGISTRY_ADDRESS>/oracle/oiri-ui.

<OIRIUI_VER>

12.2.1.4.02106

The version of the image you want to use.

<PVSERVER>

1.1.1.1

The name or IP address of the NFS server hosting the persistent volumes.

<OIRINS>

oirins

The name of the OIRI namespace you are using to hold the OIRI objects.

<DINGNS>

dingns

The name of the OIRI DING namespace you are using to hold the DING objects.

<WORKDIR>

/workdir/OIRI/

The working directory for OIRI.

<OIRI_SHARE>

/exports/IAMPVS/oiripv

The NFS mount location for the OIRI persistent volume.

<OIRI_SHARE_SIZE>

10Gi

The size of the NFS share.

<OIRI_DING_SHARE>

/exports/IAMPVS/dingpv

The NFS mount location for the OIRI Ding persistent volume.

<OIRI_DING_SHARE_SIZE>

10Gi

The size of the NFS share.

<OIRI_WORK_SHARE>

/exports/IAMPVS/workpv

The NFS mount location for the OIRI Work persistent volume.

<OIRI_NFS_SHARE>

/exports/iampvs/oiripv

The NFS share mount point for OIRI persistent volume.

<OIG_DB_SCAN>

db-scan.example.com

Database host of the OIG database. Use the SCAN address if you use a RAC database.

<OIG_DB_LISTENER>

1521

Listener port of the Oracle Identity Governance database.

<OIG_DB_SERVICE>

oig_s.example.com

Name of the database service for the OIG database.

<OIRI_DB_SCAN>

db-scan.example.com

Database host of the OIRI database. Use the SCAN address if using a RAC database.

<OIRI_DB_SYS_PWD>

<password>

The OIRI database sys password.

<OIRI_DB_LISTENER>

1521

Listener port of the OIRI database.

<OIRI_DB_SERVICE>

oiri_s.example.com

Name of the database service for the OIRI database.

<OIRI_RCU_PREFIX>

oiri

The database schema prefix you want to assign to the OIRI database schema objects.

<OIRI_SCHEMA_PWD>

myoigschemapwd

The password you want to assign to the OIRI schemas.

<OIG_RCU_PREFIX>

IGD_OIM

The prefix you used when you created the OIG schemas.

<OIG_SCHEMA_PWD>

myoigschemapwd

The password associated with the <OIG_RCU_PREFIX>_OIM schema.

<USE_INGRESS>

false

If you want the installation to create an Ingress controller in the OIRI namespace, set this value to true. If you are using your own Ingress controller or NodePort, set this value to false.

<OIRI_INGRESS_HOST>

oiri.example.com

igdadmin.example.com

k8workers.example.com

Set this value to the OIRI load balancer virtual host name. If you use OIRI standalone, this value will be the name of a virtual host that you have defined specifically for OIRI. If you deploy this host as part of an integrated Oracle Identity and Access Management deployment (containers or otherwise), you can use the existing virtual host for your Oracle Identity Governance administration operations.

<OIRI_REPLICAS>

2

The number of OIRI servers to start. A minimum of two servers is required for HA.

<OIRI_UI_REPLICAS>

2

The number of OIRI UI servers to start. A minimum of two servers is required for HA.

<OIRI_DING_REPLICAS>

2

The number of DING servers to start. A minimum of two servers is required for HA.

<OIRI_KEYSTORE_PASSWORD>

mykeystorepwd

The password you used when you imported the OIG REST certificate. See Importing the OIG REST Certificate into OIRI.

<OIRI_SERVICE_USER>

oirisvc

The user name of the service you have created. See Creating the OIRI Service User.

<OIRI_SERVICE_PWD>

myservicepwd

The password assigned to the <OIRI_SERVICE_USER> account.

<OIRI_K8>

30305

The Kubernetes service port of the OIRI NodePort service.

<OIRI_UI_K8>

30306

The Kubernetes service port of the OIRI-UI NodePort Service.

<CERT_FILE>

prov.example.com.pem

The name of the certificate file.

<K8_URL>

https://10.0.0.10:6443

The URL of the Kubernetes API. You can obtain the URL by using the following command on the admin host:

grep server: $KUBECONFIG | sed 's/server://;s/ //g'

<CERTIFICATE_FILE>

/workdir/OIRI/ca.crt

The location of the ca.crt certificate file you generated. See Generating the ca.crt Certificate.

<OIGNS>

oigns

The namespace used by OIG.

<OIG_DOMAIN_NAME>

governancedomain

The name of the OIG domain.

<OIG_LBR_HOST>

prov.example.com

The load balancer entry point for OIM.

<OIG_LBR_PORT>

443

The load balancer port for OIM.

<OIG_URL>

http://igdinternal.example.com:7777

http://governancedomain-cluster-oim-cluster.oigns.svc.cluster.local:14000

This is the URL of the OIG installation. OIRI does not need to make calls outside the organization, so it is fine to use the internal callback URL.

If your OIG installation is inside the same Kubernetes cluster as your OIRI deployment, you can use the internal service name, which looks as follows:

http://$OIG_DOMAIN_NAME-cluster-oim-cluster.$OIGNS.svc.cluster.local:14000

<ELK_HOST>

https://elasticsearch-es-http.elkns.svc:9200

The host and port of the centralized Elasticsearch deployment. This host can be inside the Kubernetes cluster or external to it. This host is used only when Elasticsearch is used.

<ELK_VER>

8.11.0

The version of Elasticsearch you want to use.

Characteristics of the OIRI Installation

This section lists the key characteristics of the OIRI installation that you are about to create. Review these characteristics to understand the purpose and context of the procedures that are used to configure OIRI.

Table 20-2 Key Characteristics of the OIRI Installation

Characteristics of OIRI More Information

Each microservice is deployed into a pod in the Kubernetes cluster.

See About the Kubernetes Deployment.

Places the OIRI components in a dedicated Kubernetes namespace.

See About the Kubernetes Deployment.

Places the DING components in a dedicated namespace, although these could be combined with the OIRI namespace if desired.

See About the Kubernetes Deployment.

Uses the Kubernetes services to interact with microservices.

See Creating the Kubernetes Services.

Uses the Kubernetes persistent volumes to hold configuration information.

See unresolvable-reference.html#GUID-CF07EE44-34D9-4F36-97BE-6B3FBB4FCEA8.

Each Kubernetes pod is built from a pre-built Oracle container image.

See Identifying and Obtaining Software Distributions for an Enterprise Deployment.

Requires Oracle Identity Governance to be installed and configured.

See Configuring Oracle Identity Governance Using WDT.

Installation can be standalone or integrated.

See unresolvable-reference.html#GUID-62F73C7C-55E9-4E1E-9D2A-AE35749DC11D.

Kubernetes Services

If you are using NodePort Services, the Kubernetes services 'oiri-nodeport' and 'oiri-ui-nodeport' are created as part of the OIRI installation. If you are using Ingress, an Ingress service will be created.

Table 20-3 Kubernetes NodePort Services

Service Name Type Service Port Mapped Port

oiri-nodeport

NodePort

30305

8005

oiri-ui-nodeport

NodePort

30306

8080
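After the OIRI installation completes, you can confirm that these services and their port mappings exist. This check assumes the services are created in the sample oirins namespace:

```shell
# List the two NodePort services; the PORT(S) column should show the
# container port mapped to the NodePort (for example, 8005:30305/TCP).
kubectl get svc oiri-nodeport oiri-ui-nodeport -n oirins
```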

If you use an Ingress-based deployment, the following Ingress service will be created as part of this deployment:

Table 20-4 Ingress Service

Service Name Host Name

oiri-ingress

igdadmin.example.com and oiri.example.com

Before You Begin

Before you begin the installation, ensure that you have completed all the required tasks listed in this topic.

Setting Up a Product Specific Work Directory

Before you begin the installation, you should have downloaded and staged the Oracle Identity Role Intelligence container images and the code repository. See Downloading Images from a Container Registry and Staging the Code Repository. You must also have deployed the Oracle WebLogic Operator as described in Installing the WebLogic Kubernetes Operator.

This section describes copying the downloaded sample deployment scripts to a temporary working directory on the configuration host for OIRI.

  1. Create a temporary working directory as the install user. The install user should have kubectl access to the Kubernetes cluster.
    mkdir -p <WORKDIR>
    For example:
    mkdir -p /workdir/OIRI
  2. Change directory to this location:
    cd /workdir/OIRI

Creating User Names and Groups in Oracle Identity Governance

Oracle Identity Role Intelligence authenticates using users and groups in Oracle Identity Governance. Before you start configuring OIRI, log in to the OIG Self Service Console using the https://prov.example.com/identity URL to create the required user names and groups.

Create the following users and groups:
  • OIRI Service Account
  • OIRI User Account
  • OIRI Group

Creating the OIRI Service User

To create the OIRI service user:
  1. Log in to the OIG Self Service Console using the https://prov.example.com/identity URL and the system administration user. For example: xelsysadm.
  2. Click Manage.
  3. Click Users.
  4. Click Create.
  5. Specify the following information in the Create User screen. The remaining fields are optional.
    • First Name: For example oirisvc.
    • Last Name: For example oirisvc.
    • Organization: Xellerate Users.
    • User Type: Full Time Employee.
    • User Login: For example oirisvc.
    • Password: Choose a password for the account.
    • Confirm Password: Repeat the password.
  6. Click Submit.
  7. From the Home screen, select Admin Roles.
    1. Search for the Administration Role OrclOIMUserViewer.
    2. Click User, and then click on the Members tab.
    3. Click Assign Users.
    4. Search for the newly created user. For example oirisvc.
    5. Select the user and select Add Selected.
    6. Click Select.
  8. Repeat Step 7 for the roles OrclOIMRoleAdministrator and OrclOIMAccessPolicyAdministrator.

Creating the OIRI User

To create the OIRI user:
  1. Log in to the OIG Self Service Console using the https://prov.example.com/identity URL and the system administration user. For example: xelsysadm.
  2. Click Manage.
  3. Click Users.
  4. Click Create.
  5. Specify the following information in the Create User screen. The remaining fields are optional.
    • First Name: For example oiri.
    • Last Name: For example oiri.
    • Organization: Xellerate Users.
    • User Type: Full Time Employee.
    • User Login: For example oiri.
    • Password: Choose a password for the account.
    • Confirm Password: Repeat the password.
  6. Click Submit.

Creating the OIRI Engineering Role

To create the OIRI engineering role:
  1. Log in to the OIG Self Service Console using the https://prov.example.com/identity URL and the system administration user. For example: xelsysadm.
  2. Click Roles and Access Policies - Roles.
  3. Click Create and specify the following information:
    • Name: OrclOIRIRoleEngineer
    • Display Name: OrclOIRIRoleEngineer
    • Role Description: OIRI Engineer Role
  4. Click Next.
  5. On the Hierarchy screen, click Next.
  6. On the Access Policy screen, click Next.
  7. On the Add Role Membership screen:
    1. Click Add Members.
    2. Select the oiri user and click Add Selected.
    3. Click Select.
    4. Click Next.
  8. On the Organizations screen, click Add Organization.
    1. Search for the Organization Top and select it.
    2. Click Add Selected.
    3. Click Select.
    4. Click Next.
  9. On the Summary screen, review the information that you have entered and click Finish.

Ensuring that OIG Compliance Mode is Enabled

For OIRI to function, ensure that the compliance functionality is enabled for OIG.

To check whether compliance is enabled:
  1. Log in to the OIG Sysadmin Console using the http://igdadmin.example.com/sysadmin URL.
  2. Click Configuration Properties.
  3. Search for the system property with the keyword: OIG.IsIdentityAuditorEnabled.
  4. Edit the property and ensure that it is set to true. If not, amend the value to true and click Save.

Creating Kubernetes Namespaces

The Kubernetes namespaces are used to store all the OIRI objects.

Use the following commands to create separate namespaces for OIRI and DING.

kubectl create namespace <OIRINS>
kubectl create namespace <DINGNS>

For example:

kubectl create namespace oirins
kubectl create namespace dingns
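If you want to confirm that both namespaces were created, you can list them. This quick check uses the sample names above:

```shell
# List the two namespaces; each should report an Active status.
kubectl get namespace oirins dingns
```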

Creating Container Registry Secrets

If you are using a container registry and want to pull the Oracle container images on demand, you must create a secret which contains the login details of the container registry.

This step is not required if you have staged the container images locally.

To create a container registry secret for OIRI, use the following command:
kubectl create secret -n <OIRINS> docker-registry <REGISTRY_SECRET_NAME> --docker-server=<REGISTRY_ADDRESS> --docker-username=<REG_USER> --docker-password=<REG_PWD>
For example:
kubectl create secret -n oirins docker-registry regcred --docker-server=iad.ocir.io/mytenancy --docker-username=mytenancy/oracleidentitycloudservice/myemail@email.com --docker-password=<password>

To create a container registry secret for OIRI DING, use the following command:

kubectl create secret -n <DINGNS> docker-registry <REGISTRY_SECRET_NAME> --docker-server=<REGISTRY_ADDRESS> --docker-username=<REG_USER> --docker-password=<REG_PWD>
For example:
kubectl create secret -n dingns docker-registry regcred --docker-server=iad.ocir.io/mytenancy --docker-username=mytenancy/oracleidentitycloudservice/myemail@email.com --docker-password=<password>
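You can verify that each secret was created in the correct namespace. Using the sample names, the TYPE column should show kubernetes.io/dockerconfigjson:

```shell
# Confirm the registry secret exists in both the OIRI and DING namespaces.
kubectl get secret regcred -n oirins
kubectl get secret regcred -n dingns
```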

Creating a Kubernetes Secret for Docker Hub Images

This secret allows Kubernetes to pull an image from hub.docker.com which contains third-party images such as helm, kubectl, and logstash commands. These commands are used by the OUD cron job to test for pods that are stuck in the 'Terminating' state, and restart them if necessary.

You should have an account on hub.docker.com. If you want to stage the images in your own repository, you can do so and modify the helm override file as appropriate.

To create a Kubernetes secret for hub.docker.com, use the following command:

$ kubectl create secret docker-registry dockercred --docker-server="https://index.docker.io/v1/" --docker-username="<DH_USER>" --docker-password="<DH_PWD>" --namespace=<OUDNS>
For example:
$ kubectl create secret docker-registry dockercred --docker-server="https://index.docker.io/v1/" --docker-username="username" --docker-password="<mypassword>" --namespace=oudns

Starting the Administration CLI

Before starting the Administration CLI, ensure that you have created the persistent volumes.

See Creating File Systems and Mount Targets.

To start the Administration CLI:

  1. Create a file called oiri-cli.yaml with the following content:
    apiVersion: v1
    kind: Pod
    metadata:
      name: oiri-cli
      namespace: <OIRINS>
      labels:
        app: oiricli
    spec:
      restartPolicy: OnFailure
      volumes:
        - name: oiripv
          nfs:
            server: <PVSERVER>
            path: <OIRI_SHARE>
        - name: dingpv
          nfs:
            server: <PVSERVER>
            path: <OIRI_DING_SHARE>
        - name: workpv
          nfs:
            server: <PVSERVER>
            path: <OIRI_WORK_SHARE>
      containers:
      - name: oiricli
        image: <OIRI_CLI_REPOSITORY>:<OIRICLI_VER>
        volumeMounts:
          - name: oiripv
            mountPath: /app/oiri
          - name: dingpv
            mountPath: /app
          - name: workpv
            mountPath: /app/k8s
        command: ["/bin/bash", "-ec", "tail -f /dev/null"]
      imagePullSecrets:
        - name: regcred
    For example:
    apiVersion: v1
    kind: Pod
    metadata:
      name: oiri-cli
      namespace: oirins
      labels:
        app: oiricli
    spec:
      restartPolicy: OnFailure
      volumes:
        - name: oiripv
          nfs:
            server: 0.0.0.0
            path: /exports/IAMPVS/oiripv
        - name: dingpv
          nfs:
            server: 0.0.0.0
            path: /exports/IAMPVS/dingpv
        - name: workpv
          nfs:
            server: 0.0.0.0
            path: /exports/IAMPVS/workpv
      containers:
      - name: oiricli
        image: iad.ocir.io/mytenancy/idm/oiri-cli:12.2.1.4.220429
        volumeMounts:
          - name: oiripv
            mountPath: /app/oiri
          - name: dingpv
            mountPath: /app
          - name: workpv
            mountPath: /app/k8s
        command: ["/bin/bash", "-ec", "tail -f /dev/null"]
      imagePullSecrets:
        - name: regcred
  2. Start the Administration CLI container using the following command:
    kubectl create -f oiri-cli.yaml
  3. Connect to the running container using the command:
    kubectl exec -n oirins -ti oiri-cli -- /bin/bash

    Note:

    When the examples say "use the following command from within the OIRI-CLI", it means that you should connect to the running container as described here, and then run the commands as specified.
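Before connecting, you can wait for the pod to reach the Ready state rather than polling manually. This sketch uses the sample oirins namespace:

```shell
# Block until the oiri-cli pod is Ready (up to two minutes),
# then display its current status.
kubectl wait --for=condition=Ready pod/oiri-cli -n oirins --timeout=120s
kubectl get pod oiri-cli -n oirins
```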

Granting the CLI Access to the Kubernetes Cluster

The OIRI CLI container has built-in commands to interact with the Kubernetes cluster. You must provide the Administration CLI with details on how to access the Kubernetes cluster.

To provide access, perform the following steps on any node which has a working kubectl command:

Creating a Kubernetes Service Secret

If you are using Kubernetes 1.24 release or later, then you need to create a secret for the Kubernetes service account using the following command:
kubectl create -f <WORKDIR>/create_svc_secret.yaml
For example:
kubectl create -f /workdir/OIRI/create_svc_secret.yaml
Here, <WORKDIR>/create_svc_secret.yaml has the following content:
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: oiri-service-account
  namespace: <OIRINS>
  annotations:
    kubernetes.io/service-account.name: "oiri-service-account" 
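On Kubernetes 1.24 and later, the token controller populates this secret shortly after it is created. You can confirm that the token and certificate keys are present before proceeding (sample namespace shown):

```shell
# The Data section should list 'token', 'ca.crt', and 'namespace' entries.
kubectl describe secret oiri-service-account -n oirins
```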

Creating a Kubernetes Service Account

Create a <WORKDIR>/create_svc.yaml file for the Kubernetes service account in the OIRI namespace, and then apply it by using the following command:
kubectl apply -f <WORKDIR>/create_svc.yaml
For example:
kubectl apply -f /workdir/OIRI/create_svc.yaml
The create_svc.yaml file has the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: oiri-service-account
  namespace: <OIRINS>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: oiri-ns-role
  namespace: <OIRINS>
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ding-ns-role
  namespace: <DINGNS>
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oiri-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:persistent-volume-provisioner
subjects:
- namespace: <OIRINS>
  kind: ServiceAccount
  name: oiri-service-account
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oiri-clusteradmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- namespace: <OIRINS>
  kind: ServiceAccount
  name: oiri-service-account
--- 
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: oiri-rolebinding
  namespace: <OIRINS>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: oiri-ns-role
subjects:
- namespace: <OIRINS>
  kind: ServiceAccount
  name: oiri-service-account
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ding-rolebinding
  namespace: <DINGNS>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ding-ns-role
subjects:
- namespace: <OIRINS>
  kind: ServiceAccount
  name: oiri-service-account
For example:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: oiri-service-account
  namespace: oirins
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: oiri-ns-role
  namespace: oirins
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ding-ns-role
  namespace: dingns
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oiri-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:persistent-volume-provisioner
subjects:
- namespace: oirins
  kind: ServiceAccount
  name: oiri-service-account
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oiri-clusteradmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- namespace: oirins
  kind: ServiceAccount
  name: oiri-service-account
--- 
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: oiri-rolebinding
  namespace: oirins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: oiri-ns-role
subjects:
- namespace: oirins
  kind: ServiceAccount
  name: oiri-service-account
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ding-rolebinding
  namespace: dingns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ding-ns-role
subjects:
- namespace: oirins
  kind: ServiceAccount
  name: oiri-service-account

Generating the ca.crt Certificate

Obtain the Kubernetes certificate using the following commands:

Set up the environment variables for the OIRI namespace, and a working directory.

OIRINS=<OIRINS>
WORKDIR=/workdir/OIRI
For the Kubernetes releases up to 1.23, use the command:
TOKENNAME=`kubectl -n $OIRINS get serviceaccount/oiri-service-account -o jsonpath='{.secrets[0].name}'`
For the Kubernetes releases 1.24 or later, use the commands:
TOKENNAME=oiri-service-account
TOKEN=`kubectl -n $OIRINS get secret $TOKENNAME -o jsonpath='{.data.token}'| base64 --decode`
kubectl -n $OIRINS get secret $TOKENNAME -o jsonpath='{.data.ca\.crt}'| base64 --decode > $WORKDIR/ca.crt
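You can optionally inspect the extracted certificate to confirm that it is a valid CA certificate before using it. This assumes the openssl command is available on the admin host:

```shell
# Display the subject and validity period of the extracted cluster CA
# certificate; an error here indicates the extraction did not succeed.
openssl x509 -in $WORKDIR/ca.crt -noout -subject -dates
```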

Creating a Kubernetes Configuration File for OIRI

Generate a Kubernetes configuration file to tell OIRI how to interact with kubectl. To do this, perform the following steps:

Set up environment variables for the OIRI namespace, and a working directory. These commands reuse the TOKENNAME variable that you set in Generating the ca.crt Certificate.

OIRINS=<OIRINS>
WORKDIR=/workdir/OIRI
TOKEN=`kubectl -n $OIRINS get secret $TOKENNAME -o jsonpath='{.data.token}'| base64 --decode`
K8URL=`grep server: $KUBECONFIG | sed 's/server://;s/ //g'`
kubectl config --kubeconfig=$WORKDIR/oiri_config set-cluster oiri-cluster --server=$K8URL --certificate-authority=$WORKDIR/ca.crt --embed-certs=true
kubectl config --kubeconfig=$WORKDIR/oiri_config set-credentials oiri-service-account --token=$TOKEN
kubectl config --kubeconfig=$WORKDIR/oiri_config set-context oiri --user=oiri-service-account --cluster=oiri-cluster
kubectl config --kubeconfig=$WORKDIR/oiri_config use-context oiri

These commands generate a file called oiri_config in the <WORKDIR> location. This file contains the Kubernetes cluster details.
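To confirm that the generated file works, you can point kubectl at it directly; if the embedded token and certificate are valid, the command lists the pods in the OIRI namespace:

```shell
# Use the new kubeconfig explicitly; this should list the running oiri-cli pod.
kubectl --kubeconfig=$WORKDIR/oiri_config get pods -n $OIRINS
```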

Copying Files to the OIRI-CLI Container

Copy the ca.crt (see Generating the ca.crt Certificate) and oiri_config (see Creating a Kubernetes Configuration File for OIRI) files to the OIRI-CLI container, using the following commands:

OIRINS=<OIRINS>
WORKDIR=/workdir/OIRI
kubectl cp $WORKDIR/ca.crt $OIRINS/oiri-cli:/app/k8s
kubectl cp $WORKDIR/oiri_config $OIRINS/oiri-cli:/app/k8s/config
From the oiri-cli container, run the following command:
chmod 400 /app/k8s/config

Validating the kubectl Command

Validate that the kubectl command works from inside the Kubernetes container by using the following command from the OIRI-CLI container.

kubectl get pod -n $OIRINS
For example:
kubectl get pod -n oirins

The command should show you the running oiri-cli pod.

Creating the Configuration Files

OIRI is deployed using a number of property files. These property files are populated using the CLI commands.

Run the following steps from inside the OIRI-CLI container.

Creating the Setup Configuration Files

To create the initial config files, run the following command:
/oiri-cli/scripts/setupConfFiles.sh -m prod \
             --oigdbhost <OIG_DB_SCAN> \
             --oigdbport <OIG_DB_LISTENER> \
             --oigdbsname <OIG_DB_SERVICE> \
             --oiridbhost <OIRI_DB_SCAN> \
             --oiridbport <OIRI_DB_LISTENER> \
             --oiridbsname <OIRI_DB_SERVICE> \
             --sparkmode k8s \
             --dingnamespace <DINGNS> \
             --dingimage <OIRI_DING_REPOSITORY>:<OIRIDING_VER> \
             --cookiesecureflag false \
             --k8scertificatefilename <CERTIFICATE_FILE> \
             --sparkk8smasterurl k8s://<K8_URL> \
             --oigserverurl <OIG_URL>
For example:
/oiri-cli/scripts/setupConfFiles.sh -m prod \
             --oigdbhost db-scan.example.com \
             --oigdbport 1521 \
             --oigdbsname oig_s.example.com \
             --oiridbhost db-scan.example.com \
             --oiridbport 1521 \
             --oiridbsname oiri_s.example.com \
             --sparkmode k8s \
             --dingnamespace dingns \
             --dingimage oiri-ding:12.2.1.4.02106 \
             --cookiesecureflag false \
             --k8scertificatefilename ca.crt \
             --sparkk8smasterurl k8s://https://10.0.0.10:6443 \
             --oigserverurl http://governancedomain-cluster-oim-cluster.oigns.svc.cluster.local:14000
The output appears as follows:
INFO: OIG DB as source for ETL is true
INFO: Setting up /app/data/conf/config.yaml
INFO: Setting up /app/data/conf/data-ingestion-config.yaml
INFO: Setting up /app/data/conf/custom-attributes.yaml
INFO: Setting up /app/oiri/data/conf/application.yaml
INFO: Setting up /app/oiri/data/conf/authenticationConf.yaml
INFO: Setting up /app/data/conf/dbconfig.yaml
Verify that the files have been created correctly by using the following command in the OIRI-CLI container:
ls /app/data/conf
You should see the following:
config.yaml  custom-attributes.yaml   data-ingestion-config.yaml   dbconfig.yaml

Creating the Helm Configuration File

OIRI is deployed using helm. To create a configuration file for helm, run the following command inside the OIRI-CLI container.

/oiri-cli/scripts/setupValuesYaml.sh  \
              --oiriapiimage <OIRI_REPOSITORY>:<OIRI_VER> \
              --oirinamespace <OIRINS> \
              --oirinfsserver  <PVSERVER> \
              --oirireplicas <OIRI_REPLICAS> \
              --oiriuireplicas <OIRI_UI_REPLICAS> \
              --sparkhistoryserverreplicas <OIRI_DING_REPLICAS> \
              --oirinfsstoragepath <OIRI_NFS_SHARE> \
              --oirinfsstoragecapacity <OIRI_SHARE_SIZE> \
              --oiriuiimage <OIRI_UI_REPOSITORY>:<OIRIUI_VER> \
              --dingimage <OIRI_DING_REPOSITORY>:<OIRIDING_VER> \
              --dingnamespace <DINGNS> \
              --dingnfsserver <PVSERVER> \
              --dingnfsstoragepath <OIRI_DING_SHARE> \
              --dingnfsstoragecapacity <OIRI_DING_SHARE_SIZE> \
              --ingressenabled <USE_INGRESS> \
              --ingresshostname <OIRI_INGRESS_HOST> \
              --sslenabled false
For example:
/oiri-cli/scripts/setupValuesYaml.sh  \
              --oiriapiimage oiri:12.2.1.4.02106 \
              --oirinamespace oirins \
              --oirinfsserver  1.1.1.1 \
              --oirireplicas 2 \
              --oiriuireplicas 2 \
              --sparkhistoryserverreplicas 2 \
              --oirinfsstoragepath /exports/iampvs/oiripv \
              --oirinfsstoragecapacity 10Gi \
              --oiriuiimage oiri-ui:12.2.1.4.02106 \
              --dingimage oiri-ding:12.2.1.4.02106 \
              --dingnamespace dingns \
              --dingnfsserver 1.1.1.1 \
              --dingnfsstoragepath /exports/iampvs/dingpv \
              --dingnfsstoragecapacity 10Gi \
              --ingressenabled false \
              --ingresshostname igdadmin.example.com \
              --sslenabled false

Verify that the /app/k8s/values.yaml file is created.

Creating the OIRI Keystore

Create a keystore in OIRI by using the keytool command. Run this command from the OIRI-CLI container.

Use the following command to create the keystore:
keytool -genkeypair -alias oiri -keypass <OIRI_KEYSTORE_PWD> -keyalg RSA \
             -keystore /app/oiri/data/keystore/keystore.jks \
             -storepass <OIRI_KEYSTORE_PWD> -storetype pkcs12 \
             -dname \"CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown\" \
             -noprompt

Loading the OIG Certificates into OIRI

For OIRI to trust OIG, you should load the OIG certificate into OIRI.

Complete the following steps to load the certificates:

Obtaining the OIG REST Certificate

To obtain the OIG REST certificate (this certificate is different from the regular OIG certificate), run the commands from inside the OIG Administration Server container.

To obtain the OIG REST certificate:

  1. Log in to the OIG Administration Container using the command:
    kubectl exec -n <OIGNS> -ti <OIG_DOMAIN_NAME>-adminserver -- /bin/bash
    For example:
    kubectl exec -n oigns -ti governancedomain-adminserver -- /bin/bash
  2. Obtain the certificate using the command:
    keytool -export -rfc -alias xell \
                   -file /u01/user_projects/workdir/xell.pem \
                   -keystore /u01/user_projects/domains/$OIG_DOMAIN_NAME/config/fmwconfig/default-keystore.jks \
                   -storepass <OIG_WEBLOGIC_PWD>
    For example:
    keytool -export -rfc -alias xell \
                   -file /u01/user_projects/workdir/xell.pem \
                   -keystore /u01/user_projects/domains/governancedomain/config/fmwconfig/default-keystore.jks \
                   -storepass <password>
  3. Copy the certificate to OIRI by using the following command. This command should be run from the administration node:
    kubectl cp <OIGNS>/<OIG_DOMAIN_NAME>-adminserver:/u01/oracle/user_projects/workdir/xell.pem <WORKDIR>/xell.pem
    kubectl cp <WORKDIR>/xell.pem <OIRINS>/oiri-cli:/app/k8s/xell.pem
    For example:
    kubectl cp oigns/governancedomain-adminserver:/u01/oracle/user_projects/workdir/xell.pem /workdir/OIRI/xell.pem
    kubectl cp /workdir/OIRI/xell.pem oirins/oiri-cli:/app/k8s/xell.pem
OIG Running Inside Kubernetes

If OIG is running inside Kubernetes, use the steps described above; the commands are run inside the OIG Administration Server container.

OIG Not Running Inside Kubernetes
  1. Log in to your OIG host and change the directory to DOMAIN_HOME/config/fmwconfig/.

    For example:

    cd /u01/oracle/config/domains/governancedomain/config/fmwconfig/

  2. Use the following command to obtain the certificate:
    
    keytool -export -rfc -alias xell \
                   -file /tmp/xell.pem \
                   -keystore <DOMAIN_HOME>/config/fmwconfig/default-keystore.jks \
                   -storepass <OIG_WEBLOGIC_PWD>

    For example:

    
    keytool -export -rfc -alias xell \
                   -file /tmp/xell.pem \
                   -keystore /u01/oracle/config/domains/governancedomain/config/fmwconfig/default-keystore.jks \
                   -storepass password
  3. Transfer the /tmp/xell.pem file to a host that has access to your Kubernetes cluster.

    Note:

    The recommended practice is to place the file inside your <WORKDIR>.
  4. Use the following command to copy the certificate to OIRI. This command should be run from the administration node:
    kubectl cp <WORKDIR>/xell.pem <OIRINS>/oiri-cli:/app/k8s/xell.pem
    For example:
    kubectl cp /workdir/OIRI/xell.pem oirins/oiri-cli:/app/k8s/xell.pem

Importing the OIG REST Certificate into OIRI

After you have a copy of the OIG REST certificate in the OIRI container, import the certificate by using the following command from the OIRI-CLI container:
keytool -import \
               -alias xell \
               -file /app/k8s/xell.pem \
               -keystore /app/oiri/data/keystore/keystore.jks \
               -storepass  <OIRI_KEYSTORE_PWD> -noprompt

Obtaining the OIG SSL Certificate

In addition to the OIG REST certificate, you should also trust the OIG SSL certificate. This certificate is assigned to the load balancer. The simplest way to obtain this certificate is to run the following command from a host that has access to the OIG load balancer:

openssl s_client -connect <OIG_LBR_HOST>:<OIG_LBR_PORT> -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM > <OIG_LBR_HOST>.pem
For example:
openssl s_client -connect prov.example.com:443 -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM > prov.example.com.pem
Copy the certificate to the OIRI-CLI container using the command:
kubectl cp /workdir/OIRI/prov.example.com.pem oirins/oiri-cli:/app/k8s/prov.example.com.pem
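As a quick sanity check before copying, openssl can print the subject and expiry of any saved PEM certificate. The sketch below is a suggestion rather than part of the documented procedure: it generates a throwaway self-signed certificate purely so the check has something to inspect; in practice you would point `openssl x509` at the `prov.example.com.pem` file captured from the load balancer.

```shell
# Demonstration only: create a throwaway self-signed certificate so the
# check below has an input file. In a real deployment, skip this step and
# inspect the PEM file captured from the load balancer instead.
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo-key.pem \
        -out /tmp/demo-cert.pem -days 1 -nodes \
        -subj "/CN=prov.example.com" 2>/dev/null

# The actual check: print the certificate subject and expiry date.
openssl x509 -in /tmp/demo-cert.pem -noout -subject -enddate
```

If the subject does not match your load balancer host name, repeat the openssl s_client step against the correct host and port.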

Importing the OIG SSL Certificate into OIRI

After obtaining a copy of the OIG certificate in the OIRI container, you should import the certificate using the following command from the OIRI-CLI container:
keytool -import \
               -alias oigssl \
               -file /app/k8s/<CERT_FILE> \
               -keystore /app/oiri/data/keystore/keystore.jks \
               -storepass  <OIRI_KEYSTORE_PWD> -noprompt
For example:
keytool -import \
               -alias oigssl \
               -file /app/k8s/prov.example.com.pem \
               -keystore /app/oiri/data/keystore/keystore.jks \
               -storepass <password> -noprompt

Creating Wallets

OIRI stores the database and OIG connection information in a wallet. Create the wallet by running the command from inside the OIRI-CLI pod.

Use the following command to create the wallet:
oiri-cli --config=/app/data/conf/config.yaml wallet create \
             --oigsau <OIRI_SERVICE_USER> \
             --oigsap <OIRI_SERVICE_PWD> \
             --oirijka oiri \
             --oirijkp <OIRI_KEYSTORE_PWD> \
             --oiriksp <OIRI_KEYSTORE_PWD> \
             --oiridbuprefix <OIRI_RCU_PREFIX> \
             --oiridbp <OIRI_SCHEMA_PWD> \
             --oigdbu <OIG_RCU_PREFIX>_OIM \
             --oigdbp <OIG_SCHEMA_PWD>
For example:
oiri-cli --config=/app/data/conf/config.yaml wallet create \
             --oigsau oirisvc \
             --oigsap myservicepwd \
             --oirijka oiri \
             --oirijkp mykeystorepwd \
             --oiriksp mykeystorepwd \
             --oiridbuprefix oiri \
             --oiridbp myschemapwd \
             --oigdbu IGD_OIM \
             --oigdbp myoigschemapwd
Verify that the wallets have been created, using the command:
ls /app/data/wallet /app/oiri/data/wallet

Creating the Database Schemas

Create the OIRI database schemas in the database by running the commands from the OIRI-CLI container.

Use the following commands:
oiri-cli --config=/app/data/conf/config.yaml schema create /app/data/conf/dbconfig.yaml --sysp <OIRI_DB_SYS_PWD>
oiri-cli --config=/app/data/conf/config.yaml schema migrate /app/data/conf/dbconfig.yaml
The output will appear as follows:
Creating the schema oiri_oiri
CREATING OIRI SCHEMA ............
===================================================
DB USER oiri_oiri has been successfully created
Migrating the OIRI schema
Migrating OIRI SCHEMA ............
===================================================
log4j:WARN No appenders could be found for logger (org.flywaydb.core.internal.scanner.classpath.ClassPathScanner).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
========== Before Migrate =============
 Script:V1__RoleMining.sql Installed On:null State:PENDING Version:1 Description:RoleMining
========== After Migrate =============
 Script:V1__RoleMining.sql Installed On:2021-03-02 08:01:54.18592 State:SUCCESS Version:1 Description:RoleMining
OIRI Schema has been successfully migrated

Verifying the Wallet

After creating the wallet, you should validate the wallet. If the validation fails, correct the wallet before you proceed.

Verify the wallet using the following command:
./verifyWallet.sh
The output will appear as follows:
Verifying Wallets. Wallet locations and entries will be validated
DING Wallet is Valid.
OIRI Wallet is Valid.
OIRI DB Connection is Valid.
OIG DB Connection is Valid.
KeyStore location and entries are Valid.
OIG Server Connection is Valid.
SUCCESS: Wallet locations and entries are valid.
If any validation fails, correct the wallet using the command:
oiri-cli --config=/app/data/conf/config.yaml wallet update

Deploying OIRI Using Helm

After creating the namespaces, you can deploy OIRI by using the generated Helm chart. Run the command from the OIRI-CLI container.

Use the following command:
helm install oiri /helm/oiri -f /app/k8s/values.yaml
The output will appear as follows:
NAME: oiri
LAST DEPLOYED: Mon Jan 11 15:14:22 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please be patient while the chart installs. Pod may not be in running status.
 
To check the status of the pod, run following command.
Pods READY state must be 1/1 and status RUNNING
 
    kubectl get pods --namespace oiri
 
    kubectl get pods --namespace ding
 
Access OIRI Service by using following URL in your browser.
 
    https://IP_ADDRESS:PORT/
 
Access OIRI UI by using following URL in your browser.
 
    https://IP_ADDRESS:PORT/oiri/ui/v1/console
 
Admins can access DING History Server by port forwarding the ding-history pod through kubectl.
 
  kubectl port-forward <pod_name> <desired_port>:18080 -n ding
 
Inside the DING-CLI, use following commands to start data ingestion
 
    ding-cli --config=/app/data/conf/config.yaml data-ingestion start /app/data/conf/data-ingestion-config.yaml

Verifying that OIRI is Running

After you deploy OIRI, you should verify that it is running successfully.

To verify that OIRI is running, use the command:
kubectl -n <OIRINS> get pods -o wide
kubectl -n <DINGNS> get pods -o wide

You should see a list of pods with the status 'Running'. For example:

NAME                    READY     STATUS    RESTARTS   AGE
oiri-6cd5755fb-j7r4      1/1      Running    0         42h
oiri-6cd5755fb-s42xx     1/1      Running    0         42h
oiri-ui-d55cd6b69-62nm6  1/1      Running    0         42h
oiri-ui-d55cd6b69-rxdwm  1/1      Running    0         42h
kubectl -n dingns get pods
NAME                                   READY   STATUS    RESTARTS AGE
oiri-ding-7045127aee424d93-driver      0/1     Completed  0       41h
spark-history-server-6cc6d9d8c7-2594r  1/1     Running    0       42h
spark-history-server-6cc6d9d8c7-jdcqc  1/1     Running    0       42h

Creating the Kubernetes NodePort Services

By default, OIRI is created with all the components configured as ClusterIP services. This means that the Oracle Identity Role Intelligence components are visible only within the Kubernetes cluster.

In an enterprise deployment, all interactions with the OIRI components take place through the Oracle HTTP Server, which is located outside of the Kubernetes cluster. If you use OHS and an Ingress controller, the Ingress services are created for you. If you are using a NodePort deployment, you must create the NodePort services to access the deployment.

Creating an OIRI NodePort Service

To create an OIRI NodePort Service:
  1. Create the oiri_nodeport.yaml text file with the following content:
    kind: Service
    apiVersion: v1
    metadata:
      name: oiri-nodeport
      namespace: <OIRINS>
    spec:
      type: NodePort
      selector:
        app: oiri
      ports:
        - targetPort: 8005
          port: 8005
          nodePort: <OIRI_K8>
          protocol: TCP

    Note:

    Ensure that the namespace is set to the namespace you want to use.
    For example:
    kind: Service
    apiVersion: v1
    metadata:
      name: oiri-nodeport
      namespace: oirins
    spec:
      type: NodePort
      selector:
        app: oiri
      ports:
        - targetPort: 8005
          port: 8005
          nodePort: 30305
          protocol: TCP
  2. Create the service using the following command:
    kubectl create -f oiri_nodeport.yaml

Creating an OIRI UI NodePort Service

To create an OIRI UI NodePort Service:
  1. Create the oiriui_nodeport.yaml text file with the following content:
    kind: Service
    apiVersion: v1
    metadata:
      name: oiri-ui-nodeport
      namespace: <OIRINS>
    spec:
      type: NodePort
      selector:
        app: oiri-ui
      ports:
        - targetPort: 8080
          port: 8080
          nodePort: <OIRI_UI_K8>
          protocol: TCP

    Note:

    Ensure that the namespace is set to the namespace you want to use.
    For example:
    kind: Service
    apiVersion: v1
    metadata:
      name: oiri-ui-nodeport
      namespace: oirins
    spec:
      type: NodePort
      selector:
        app: oiri-ui
      ports:
        - targetPort: 8080
          port: 8080
          nodePort: 30306
          protocol: TCP
  2. Create the service using the following command:
    kubectl create -f oiriui_nodeport.yaml

Updating the OHS Configuration

If you have not already done so, add the OIRI entries to the Oracle HTTP Server configuration.

See Configuring Oracle HTTP Server for Oracle Identity Role Intelligence.

Performing an Initial Data Load Using the Data Ingester

After OIRI is up and running, you may want to perform an initial data load from the OIG database.

Complete the following steps for a data load:

Starting the DING CLI

To start the DING CLI, perform the steps on a node that has access to the kubectl command:
  1. Create the ding-cli.yaml file with the following content:
    apiVersion: v1
    kind: Pod
    metadata:
      name: oiri-ding-cli
      namespace: <DINGNS>
      labels:
        app: dingcli
    spec:
      serviceAccount: ding-sa
      restartPolicy: OnFailure
      volumes:
        - name: oiripv
          nfs:
            server: <PVSERVER>
            path: <OIRI_SHARE>
        - name: dingpv
          nfs:
            server: <PVSERVER>
            path: <OIRI_DING_SHARE>
        - name: workpv
          nfs:
            server: <PVSERVER>
            path: <OIRI_WORK_SHARE>
      containers:
      - name: oiricli
        image: <OIRI_DING_REPOSITORY>:<OIRIDING_VER>
        volumeMounts:
          - name: oiripv
            mountPath: /app/oiri
          - name: dingpv
            mountPath: /app
          - name: workpv
            mountPath: /app/k8s
        command: ["/bin/bash", "-ec", "tail -f /dev/null"]
      imagePullSecrets:
        - name: regcred
    For example:
    apiVersion: v1
    kind: Pod
    metadata:
      name: oiri-ding-cli
      namespace: dingns
      labels:
        app: dingcli
    spec:
      serviceAccount: ding-sa
      restartPolicy: OnFailure
      volumes:
        - name: oiripv
          nfs:
            server: 0.0.0.0
            path: /exports/IAMPVS/oiripv
        - name: dingpv
          nfs:
            server: 0.0.0.0
            path: /exports/IAMPVS/dingpv
        - name: workpv
          nfs:
            server: 0.0.0.0
            path: /exports/IAMPVS/workpv
      containers:
      - name: oiricli
        image: iad.ocir.io/mytenancy/idm/oiri-ding:12.2.1.4.02106
        volumeMounts:
          - name: oiripv
            mountPath: /app/oiri
          - name: dingpv
            mountPath: /app
          - name: workpv
            mountPath: /app/k8s
        command: ["/bin/bash", "-ec", "tail -f /dev/null"]
      imagePullSecrets:
        - name: regcred
  2. Start the DING Administration CLI using the following command:
    kubectl create -f ding-cli.yaml
  3. Connect to the running container using the command:
    kubectl exec -n dingns -ti oiri-ding-cli -- /bin/bash

    Note:

    When the examples say "issue the following command from within the OIRI-CLI", it means that you should connect to the running container as described here, and then run the commands as specified.

Copying the Kubernetes Certificate to DING

To enable interaction between DING and Kubernetes, you should copy the Kubernetes certificate to the DING container.

To copy the certificate:

  1. Obtain the Kubernetes certificate by running the following command on the deployment server. This command copies the certificate to the locally mounted DING persistent volume:
    grep certificate-authority-data $KUBECONFIG | tr -d " " | sed 's/certificate-authority-data://' | base64 -d > /workdir/OIRI/ca.crt
  2. Verify that the resulting file has a valid certificate that looks as follows:
    -----BEGIN CERTIFICATE-----
    RANDOM CHARACTERS
    RANDOM CHARACTERS
    RANDOM CHARACTERS
    -----END CERTIFICATE----- 
  3. Copy the certificate to the DING container using the command:
    kubectl cp <WORKDIR>/ca.crt <DINGNS>/oiri-ding-cli:/app/ca.crt
    For example:
    kubectl cp /workdir/OIRI/ca.crt dingns/oiri-ding-cli:/app/ca.crt
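The step 1 extraction pipeline can be illustrated with a minimal, hypothetical kubeconfig fragment. The base64 value below encodes only the first line of a PEM certificate, just to show what the decode produces; a real kubeconfig carries the full certificate.

```shell
# Build a tiny stand-in kubeconfig (hypothetical content, demonstration only).
cat > /tmp/demo-kubeconfig <<'EOF'
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://example.com:6443
EOF

# Same pipeline as step 1: find the line, strip spaces, drop the key name,
# and base64-decode the remaining value to recover the PEM text.
KUBECONFIG=/tmp/demo-kubeconfig
grep certificate-authority-data $KUBECONFIG | tr -d " " | \
    sed 's/certificate-authority-data://' | base64 -d
# → -----BEGIN CERTIFICATE-----
```

Against a real kubeconfig, the same pipeline emits the complete certificate block, which step 2 asks you to verify.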

Verifying the DING Configuration

DING is configured as part of the setup to point to the OIG database. Validate that DING can connect to the database successfully by using the following command from the DING-CLI:
ding-cli --config=/app/data/conf/config.yaml data-ingestion verify /app/data/conf/data-ingestion-config.yaml
You should see the following message:
SUCCESS: Data Ingestion Config is valid

Running the Data Ingestion

To start the Data Ingestion, use the following command from the DING-CLI:
ding-cli --config=/app/data/conf/config.yaml data-ingestion start /app/data/conf/data-ingestion-config.yaml
You should see an entry that looks as follows:
INFO: 21/07/28 18:02:43 INFO LoggingPodStatusWatcherImpl: Application status for spark-901c5ed4ba4e4233b4501b8e1279a9cf (phase: Succeeded)

This entry indicates that the data load is successful.

Setting the Next Data Load to Incremental

The initial data load using the Data Ingester loads all the OIG data into the OIRI database. See Performing an Initial Data Load Using the Data Ingester. Subsequent loads should pick up only the changed data. To switch future loads to incremental mode, run the following command from the DING-CLI:
/ding-cli/scripts/updateDataIngestionConfig.sh \
          --entityusersenabled true --entityuserssyncmode incremental \
          --entityapplicationsenabled true --entityapplicationssyncmode incremental \
          --entityentitlementsenabled true --entityentitlementssyncmode incremental \
          --entityassignedentitlementsenabled true --entityassignedentitlementssyncmode incremental \
          --entityrolesenabled true --entityrolessyncmode incremental \
          --entityrolehierarchyenabled true --entityrolehierarchysyncmode incremental \
          --entityroleusermembershipsenabled true --entityroleusermembershipssyncmode incremental \
          --entityroleentitlementcompositionsenabled true --entityroleentitlementcompositionssyncmode incremental \
          --entityaccountsenabled true --entityaccountssyncmode