15 Installing and Configuring Oracle Unified Directory Services Manager

Oracle Unified Directory Services Manager (OUDSM) is a Graphical User Interface (GUI) tool that is used to manage Oracle Unified Directory. It is not mandatory to install OUDSM in production environments. However, OUDSM makes managing Oracle Unified Directory easier.

Oracle recommends that if you are installing OUDSM, you install it into a different Kubernetes namespace from OUD. OUDSM contains no data, and separating it from OUD simplifies disaster recovery for OUD. You can access OUDSM directly through:
  • NodePort Services
  • Ingress Controller
  • Oracle HTTP Server if added to a virtual host such as iadadmin.example.com

This chapter includes the following topics:
  • Configuring Oracle Unified Directory Services Manager
  • Setting Up a Product Specific Work Directory
  • Creating a Kubernetes Namespace
  • Creating a Container Registry Secret
  • Creating OUDSM Containers
  • Creating External Access to OUDSM
  • Configuring Oracle HTTP Server for Oracle Unified Directory Services Manager
  • Centralized Log File Monitoring Using Elasticsearch and Kibana

Configuring Oracle Unified Directory Services Manager

Oracle Unified Directory Services Manager (OUDSM) is a tool that is used to manage Oracle Unified Directory. It is optional.

This chapter describes how you can install OUDSM inside a Kubernetes cluster.

Kubernetes/Ingress Services

After you configure OUDSM, the following OUDSM service will be available on each worker node:

Table 15-1 OUDSM Service on Each Worker Node

Service   Type       Service Port   Mapped Port
OUDSM     NodePort   30901          7001

Note:

Kubernetes randomly assigns the service port (NodePort) for OUDSM. The number given in this table is only an example.
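
You can check the port that was actually assigned by listing the services in the OUDSM namespace (this example assumes the oudsmns namespace used later in this chapter):
kubectl -n oudsmns get service
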
If you use an Ingress-based deployment, the following Ingress service is created as part of the deployment:

Table 15-2 Ingress Services

Service Name    Host Name
oudsm-ingress   oudsm.example.com

Variables Used in this Chapter

The later sections of this chapter provide instructions to create a number of files. These sample files contain variables that you need to substitute with values applicable to your deployment.

Variables are formatted as <VARIABLE_NAME>. The following table provides the values you should set for each of these variables.

Table 15-3 The Variables to be Changed

<WORKDIR>
  Sample value: /workdir/OUDSM
  Description: The location where you want to create the working directory for OUDSM.

<REGISTRY_ADDRESS>
  Sample value: iad.ocir.io/<mytenancy>
  Description: The location of the registry. If you use the Oracle container registry, the value will be container-registry.oracle.com.

<REG_USER>
  Sample value: mytenancy/oracleidentitycloudservice/myemail@email.com
  Description: The user ID you use to log in to the registry. If you use the Oracle container registry, this value will be your Oracle single sign-on user name.

<REG_PWD>
  Sample value: <password>
  Description: The registry user password.

<OUDSM_REPOSITORY>
  Sample values: oracle/oudsm, local/oracle/oudsm, container-registry.oracle.com/middleware/oudsm_cpu, <REGISTRY_ADDRESS>/oracle/oudsm
  Description: The name of the OUDSM software repository. If you have downloaded and staged a container image, this value will be oracle/oudsm. If you are using OLCNE, the value will be local/oracle/oudsm. If you are using the Oracle container registry, the value will be container-registry.oracle.com/middleware/oudsm_cpu. If you are using another container registry, the value will be the name of the registry with the product name: <REGISTRY_ADDRESS>/oracle/oudsm.

<OUDSM_VER>
  Sample value: 12.2.1.4-jdk8-ol7-220411.1608 or latest
  Description: The version of the image you want to use. This will be the version you have downloaded and staged, either locally or in your container registry.

<PVSERVER>
  Sample value: mynfsserver.example.com
  Description: The name of the NFS server. Note: This name should be resolvable inside the Kubernetes cluster.

<OUDSM_WEBLOGIC_USER>
  Sample value: weblogic
  Description: The name of the user who administers the OUDSM domain.

<OUDSM_WEBLOGIC_PWD>
  Sample value: -
  Description: The password assigned to the <OUDSM_WEBLOGIC_USER> account.

<OUDSM_SERVICE_PORT>
  Sample value: 30901
  Description: The port to use for OUDSM requests. Note: This value must be within the Kubernetes NodePort service range (by default, 30000 to 32767).

<OUDSM_SHARE>
  Sample value: /exports/IAMPVS/oudsmpv
  Description: The NFS mount point for the share.

<OUDSM_INGRESS_HOST>
  Sample value: oudsm.example.com
  Description: If you are using a dedicated host name for OUDSM, set this value to the name of that host. For example: oudsm.example.com. If you are making OUDSM accessible through an existing virtual host, set this value to that host name. For example: iadadmin.example.com.

<ELK_HOST>
  Sample value: https://elasticsearch-es-http.elkns.svc:9200
  Description: The host and port of the centralized Elasticsearch deployment. This host can be inside the Kubernetes cluster or external to it. This value is used only when Elasticsearch is enabled.

<ELK_VER>
  Sample value: 8.11.0
  Description: The version of Elasticsearch you want to use.

<ELK_USER_PWD>
  Sample value: <password>
  Description: The password assigned to the ELK user. See Creating a Role and a User for Logstash.

Setting Up a Product Specific Work Directory

Before you begin the installation, you must have already downloaded and staged the Oracle Unified Directory Services Manager container image and the code repository. See Identifying and Obtaining Software Distributions for an Enterprise Deployment.

This section describes the procedure to copy the downloaded sample deployment scripts to a temporary working directory for OUDSM.

  1. Create the working directory.
    mkdir <WORKDIR>
    For example:
    mkdir /workdir/OUDSM
  2. Change the directory to this location:
    cd /workdir/OUDSM

    Note:

    The same set of sample files is used by several products in this guide. To avoid having to download them each time, the files are staged in a non-product specific working directory.
  3. Copy the sample scripts to your work directory.
    cp -R <WORKDIR>/fmw-kubernetes/OracleUnifiedDirectorySM <WORKDIR>/samples
    For example:
    cp -R /workdir/OUDSM/fmw-kubernetes/OracleUnifiedDirectorySM /workdir/OUDSM/samples
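
    To confirm that the scripts are staged where you expect, you can list the directory:
    ls /workdir/OUDSM/samples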

Creating a Kubernetes Namespace

You have to create a namespace to contain all the objects for Oracle Unified Directory Services Manager.

To create a namespace, run the following command:
kubectl create namespace oudsmns
The output appears as follows:
namespace/oudsmns created
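
If you want to confirm that the namespace now exists, you can list it:
kubectl get namespace oudsmns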

Creating a Container Registry Secret

Oracle recommends that you use a container registry. If you use a container registry and want to pull the Oracle container images on demand, you must create a secret which contains the login details of the container registry.

If you have staged your container images locally, there is no need to perform this step.

To create a container registry secret, use the following command:
kubectl create secret -n <OUDSMNS> docker-registry regcred --docker-server=<REGISTRY_ADDRESS> --docker-username=<REG_USER> --docker-password=<REG_PWD>
For example:
kubectl create secret -n oudsmns docker-registry regcred --docker-server=iad.ocir.io/mytenancy --docker-username=mytenancy/oracleidentitycloudservice/myemail@email.com --docker-password=<password>
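
You can verify that the secret has been created by using the following command:
kubectl get secret regcred -n oudsmns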

Creating OUDSM Containers

Before you create the OUDSM containers, you should create the Helm override file. This section describes the commands you need to use to perform these tasks.

Creating the Helm Overrides File

The OUDSM containers are deployed using Helm. Create a Helm override file to specify how you want to deploy OUDSM.

The following is a sample override file called /workdir/OUDSM/override_oudsm.yaml for OUDSM:

image:
  repository: <OUDSM_REPOSITORY>
  tag: <OUDSM_VER>
  pullPolicy: IfNotPresent

imagePullSecrets:
  - name: regcred
oudsm:
  adminUser: <OUDSM_WEBLOGIC_USER>
  adminPass: <OUDSM_WEBLOGIC_PWD>
  startupTime: 200
persistence:
  type: networkstorage
  networkstorage:
    nfs:
      server: <PVSERVER>
      path: <OUDSM_SHARE>
  size: 5Gi
replicaCount: 1

elk:
  enabled: false
ingress:
  enabled: false
  type: nginx
  tlsEnabled: true
For example:
image:
  repository: iad.ocir.io/mytenancy/oudsm
  tag: 12.2.1.4-jdk8-ol7-220119.2054
  pullPolicy: IfNotPresent

imagePullSecrets:
  - name: regcred
oudsm:
  adminUser: weblogic
  adminPass: password
  startupTime: 200
persistence:
  type: networkstorage
  networkstorage:
    nfs:
      server: mynfsserver.example.com
      path: /exports/IAMPVS/oudsmpv
  size: 5Gi
replicaCount: 1

elk:
  enabled: false
ingress:
  enabled: false
  type: nginx
  tlsEnabled: true
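
Before you deploy, you can optionally ask Helm to render the chart without installing anything, which catches most mistakes in the overrides file. For example (run from the chart directory used in the next section):
cd /workdir/OUDSM/samples/kubernetes/helm
helm install --dry-run --namespace oudsmns --values /workdir/OUDSM/override_oudsm.yaml oudsm oudsm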

Creating the Containers

After creating the overrides file, you can create the containers by using the following commands:
cd /workdir/OUDSM/samples/kubernetes/helm
helm install --namespace <OUDSMNS> --values /workdir/OUDSM/override_oudsm.yaml oudsm oudsm
The output appears as follows:
NAME: oudsm
LAST DEPLOYED: Thu Jan 28 08:08:40 2021
NAMESPACE: oudsmns
STATUS: deployed
REVISION: 1
TEST SUITE: None
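
You can check the status of the release at any time by using the following command:
helm status oudsm -n oudsmns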

Troubleshooting the OUDSM Instances

You can monitor the creation of each OUDSM instance using the following commands:

Objects created in the namespace:
kubectl -n oudsmns get all -o wide

The installation and configuration are complete only when each container shows READY 1/1 and STATUS Running.

If you do not see objects being created, use the following command to check the issue:
kubectl get pod -n oudsmns
For a detailed description, use:
kubectl describe pod -n oudsmns

Container Logs

To view the progress of each container as it is being created, use the following command:
kubectl logs oudsm-1 -n oudsmns
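
To stream the log output as the container starts, add the -f flag:
kubectl logs -f oudsm-1 -n oudsmns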

Creating External Access to OUDSM

By default, the Oracle Unified Directory Services Manager deployment is created with all of its components configured as ClusterIP services. This means that OUDSM is visible only within the Kubernetes cluster.

To gain access to OUDSM, you should expose it outside of the Kubernetes cluster. You can do this in one of two ways:
  • By using an Ingress controller
  • By using a Kubernetes NodePort Service

Creating the Kubernetes OUDSM NodePort Service

You have to create an OUDSM NodePort Service to connect to OUDSM from outside the Kubernetes cluster.

  1. Create a text file called /workdir/OUDSM/oudsm_nodeport.yaml with the following content:
    kind: Service
    apiVersion: v1
    metadata:
      name: oudsm-nodeport
      namespace: <OUDSMNS>
    spec:
      type: NodePort
      selector:
        app.kubernetes.io/instance: oudsm
        app.kubernetes.io/name: oudsm
      ports:
        - targetPort: 7001
          port: 7001
          nodePort: <OUDSM_SERVICE_PORT>
          protocol: TCP
    
    For example:
    kind: Service
    apiVersion: v1
    metadata:
      name: oudsm-nodeport
      namespace: oudsmns
    spec:
      type: NodePort
      selector:
        app.kubernetes.io/instance: oudsm
        app.kubernetes.io/name: oudsm
      ports:
        - targetPort: 7001
          port: 7001
          nodePort: 30901
          protocol: TCP

    Note:

    Ensure that the namespace is set to the namespace you want to use.
  2. Create the service using the following command:
    kubectl create -f /workdir/OUDSM/oudsm_nodeport.yaml
    The output appears as follows:
    service/oudsm-nodeport created
  3. Validate that you can access OUDSM by using the http://k8worker1.example.com:30901/oudsm URL.
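
    You can also test connectivity from the command line; for example, with curl (this assumes k8worker1.example.com is one of your worker nodes):
    curl -I http://k8worker1.example.com:30901/oudsm
    A successful response (an HTTP status code such as 200, or a redirect to the OUDSM login page) indicates that the service is reachable.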

Creating an Ingress Service for Oracle Unified Directory Services Manager

To access OUDSM through Ingress, you should create an Ingress service. Create the Ingress service inside the product namespace. The Ingress service tells the Ingress controller how to direct requests inside the namespace.

To create an Ingress service:

  1. Create a file called oudsm_ingress.yaml in the working directory, with the following contents:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: oudsm-ingress
      namespace: <OUDSMNS>
      annotations:
        nginx.ingress.kubernetes.io/affinity: "cookie"
        nginx.ingress.kubernetes.io/proxy-buffer-size: "2000k"
        nginx.ingress.kubernetes.io/enable-access-log: "false"
    spec:
      ingressClassName: nginx
      rules:
      - host: <OUDSM_INGRESS_HOST>
        http:
          paths:
          - backend:
              service:
                name: oudsm-1
                port:
                  number: 7001
            path: /oudsm
            pathType: Prefix
  2. Create the Ingress service using the command:
    kubectl create -f /workdir/OUDSM/oudsm_ingress.yaml
  3. Validate that the Ingress service has been created correctly by using the following command:
    kubectl get ingress -n oudsmns
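
    You can also send a test request through the ingress controller (this example assumes the ingress controller's HTTP port is 30777, as used elsewhere in this guide):
    curl -I -H "Host: oudsm.example.com" http://k8worker1.example.com:30777/oudsm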

Configuring Oracle HTTP Server for Oracle Unified Directory Services Manager

It is not necessary to incorporate OUDSM into the Oracle HTTP Server configuration. You can access OUDSM directly by using the Ingress or NodePort services. However, if you want to incorporate OUDSM into the Oracle HTTP Server configuration, perform the steps described in this section.

After you have configured your Oracle HTTP server as described in Installing and Configuring Oracle HTTP Server, you can configure the Oracle HTTP Server to route requests to the Oracle Unified Directory Services Manager.

To configure the Oracle HTTP Server:

  1. Add the following entries to the iadadmin_vh.conf or igdadmin_vh.conf files located at WEB_DOMAIN_HOME/config/fmwconfig/components/OHS/ohs1/moduleconf/:
    <Location /oudsm>
        WLSRequest ON
        DynamicServerList OFF
        WebLogicCluster k8worker1.example.com:30901,k8worker2.example.com:30901
    </Location>

    Note:

    • There are separate directories for configuration and runtime instance files. The runtime files under the .../OHS/instances/ohsn/* folder should not be edited directly. Edit only the .../OHS/ohsn/* configuration files.
    • iadadmin_vh.conf and igdadmin_vh.conf will only be available after you have configured Oracle Access Manager or Oracle Identity Governance.
    • If you are using an Ingress controller, the port should be the HTTP port assigned to the Ingress Controller. For example: 30777.
    • If you are using an Ingress controller, ensure that you add the directive into the OHS virtual host file that corresponds to the ingress host name.
  2. Copy the iadadmin_vh.conf or igdadmin_vh.conf file to the following configuration directory of the second Oracle HTTP Server instance (ohs2):
    WEB_DOMAIN_HOME/config/fmwconfig/components/OHS/ohs2/moduleconf/
  3. Restart the Oracle HTTP server instances on WEBHOST1 and WEBHOST2.
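
    For example, for standalone Oracle HTTP Server domains, you can restart each instance with the restartComponent.sh script (this assumes instances named ohs1 and ohs2):
    WEB_DOMAIN_HOME/bin/restartComponent.sh ohs1   # on WEBHOST1
    WEB_DOMAIN_HOME/bin/restartComponent.sh ohs2   # on WEBHOST2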

Centralized Log File Monitoring Using Elasticsearch and Kibana

If you are using Elasticsearch and Kibana, you can configure a Logstash pod to send the log files to the centralized Elasticsearch/Kibana console. Before you configure the Logstash pod, ensure that you have access to a centralized Elasticsearch deployment and that you know the following:
  • The OUDSM persistent volume, so that it can be mounted by the Logstash pod to hunt for log files.
  • The location of the log files in the persistent volumes.
  • The location of the centralized Elasticsearch deployment.

To configure the Logstash pod, perform the following steps. These steps assume that you have Elasticsearch running inside the Kubernetes cluster, in a namespace called elkns.

Creating a Secret for Elasticsearch

Logstash requires credentials to connect to the Elasticsearch deployment. These credentials are stored in Kubernetes as a secret.

If your Elasticsearch uses an API key for authentication, then use the following command:
kubectl create secret generic elasticsearch-pw-elastic -n <OUDSMNS> --from-literal password=<ELK_APIKEY>
For example:
kubectl create secret generic elasticsearch-pw-elastic -n oudsmns --from-literal password=afshfashfkahf5f
If Elasticsearch uses a user name and password for authentication, then use the following command:
kubectl create secret generic elasticsearch-pw-elastic -n <OUDSMNS> --from-literal password=<ELK_PWD>
For example:
kubectl create secret generic elasticsearch-pw-elastic -n oudsmns --from-literal password=mypassword
You can find the Elasticsearch password using the following command:
kubectl get secret elasticsearch-es-elastic-user -n <ELKNS> -o go-template='{{.data.elastic | base64decode}}'

Creating a Configuration Map for ELK Certificate

If you have configured a production-ready Elasticsearch deployment, you will have configured SSL. Logstash must trust the Elasticsearch certificate to be able to communicate with it. To enable this trust, you create a configuration map containing the Elasticsearch certificate.

You should have already saved the Elasticsearch self-signed certificate. See Copying the Elasticsearch Certificate. If you have a production certificate, you can use that instead.

To create the configuration map from the certificate, run the following command:

kubectl create configmap elk-cert --from-file=<WORKDIR>/ELK/elk.crt -n <OUDSMNS>
For example:
kubectl create configmap elk-cert --from-file=/workdir/ELK/elk.crt -n oudsmns
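
You can confirm that the certificate is stored in the configuration map by using the following command:
kubectl describe configmap elk-cert -n oudsmns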

Configuring Log File Monitoring for OUDSM

Complete the following steps to configure log file monitoring:

Creating a Configuration Map for Logstash

Logstash looks for log files in the OUDSM installations and sends them to the centralized Elasticsearch. The configuration map is used to instruct Logstash where the log files reside and where to send them.

  1. Create a file called <WORKDIR>/OUDSM/logstash_cm.yaml with the following contents:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: oudsm-logstash-configmap
      namespace: <OUDSMNS>
    data:
      logstash.yaml: |
        #http.host: "0.0.0.0"
      logstash-config.conf: |
        input {
          file {
            path => "/u01/oracle/user_projects/domains/oudsmdomain-1/servers/AdminServer/logs/*.log"
            type => "setup-logs"
            start_position => "beginning"
            sincedb_path => "/dev/null"
          }
        }
        filter {
          if [type] == "setup-logs" {
            grok {
              match => [ "message", "<%{DATA:log_timestamp}> <%{WORD:log_level}> <%{WORD:thread}> <%{HOSTNAME:hostname}> <%{HOSTNAME:hostserver}> %{GREEDYDATA:message}" ]
            }
          }
          if "_grokparsefailure" in [tags] {
            mutate {
              remove_tag => [ "_grokparsefailure" ]
            }
          }
        }
        output {
          elasticsearch {
            hosts => ["<ELK_HOST>"]
            cacert => '/usr/share/logstash/config/certs/elk.crt'
            user => "<ELK_USER>"
            password => "<ELK_USER_PWD>"
            index => "oudsmlogs-000001"
            ssl => true
            ssl_certificate_verification => false
          }
        }
  2. Save the file.
  3. Create the configuration map using the following command:
    kubectl create -f <WORKDIR>/OUDSM/logstash_cm.yaml
    For example:
    kubectl create -f /workdir/OUDSM/logstash_cm.yaml
  4. Validate that the configuration map has been created by using the following command:
    kubectl get cm -n <OUDSMNS>

    You should see oudsm-logstash-configmap in the list of configuration maps.

Creating a Logstash Deployment

After you create the configuration map, you can create the Logstash deployment. This deployment resides in the OUDSM namespace.
  1. Create a file called <WORKDIR>/OUDSM/logstash.yaml with the following contents:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: oudsm-logstash
      namespace: <OUDSMNS>
    spec:
      selector:
        matchLabels:
          k8s-app: logstash
      template: # create pods using pod definition in this template
        metadata:
          labels:
            k8s-app: logstash
        spec:
          imagePullSecrets:
          - name: dockercred
          containers:
          - command:
            - logstash
            image: logstash:<ELK_VER>
            imagePullPolicy: IfNotPresent
            name: oudsm-logstash
            env:
            - name: ELASTICSEARCH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: elasticsearch-pw-elastic
                  key: password
            ports:
            - containerPort: 5044
              name: logstash
            volumeMounts:
            - mountPath: /u01/oracle/user_projects
              name: oudsm-storage-volume
            - name: shared-logs
              mountPath: /shared-logs
            - mountPath: /usr/share/logstash/pipeline/
              name: oudsm-logstash-pipeline
            - mountPath: /usr/share/logstash/config/certs
              name: elk-cert
          volumes:
          - configMap:
              defaultMode: 420
              items:
              - key: logstash-config.conf
                path: logstash-config.conf
              name: oudsm-logstash-configmap
            name: oudsm-logstash-pipeline
          - configMap:
              defaultMode: 420
              items:
              - key: elk.crt
                path: elk.crt
              name: elk-cert
          - name: oudsm-storage-volume
            persistentVolumeClaim:
              claimName: oudsm-pvc
          - name: shared-logs
            emptyDir: {}

    Note:

    If you are using your own registry, include the registry name in the image name. If you have created a regcred secret for your registry, replace the imagePullSecrets name (dockercred) with the name of the secret you created. For example: regcred.
  2. Save the file.
  3. Create the Logstash deployment by using the following command:
    kubectl create -f <WORKDIR>/OUDSM/logstash.yaml
    For example:
    kubectl create -f /workdir/OUDSM/logstash.yaml
  4. Verify that a pod with a name starting with oudsm-logstash has been created by using the following command:
    kubectl get pod -n oudsmns

    Your logs will now be available in the Kibana console.
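
    If the logs do not appear in Kibana, you can inspect the Logstash output for errors by using the following command:
    kubectl logs -n oudsmns deployment/oudsm-logstash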