Deploying Oracle Unified Directory on Oracle Cloud Infrastructure

Before You Begin

This tutorial shows you how to deploy the Oracle Unified Directory (OUD) 12.2.1.4.0 application on Oracle Cloud Infrastructure.

Background

The Oracle Unified Directory application is available on the Oracle Cloud Infrastructure Marketplace and allows you to quickly deploy an instance of Oracle Unified Directory. It is intended for testing and development only.

OUD and Oracle Unified Directory Services Manager (OUDSM) are deployed on OCI in a Kubernetes (K8S) cluster managed by Oracle Container Engine for Kubernetes (OKE).

The Oracle Unified Directory application provided on OCI Marketplace uses a Helm chart to deploy the following Kubernetes objects:

  • Service Account
  • Secret
  • Persistent Volume and Persistent Volume Claim
  • Pod(s)/Container(s) for OUD Instances: 3 OUD pods and 1 OUDSM pod are deployed
  • Services for interfaces exposed through OUD Instances
  • Ingress configuration
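
Once the stack is provisioned, the Helm releases that create these objects can be listed from the bastion host. A minimal sketch, assuming the example OUD namespace (oudns) used later in this tutorial:

    $ helm list --namespace oudns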

What Do You Need?

  • An Oracle Cloud Infrastructure account.
  • A basic understanding of OCI.

OCI Prerequisites

Before deploying OUD from the OCI Marketplace, the following OCI prerequisites must be met:

  1. OUD requires two (2) or more compute nodes for the Node Pool and three (3) compute nodes for the Bastion in the respective Availability Domain. To check that you have enough compute nodes available:
    1. Access the OCI console and from the Navigation Menu navigate to Governance > Limits, Quota and Usage.
    2. From the SERVICE drop down menu select Compute.
    3. From the SCOPE menu select the appropriate Availability Domain.
    4. From the COMPARTMENT menu select the compartment you wish to deploy to.
    5. Check the Available column and make sure you have sufficient computes available for the Node Shape you wish to deploy.
    For more information on the compute shapes available refer to Compute Shapes.
  2. OUD requires two (2) Virtual Cloud Network (VCN) resources to be available:
    1. From the SERVICE drop down menu select Virtual Cloud Network.
    2. From the SCOPE menu select your tenancy.
    3. From the RESOURCE menu select Virtual Cloud Networks Count.
    4. Check the Available column and make sure at least two (2) VCNs are available.
  3. OUD requires one 100Mbps Load Balancer resource to be available:
    1. From the SERVICE drop down menu select LbaaS.
    2. From the SCOPE menu select your tenancy.
    3. From the RESOURCE menu select 100Mbps Load Balancer Count.
    4. Check the Available column and make sure at least one (1) Load Balancer is available.
  4. OUD requires one (1) OKE Cluster resource to be available.
    1. From the SERVICE drop down menu select Container Engine.
    2. From the SCOPE menu select your tenancy.
    3. From the RESOURCE menu select Cluster Count.
    4. Check the Available column and make sure one (1) or more are available.
  5. OUD requires one (1) File System resource and one (1) Mount Target resource to be available:
    1. From the SERVICE drop down menu select File Storage.
    2. From the SCOPE menu select your Availability Domain.
    3. Check the Available column and make sure at least one (1) resource is available for both File System Count and Mount Target Count.

    For more information on resources see Compartment Quotas.
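
    If you prefer the command line, the same availability checks can be made with the OCI CLI instead of the console. A minimal sketch, assuming the CLI is configured for your tenancy; the limit name shown (vm-standard-e2-2-count) is an example and varies with the shape you plan to use:

    $ oci limits resource-availability get \
        --compartment-id <compartment-ocid> \
        --service-name compute \
        --limit-name vm-standard-e2-2-count \
        --availability-domain <availability-domain-name>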

Provision OUD on OCI Marketplace

  1. Access the OCI Marketplace.
  2. Search for the listing Oracle Unified Directory. Select the Oracle Unified Directory application.
  3. In the Oracle Unified Directory listing, check the Version Details show Version: 12.2.1.4.0.
  4. Launch the installation into your tenancy by selecting Get App.
  5. Select the OCI region and sign-in to your tenancy.
  6. Select the compartment where the OUD Instance will be deployed. Note: Do not select the default (root) compartment.
  7. Click Launch Stack.
  8. In the Stack Information screen, edit the Name as appropriate and add a Description if required and click Next.
  9. In the Configure Variables screen, enter the variables as described below:
    REGION: Select the Region from the drop down list to identify where all the resources need to be created.
    NODE_POOL_NAME: Name for the node pool.
    NODE_POOL_NODE_SHAPE: Select the shape for the node pool computes: for example VM.Standard.E2.2
    NUMBER_OF_NODES: Select the number of nodes to be provisioned.
    BASTION_SHAPE: Choose the shape for the bastion: for example VM.Standard2.1. As per the OCI Prerequisites section, make sure you check the availability of the compute resources in the Availability Domain chosen for the bastion node. Three (3) compute resources of the chosen VM Shape are required.
    AVAILABILITY_DOMAINS: The Availability Domain where OUD will be deployed.
    BASEDN: Provide the value of the Directory Base DN for the instance: for example dc=example,dc=com.
    ROOTUSERDN: Provide the value of the DN of the Root user for OUD.
    ROOTUSERPASSWORD: Provide the value of the root user bind password.
    OUD_ADDITIONAL_CONFIG_YAML: YAML formatted text for any additional OUD configuration required.
    ADMINUSER: Provide the value of the Oracle Unified Directory Services Manager (OUDSM) WebLogic username: for example weblogic.
    ADMINPASSWORD: Provide the value of the OUDSM WebLogic user password.
    OUD_NAMESPACE: Provide the value of the namespace to be used for OUD and OUDSM: for example oudns.
    INGRESS_NAMESPACE: Provide the value of the namespace to be used for Ingress: for example oudingressns.
    INGRESS_NGINX_ADDITIONAL_CONFIG_YAML: YAML formatted text for any additional Ingress NGINX configuration required.
    Show Advanced Options: Select the checkbox to show the advanced options. If you want to auto-generate the SSH keys, leave the Auto-generate public ssh checkbox selected; otherwise uncheck it to use your own SSH key pair.
    SSH_PRIVATE_KEY_PATH: Update this field only if Auto-generate public ssh is unchecked. Enter the key in base64 format:
    -----BEGIN OPENSSH PRIVATE KEY-----
    b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABA
    AACFwAAAAdzc2gtcnNhAAAAAwEAAQAAAgEAvGcnE/MsJP7rwZG6m
    etc..
    JkmR/dwfVoEZjhXJrhGIgtaHMKpCyFSeN6wZ1gkDmKiUMU54IJ4Em
    -----END OPENSSH PRIVATE KEY-----
    SSH PUBLIC KEY: Update this field only if Auto-generate public ssh is unchecked. Select Choose SSH Key Files to either drop the public key file or upload it from the local machine, or select Paste SSH Keys to paste in the key. Enter the key in the following format:
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAA..etc..

    Once the variables are updated as per above, click Next.
  10. In the Review screen, verify the configuration variables and select Create. This triggers a job that is displayed in the OCI console. The job will show as IN PROGRESS, and the Logs section at the bottom of the screen displays details of the deployment progress. The job takes approximately 15 minutes to complete. Once the job completes successfully, its status will show SUCCEEDED.
  11. If the job fails with status FAILED, refer to the Troubleshooting section later in this tutorial.


Connecting via SSH to the Bastion Host

To connect via SSH to the bastion host:

  1. Access the OCI console, and from the Navigation Menu select Resource Manager > Stacks.
  2. Select the relevant compartment from the drop down list. A list of stacks in the compartment will be displayed. Select the stack just created e.g. OUD.
  3. In the Jobs section click the View State link for the job.
  4. In the View State window search for the string 'bastion_ip' using the browser search (CTRL+F). Note down the IP address, for example: "bastion_ip": "X.X.X.X".
  5. If you used your own SSH key pair, move to step 7. If you used the auto-generated SSH keys, in the same window search for 'private_key_path'. Copy the value between the quotes and save it to a file (for example priv.key):
    -----BEGIN RSA PRIVATE KEY-----\nXXXXXXXXXXXXYYYYYYYYYYYYYYPPPPPPPPPPP++YYYYY
    YYYYTTTTTTTTTTT\IIIIIIIIIhhhhjjuuuuuu/VVVuuuuuuuuuuu908888/zdS1UCdLr/Q2q9yd12
    etc.....................................................................
    +Qjl5\nxOByCUtZ8TbRbQMgEA/6G6wzVQ+mjCPy0n0ykxhWaHVj22ytfxKtApNLjwhtlZm
    \npD58jI0CgYEAv3GvEcfVPg92KmN8OH+hSrkLzz22bemNqioRvKi2mXBwfk0xu0kK\nvTdjVqwbD
    lCeAhISJxXdsT3J83pyeaGm6TrxBwUptJ8SzlZgFptpJffE1acAq8m\nXd7RoF2rBqZ5HHYYYYYkl
    kkk90zedjxPoJW6XxC3ljingatsgJzAixIMSc8=\n-----END RSA PRIVATE KEY-----\n
  6. Issue the following commands to remove the "\n" characters from the key and restrict its permissions:
    $ sed -i 's/\\n/\n/g' priv.key
    $ chmod 400 priv.key
    Note: Use GitBash to perform the above command if running on a Windows environment, or edit the file in a text editor and remove all "\n" characters.
    Note: On Windows do not run chmod 400 in GitBash. Change the properties of the file to read-only instead.

    Open the priv.key file and validate that the "\n" characters are removed.


  7. Connect to the bastion host using the priv.key file:
    $ ssh -i priv.key opc@<bastion_ip>
    The login should be successful.
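
    If the connection fails with a key-related error, you can first confirm that the cleaned private key parses correctly. ssh-keygen prints the derived public key if the file is valid (a quick sanity check, not part of the original procedure):

    $ ssh-keygen -y -f priv.key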

Validating the OUD Setup

In this section you confirm everything is running and that you can access the OUDSM console.

Validate Kubernetes Objects

  1. Run the following commands from the bastion host to check everything in the OUD stack is running correctly:
    $ kubectl get pod,service,pv,pvc,ingress,secret -o wide --namespace oudns
    If the cluster is up and running, the output should look similar to the following:
    NAME                                   READY   STATUS    RESTARTS   AGE   IP             NODE          NOMINATED NODE   READINESS GATES
    pod/oud-ds-rs-0                       1/1     Running   0          30m   10.244.0.130   10.0.10.227              
    pod/oud-ds-rs-1                       1/1     Running   0          30m   10.244.0.2     10.0.10.13               
    pod/oud-ds-rs-2                       1/1     Running   0          30m   10.244.0.131   10.0.10.227              
    pod/oudsm-1                           1/1     Running   0          29m   10.244.0.134   10.0.10.227              
    pod/oudsm-es-cluster-0                0/1     Pending   0          29m                               
    pod/oudsm-kibana-665c9699f4-48c2j     1/1     Running   0          29m   10.244.0.6     10.0.10.13               
    pod/oudsm-logstash-6954697777-mxbbm   1/1     Running   0          29m   10.244.0.133   10.0.10.227              
    
    NAME                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
    service/oud-ds-rs-0              ClusterIP   10.96.214.20            1444/TCP,1888/TCP,1898/TCP   30m   app.kubernetes.io/instance=oud-ds-rs,app.kubernetes.io/name=oud-ds-rs,oud/instance=oud-ds-rs-0
    service/oud-ds-rs-1              ClusterIP   10.96.47.223            1444/TCP,1888/TCP,1898/TCP   30m   app.kubernetes.io/instance=oud-ds-rs,app.kubernetes.io/name=oud-ds-rs,oud/instance=oud-ds-rs-1
    service/oud-ds-rs-2              ClusterIP   10.96.66.217            1444/TCP,1888/TCP,1898/TCP   30m   app.kubernetes.io/instance=oud-ds-rs,app.kubernetes.io/name=oud-ds-rs,oud/instance=oud-ds-rs-2
    service/oud-ds-rs-http-0         ClusterIP   10.96.157.186           1080/TCP,1081/TCP            30m   app.kubernetes.io/instance=oud-ds-rs,app.kubernetes.io/name=oud-ds-rs,oud/instance=oud-ds-rs-0
    service/oud-ds-rs-http-1         ClusterIP   10.96.238.48            1080/TCP,1081/TCP            30m   app.kubernetes.io/instance=oud-ds-rs,app.kubernetes.io/name=oud-ds-rs,oud/instance=oud-ds-rs-1
    service/oud-ds-rs-http-2         ClusterIP   10.96.243.158           1080/TCP,1081/TCP            30m   app.kubernetes.io/instance=oud-ds-rs,app.kubernetes.io/name=oud-ds-rs,oud/instance=oud-ds-rs-2
    service/oud-ds-rs-lbr-admin      ClusterIP   10.96.94.192            1888/TCP,1444/TCP            30m   app.kubernetes.io/instance=oud-ds-rs,app.kubernetes.io/name=oud-ds-rs
    service/oud-ds-rs-lbr-http       ClusterIP   10.96.255.108           1080/TCP,1081/TCP            30m   app.kubernetes.io/instance=oud-ds-rs,app.kubernetes.io/name=oud-ds-rs
    service/oud-ds-rs-lbr-ldap       ClusterIP   10.96.88.239            1389/TCP,1636/TCP            30m   app.kubernetes.io/instance=oud-ds-rs,app.kubernetes.io/name=oud-ds-rs
    service/oud-ds-rs-ldap-0         ClusterIP   10.96.92.122            1389/TCP,1636/TCP            30m   app.kubernetes.io/instance=oud-ds-rs,app.kubernetes.io/name=oud-ds-rs,oud/instance=oud-ds-rs-0
    service/oud-ds-rs-ldap-1         ClusterIP   10.96.242.83            1389/TCP,1636/TCP            30m   app.kubernetes.io/instance=oud-ds-rs,app.kubernetes.io/name=oud-ds-rs,oud/instance=oud-ds-rs-1
    service/oud-ds-rs-ldap-2         ClusterIP   10.96.61.180            1389/TCP,1636/TCP            30m   app.kubernetes.io/instance=oud-ds-rs,app.kubernetes.io/name=oud-ds-rs,oud/instance=oud-ds-rs-2
    service/oudsm-1                  ClusterIP   10.96.34.40             7001/TCP,7002/TCP            29m   app.kubernetes.io/instance=oudsm,app.kubernetes.io/name=oudsm,oudsm/instance=oudsm-1
    service/oudsm-elasticsearch      ClusterIP   None                    9200/TCP,9300/TCP            29m   app=oudsm-elasticsearch
    service/oudsm-kibana             NodePort    10.96.113.251           5601:31199/TCP               29m   app=kibana
    service/oudsm-lbr                ClusterIP   10.96.9.176             7001/TCP,7002/TCP            29m   app.kubernetes.io/instance=oudsm,app.kubernetes.io/name=oudsm
    service/oudsm-logstash-service   NodePort    10.96.29.204            9600:31256/TCP               29m   app=logstash
    
    NAME                            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE   VOLUMEMODE
    persistentvolume/oud-ds-rs-pv   20Gi       RWX            Delete           Bound    oudns/oud-ds-rs-pvc   oci                     30m   Filesystem
    persistentvolume/oudsm-pv       20Gi       RWX            Delete           Bound    oudns/oudsm-pvc       oci                     29m   Filesystem
    
    NAME                                            STATUS    VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
    persistentvolumeclaim/data-oudsm-es-cluster-0   Pending                                            elk            29m   Filesystem
    persistentvolumeclaim/oud-ds-rs-pvc             Bound     oud-ds-rs-pv   20Gi       RWX            oci            30m   Filesystem
    persistentvolumeclaim/oudsm-pvc                 Bound     oudsm-pv       20Gi       RWX            oci            29m   Filesystem
    
    NAME                                               CLASS    HOSTS                                                               ADDRESS           PORTS     AGE
    ingress.extensions/oud-ds-rs-admin-ingress-nginx      oud-ds-rs-admin-0,oud-ds-rs-admin-1,oud-ds-rs-admin-2 + 2 more...   150.136.133.207   80, 443   30m
    ingress.extensions/oud-ds-rs-http-ingress-nginx       oud-ds-rs-http-0,oud-ds-rs-http-1,oud-ds-rs-http-2 + 3 more...      150.136.133.207   80, 443   30m
    ingress.extensions/oudsm-ingress-nginx                oudsm-1,oudsm                                                       150.136.133.207   80, 443   29m
    
    NAME                                     TYPE                                  DATA   AGE
    secret/default-token-2w99m               kubernetes.io/service-account-token   3      31m
    secret/oud-ds-rs-creds                   opaque                                8      30m
    secret/oud-ds-rs-tls-cert                kubernetes.io/tls                     2      30m
    secret/oud-ds-rs-token-5s4qh             kubernetes.io/service-account-token   3      30m
    secret/oudsm-creds                       opaque                                2      29m
    secret/oudsm-tls-cert                    kubernetes.io/tls                     2      29m
    secret/oudsm-token-j9glg                 kubernetes.io/service-account-token   3      29m
    secret/sh.helm.release.v1.oud-ds-rs.v1   helm.sh/release.v1                    1      30m
    secret/sh.helm.release.v1.oudsm.v1       helm.sh/release.v1                    1      29m
    Note: It may take a few minutes for all the pods to appear as above and reach the READY 1/1 state. By default, OUDSM and three replicated OUD servers (oud-ds-rs-0, oud-ds-rs-1, and oud-ds-rs-2) are started.

    The following table describes the main Kubernetes objects displayed in the output above:

     Pod: pod/oud-ds-rs-0
       Pod/Container for the base OUD instance. This instance is populated first with base configuration (for example, a number of sample entries).
     Pod: pod/oud-ds-rs-1, pod/oud-ds-rs-2
       Pods/Containers for the replicated OUD instances. Each has replication enabled against the base OUD instance pod/oud-ds-rs-0.
     Pod: pod/oudsm-1
       Pod/Container for OUDSM.
     Service: service/oud-ds-rs-0, service/oud-ds-rs-1, service/oud-ds-rs-2
       Services for the LDAPS Admin, REST Admin and Replication interfaces of the individual instances.
     Service: service/oud-ds-rs-http-0, service/oud-ds-rs-http-1, service/oud-ds-rs-http-2
       Services for the HTTP and HTTPS interfaces of the individual instances.
     Service: service/oud-ds-rs-ldap-0, service/oud-ds-rs-ldap-1, service/oud-ds-rs-ldap-2
       Services for the LDAP and LDAPS interfaces of the individual instances.
     Service: service/oud-ds-rs-lbr-admin
       Service for the LDAPS Admin, REST Admin and Replication interfaces of all OUD instances via the LoadBalancer Service.
     Service: service/oud-ds-rs-lbr-http
       Service for the HTTP and HTTPS interfaces of all OUD instances via the LoadBalancer Service.
     Service: service/oud-ds-rs-lbr-ldap
       Service for the LDAP and LDAPS interfaces of all OUD instances via the LoadBalancer Service.
     Service: service/oudsm-1
       Service for the OUDSM interface of an individual instance.
     Service: service/oudsm-lbr
       Service for the OUDSM interface via the LoadBalancer Service.
     Ingress: ingress.extensions/oud-ds-rs-admin-ingress-nginx
       Ingress rules for the HTTP Admin interfaces.
     Ingress: ingress.extensions/oud-ds-rs-http-ingress-nginx
       Ingress rules for the HTTP (Data/REST) interfaces.
     Ingress: ingress.extensions/oudsm-ingress-nginx
       Ingress rules for the OUDSM interfaces.


    To check the Ingress load balancer run the following:

    $ kubectl get pod,service,pv,pvc,ingress,secret -o wide --namespace ingressns
    NAME                                            READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
    pod/ingress-nginx-controller-7f8ccb8fc9-6fjdj   1/1     Running   0          50m   10.244.0.3   10.0.10.13              
    
    NAME                               TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                                                                                                                                                                                                          AGE   SELECTOR
    service/ingress-nginx-controller   LoadBalancer   10.96.120.247   150.136.133.207   80:30080/TCP,443:30443/TCP,1389:31389/TCP,1444:31444/TCP,1636:31636/TCP,3890:30890/TCP,3891:30891/TCP,3892:30892/TCP,4440:30440/TCP,4441:30441/TCP,4442:30442/TCP,6360:30360/TCP,6361:30361/TCP,6362:30362/TCP   50m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
    
    NAME                            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE   VOLUMEMODE
    persistentvolume/oud-ds-rs-pv   20Gi       RWX            Delete           Bound    oudns/oud-ds-rs-pvc   oci                     50m   Filesystem
    persistentvolume/oudsm-pv       20Gi       RWX            Delete           Bound    oudns/oudsm-pvc       oci                     49m   Filesystem
    
    NAME                                         TYPE                                  DATA   AGE
    secret/default-token-szdzr                   kubernetes.io/service-account-token   3      51m
    secret/ingress-nginx-token-kxz5c             kubernetes.io/service-account-token   3      50m
    secret/sh.helm.release.v1.ingress-nginx.v1   helm.sh/release.v1                    1      50m

    Make a note of the EXTERNAL-IP of the ingress-nginx-controller service. This public IP address is used to connect to Oracle Unified Directory and its associated consoles. In the example above, the external IP is 150.136.133.207.


    Note: HTTP/HTTPS communication is exposed via the LoadBalancer Service through ports 80 and 443 only.

    Note: You can also find the public IP address of the load balancer by navigating to Networking > Load Balancers in the OCI console.

    Ingress does not support non-HTTP traffic by default. To allow access via LDAP/LDAPS protocols, the NGINX configuration (which supplies the Ingress implementation) has been updated in the Oracle Unified Directory application to support additional ports for LDAP and LDAPS communication. The mapping of ports to Kubernetes services is shown in the table below:

    External Port   Service                       Type
    1389            service/oud-ds-rs-lbr-ldap    ldap
    1636            service/oud-ds-rs-lbr-ldap    ldaps
    1444            service/oud-ds-rs-lbr-admin   adminldaps
    3890            service/oud-ds-rs-ldap-0      ldap
    6360            service/oud-ds-rs-ldap-0      ldaps
    3891            service/oud-ds-rs-ldap-1      ldap
    6361            service/oud-ds-rs-ldap-1      ldaps
    3892            service/oud-ds-rs-ldap-2      ldap
    6362            service/oud-ds-rs-ldap-2      ldaps
    4440            service/oud-ds-rs-0           adminldaps
    4441            service/oud-ds-rs-1           adminldaps
    4442            service/oud-ds-rs-2           adminldaps
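
    As a quick connectivity check you can extract the load balancer IP directly from the Ingress controller service and probe one of the non-HTTP ports listed above, for example the LDAPS port 1636. A minimal sketch, assuming the ingress namespace shown in the output above (ingressns):

    $ kubectl get service ingress-nginx-controller -n ingressns \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    $ openssl s_client -connect <external-ip>:1636 </dev/null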

Validate Consoles

  1. Launch a browser, access the following URLs, and log in with the associated usernames and passwords:
    WebLogic Console
      URL: http://<external-ip>/console
      Login Details: weblogic / <password>
    Oracle Unified Directory Services Manager
      URL: http://<external-ip>/oudsm
      Login Details: cn=Directory Manager / <password>

    Note: The WebLogic Console should only be used to monitor the OUDSM domain in the OUD install. To control the AdminServer (start/stop) you must use kubectl commands.
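
    If a console does not load in the browser, a quick curl from any machine that can reach the external IP confirms whether the ingress route is responding (a status such as 200 or 302 indicates the route is working). A minimal sketch:

    $ curl -k -s -o /dev/null -w "%{http_code}\n" http://<external-ip>/console
    $ curl -k -s -o /dev/null -w "%{http_code}\n" http://<external-ip>/oudsm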

Access OUD Using REST and LDAP

Access OUD Using REST

OUD provides support for REST Application Programming Interfaces (APIs).

REST APIs are provided for the following:

  • OUD Administration: /rest/v1/admin
  • OUD Data Management: /rest/v1/directory
  • OUD SCIM Data Management: /iam/directory/oud/scim/v1

Details of how to access the OUD REST APIs via Postman can be found in the tutorial Using Oracle Unified Directory REST APIs with Postman. Use this tutorial to install and configure Postman to access the OUD on OCI.
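
Before configuring Postman, you can also exercise the REST endpoints directly with curl from the bastion host. A hedged sketch only: the DN-based lookup below is an illustrative example, and the exact resource paths, query parameters, and authentication options are covered in the Postman tutorial:

    $ curl -k -u "cn=Directory Manager:<password>" \
        "https://<external-ip>/rest/v1/directory/dc=example,dc=com"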

  1. To update the environment variables in Postman, enter the following values for Initial Value and Current Value, then click Update and then X:
    [Image: Modified Postman environment variables (postmanenv.jpg)]

    Note: HOST and ADMINHOST should use the IP address and port values of the Ingress load balancer.

    Note: To prevent SSL certificate verification errors, navigate to File > Settings, and in the General tab set SSL certificate verification to OFF.

  2. Verify access to OUD by following the steps in the tutorial Using Oracle Unified Directory REST APIs with Postman to access the REST APIs for OUD Administration, OUD Data Management and OUD SCIM Data Management.

Access OUD Using LDAP/S

To access OUD you will need an OUD client installed with access to the ldapsearch and dsconfig commands.

  1. Return data from the OUD replicated instances with ldapsearch, using the load balancer external IP and the LDAPS external port:
    ./ldapsearch \
    --hostname <LoadBalancer Service External IP> \
    --port 1636 \
    --useSSL \
    --trustAll \
    -D "cn=Directory Manager" \
    -w Oracle123 \
    -b "" \
    -s sub "(objectclass=*)" dn
    You should see output similar to the following:
    dn: dc=example,dc=com

    Note: The result of the ldapsearch is returned from any available service or pod exposing LDAPS to the service/oud-ds-rs-lbr-ldap service. The mapping is port 1636 -> service/oud-ds-rs-lbr-ldap -> any available OUD pod (service/oud-ds-rs-ldap-0, service/oud-ds-rs-ldap-1, or service/oud-ds-rs-ldap-2).


  2. Access Replication Server details for a specific pod using dsconfig over LDAP:
    ./dsconfig \
    --hostname <LoadBalancer Service External IP> \
    --port 4442 \
    --portProtocol LDAP \
    -D "cn=Directory Manager" \
    -j pwd.txt \
    --trustAll \
    --no-prompt \
    list-replication-server \
    --provider-name Multimaster\ Synchronization

    You should see output similar to the following:

    Replication Server : Type    : replication-server-id : replication-port : replication-server
    -------------------:---------:-----------------------:------------------:-----------------------------------------------------
    replication-server : generic : 25910                 : 1898             : oud-ds-rs-0:1898, oud-ds-rs-1:1898, oud-ds-rs-2:1898

    Note: By referring to the port mapping table you can see that external port 4442 maps to the service/oud-ds-rs-2 service, which is in turn mapped to the adminldaps port of the pod/oud-ds-rs-2 pod.
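
    The same port mapping table can also be used with ldapsearch to target an individual OUD instance over plain LDAP rather than the load-balanced service; for example, external port 3890 maps to service/oud-ds-rs-ldap-0 (the oud-ds-rs-0 instance). A minimal sketch:

    ./ldapsearch \
    --hostname <LoadBalancer Service External IP> \
    --port 3890 \
    -D "cn=Directory Manager" \
    -w <password> \
    -b "" \
    -s sub "(objectclass=*)" dn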

Access OUD Using HTTP/S

  1. Access Replication Server via the load balancer using dsconfig over HTTPS:
    ./dsconfig \
    --hostname <LoadBalancer Service External IP> \
    --port 443 \
    --portProtocol HTTP \
    -D "cn=Directory Manager" \
    -j pwd.txt \
    --trustAll \
    --no-prompt \
    list-replication-server \
    --provider-name Multimaster\ Synchronization

    You should see output similar to the following if you run the command a number of times:

    Replication Server : Type    : replication-server-id : replication-port : replication-server
    -------------------:---------:-----------------------:------------------:-----------------------------------------------------
    replication-server : generic : 20360                 : 1898             : oud-ds-rs-0:1898, oud-ds-rs-1:1898, oud-ds-rs-2:1898
    [opc@bastion bin]$ ./dsconfig --hostname 150.136.22.61 --port 443 --portProtocol HTTP -D "cn=Directory Manager" -j pwd.txt --trustAll --no-prompt list-replication-server --provider-name Multimaster\ Synchronization
    Replication Server : Type    : replication-server-id : replication-port : replication-server
    -------------------:---------:-----------------------:------------------:-----------------------------------------------------
    replication-server : generic : 18459                 : 1898             : oud-ds-rs-0:1898, oud-ds-rs-1:1898, oud-ds-rs-2:1898
    [opc@bastion bin]$ ./dsconfig --hostname 150.136.22.61 --port 443 --portProtocol HTTP -D "cn=Directory Manager" -j pwd.txt --trustAll --no-prompt list-replication-server --provider-name Multimaster\ Synchronization
    Replication Server : Type    : replication-server-id : replication-port : replication-server
    -------------------:---------:-----------------------:------------------:-----------------------------------------------------
    replication-server : generic : 25910                 : 1898             : oud-ds-rs-0:1898, oud-ds-rs-1:1898, oud-ds-rs-2:1898
    [opc@bastion bin]$ ./dsconfig --hostname 150.136.22.61 --port 443 --portProtocol HTTP -D "cn=Directory Manager" -j pwd.txt --trustAll --no-prompt list-replication-server --provider-name Multimaster\ Synchronization
    Replication Server : Type    : replication-server-id : replication-port : replication-server
    -------------------:---------:-----------------------:------------------:-----------------------------------------------------
    replication-server : generic : 20360                 : 1898             : oud-ds-rs-0:1898, oud-ds-rs-1:1898, oud-ds-rs-2:1898

    Notice how the replication-server-id changes as the LoadBalancer Service distributes requests across the OUD instances in the cluster.


Layout of Resources Created in OCI for the OUD Stack

This section provides details on how you can view the resources created in OCI for the OUD Stack.

  1. To see the list of resources created in OCI for the OUD stack, access the OCI console and from the Navigation Menu select Developer Services > Resource Manager > Stacks > Stack_Name > Stack Details > Job_Name > Job Details > Associated Resources.
  2. To see the Virtual Machines created while using the OUD stack, from the Navigation Menu select Compute > Instances.
  3. The following table shows the mapping of OCI resources to VCN and Subnets:

    Input VM Shape: NODE_POOL_NODE_SHAPE
      Resources: 2 Kubernetes worker nodes (VM)
      VCN: OudVcnForClusters
      Subnet: OudRegionalSubnetForNodePool

    Input VM Shape: BASTION_SHAPE
      Resources: 1 monitoringnode (VM), 1 okeoud-bastion (VM)
      VCN: okeoud-oke vcn
      Subnet: okeoud-bastion

      Resources: 1 mountnode (VM) (this is where the File System is mounted)
      VCN: OudVcnForClusters
      Subnet: OudlbRegionalSubNet

    Input VM Shape: N/A
      Resources: 1 Load Balancer (Default 100 Mbps)
      VCN: OudVcnForClusters
      Subnet: OudlbRegionalSubNet

    The Resource, VCN, and Subnet names assume the default values were not changed during the OUD provisioning configuration.


    The following diagram outlines the OUD OCI resources:

    [Image: LayoutOUD.jpg, layout of the OCI resources created for the OUD stack]
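
    The same resources can also be listed with the OCI CLI instead of the console. A minimal sketch, assuming the CLI is configured for your tenancy:

    $ oci compute instance list --compartment-id <compartment-ocid> --output table
    $ oci lb load-balancer list --compartment-id <compartment-ocid> --output table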


Troubleshooting

To troubleshoot a failed OUD deployment:

  1. Revisit the OCI Prerequisites section and make sure all steps have been followed and enough resources are available.
  2. Delete the failed deployment by following the section Deleting the OUD Stack.
  3. Retry the deployment by following the section Provision OUD on OCI Marketplace.
  4. If further debugging is required, check the following:

    Note: Some of the steps below may not be possible, depending on the stage at which the deployment failed.

    1. In the Job that failed, view the Logs section at the bottom of the screen.
    2. Connect to the bastion host as per the section Connecting via SSH to the Bastion Host and view logs for an individual pod.
      Run the following on the bastion host to list the pods:
      kubectl get pods -n oudns
      Run the following command to view the log for the desired pod:
      kubectl logs <pod> -n oudns
    3. To view the OUD/OUDSM logs connect to any of the pods in the oudns namespace by running the following on the bastion host:
      kubectl -n oudns exec -it <pod> -- bash
      For example:
      kubectl -n oudns exec -it oud-ds-rs-0 -- bash

      This will take you into a bash shell inside the oud-ds-rs-0 pod. From here you can navigate to the logs directory and look at the OUD and OUDSM logs. Because the pods share storage, it does not matter which pod you shell into; each has access to the logs.

      For OUD logs, navigate to:

      [oracle@<pod>]$ cd /u01/oracle/user_projects/<pod>/OUD/logs
      For example:
      [oracle@oud-ds-rs-0]$ cd /u01/oracle/user_projects/oud-ds-rs-0/OUD/logs
      For OUDSM logs, navigate to:
      [oracle@<pod>]$ cd /u01/oracle/user_projects/domains/oudsmdomain-1/servers/AdminServer/logs
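
      If a pod is stuck in a Pending or CrashLoopBackOff state, the Kubernetes events usually show the underlying cause (for example, an unbound Persistent Volume Claim). A quick sketch run from the bastion host:

      kubectl describe pod <pod> -n oudns
      kubectl get events -n oudns --sort-by=.metadata.creationTimestamp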

Deleting the OUD Stack

If you need to delete the OUD stack, or if you need to clean up from a failed deployment, perform the following operations:

  1. Access the OCI console and from the Navigation Menu select Developer Services > Resource Manager > Stacks. Click the OUD stack to delete.
  2. In the OUD stack page select Destroy. A Destroy job will be created and run.
  3. Once the job has completed successfully, select More Actions > Delete Stack.
  4. Ensure that clusters are deleted by navigating to Developer Services > Kubernetes Clusters (OKE). If not, manually delete the clusters.
  5. When the clusters are terminated, the instances should also be automatically terminated. Navigate to Compute > Instances and check the instances are deleted. If not deleted, terminate those manually.
  6. Check the load balancer by navigating to Networking > Load Balancers, and the file storage by navigating to Storage > File Storage. If resources still exist, manually terminate them.
  7. Check that the VCNs are destroyed by navigating to Networking > Virtual Cloud Networks. If not destroyed, manually terminate the VCNs. If there are problems terminating the VCNs, follow the Troubleshooting section Subnet or VCN Deletion.
  8. Check that the Identity Policies are removed by navigating to Identity & Security > Policies. If not removed, delete them manually.
  9. On the same page, click the Dynamic Groups link and check that the Dynamic Groups are removed. If not removed, delete them manually.
  10. When the above steps are completed, wait a few minutes to ensure all resources are cleaned up. Then check the limits to ensure those resources are now free by navigating to Governance > Limits, Quota and Usage.

    Note: If the deletion does not complete successfully, the following commands can be run to remove the OUD and OUDSM containers:
    $ helm delete ingress-nginx -n ingressns 
    $ helm delete oud-ds-rs oudsm -n oudns
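
    After the helm delete commands complete, and while the bastion host is still reachable, you can optionally confirm that nothing is left behind before destroying the stack. A minimal sketch, assuming the example namespaces used in this tutorial:

    $ kubectl get all -n oudns
    $ kubectl get all -n ingressns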

Want to Learn More?


Feedback

To provide feedback on this tutorial, please contact Identity Management User Assistance.