9 Integrating UIM

Typical usage of UIM involves the UIM application coordinating activities across multiple peer systems. Several systems interact with UIM for various purposes. This chapter examines the considerations involved in integrating UIM cloud native instances into a larger solution ecosystem.

This chapter describes the following topics and tasks:

  • Integrating with UIM Cloud Native
  • Configuring SAF
  • Applying the WebLogic Patch for External Systems
  • Configuring SAF on External Systems
  • Setting Up Secure Communication with SSL

Integrating with UIM Cloud Native

Functionally, the integration requirements of UIM do not change when UIM is running in a cloud native environment. All of the categories of integrations that are applicable to traditional UIM instances are applicable and must be supported for UIM cloud native.

Connectivity Between the Building Blocks

The following diagram illustrates the connectivity between the building blocks in a UIM cloud native environment using an example:

Figure 9-1 Integration Across Building Blocks in UIM Cloud Native Environment



Invoking the UIM cloud native Helm chart creates a new UIM instance. In the above illustration, the name of the instance is "quick" and the name of the project is "sr". The instance consists of a WebLogic cluster, with one Admin Server and three Managed Servers, and a Kubernetes Cluster Service.

The Cluster Service contains endpoints for both HTTP and T3 traffic. The instance creation script creates the UIM cloud native Ingress object. The Ingress object carries metadata that triggers the Traefik ingress controller, which is used here as a sample. Traefik responds by creating new front-ends with the configured "hostnames" for the cluster (quick.sr.uim.org and t3.quick.sr.uim.org in the illustration) and for the admin server (admin.quick.sr.uim.org), and links them to new back-end constructs. Each back-end routes to the members of the Cluster Service (MS1, MS2, and MS3 in the example) or to the Admin Server. The quick.sr.uim.org front-end is linked to the back-end pointing to the HTTP endpoint of each managed server, while the t3.quick.sr.uim.org front-end links to the back-end pointing to the T3 endpoint of each managed server.

The prior installation of Traefik has already exposed Traefik itself via a selected port number (30305 in the example) on each worker node.

Inbound HTTP Requests

A UIM instance is exposed outside of the Kubernetes cluster for HTTP access via an Ingress Controller and potentially a Load Balancer.

Because the Traefik port (30305) is common to all UIM cloud native instances in the cluster, Traefik must be able to distinguish between incoming messages headed for different instances. It does this on the basis of the "hostname" mentioned in the HTTP messages. This means that a client (User Client B in the illustration) must believe it is talking to the "host" quick.sr.uim.org when it sends HTTP messages to port 30305 on the access IP. This might be the Master node IP or the IP address of one of the worker nodes, depending on your cluster setup. The "DNS Resolver" provides this mapping.

In this mode of communication, there are concerns around resiliency and load distribution. For example, if the DNS Resolver always points to the IP address of Worker node 1 when asked to resolve quick.sr.uim.org, then that worker node takes all the inbound traffic for the instance. If the DNS Resolver is configured to respond to any *.sr.uim.org request with that IP, then that worker node takes all the inbound traffic for all the instances. While this latter configuration is desirable because it minimizes per-instance touches, it turns Worker node 1 into a bottleneck. If Worker node 1 were to fail, the DNS Resolver would have to be updated to point *.sr.uim.org to Worker node 2. This leads to an interruption of access and requires intervention. The recommended pattern to avoid these concerns is to populate the DNS Resolver with all the applicable IP addresses as resolution targets (in our example, the IPs of both Worker node 1 and Worker node 2), and have the Resolver return a random selection from that list.
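As an illustrative sketch, with assumed worker node IP addresses, the recommended multi-target resolution can be expressed as two DNS A records for the wildcard name, from which the resolver returns a random selection:

```
*.sr.uim.org.   IN  A  192.0.2.101   ; Worker node 1 (assumed IP)
*.sr.uim.org.   IN  A  192.0.2.102   ; Worker node 2 (assumed IP)
```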

An alternate mode of communication is to introduce a load balancer configured to balance incoming traffic across the Traefik ports on all the worker nodes. The DNS Resolver is still required, and the entry for *.sr.uim.org points to the load balancer. Your load balancer documentation describes how to achieve resiliency and load management. With this setup, a user (User Client A in our example) sends a message to quick.sr.uim.org, which actually resolves to the load balancer - for instance, http://quick.sr.uim.org:8080/Inventory/faces/login.jspx. Here, 8080 is the public port of the load balancer. The load balancer sends this to Traefik, which routes the message, based on the "hostname" it targets, to the HTTP channel of the UIM cloud native instance.

By adding the hostname resolution such that admin.quick.sr.uim.org also resolves to the Kubernetes cluster access IP (or Load Balancer IP), User Client B can access the WebLogic console via http://admin.quick.sr.uim.org/console and the credentials specified while setting up the "wlsadmin" secret for this instance.

Note:

Access to the WebLogic Admin console is provided for review and debugging use only. Do not use the console to change the system state or configuration. These are maintained independently in the WebLogic Operator, based on the specifications provided when the instance was created or last updated by the UIM cloud native toolkit. As a result, any such manual changes (whether using the console or using WLST or other such mechanisms) are liable to be overwritten without notice by the Operator. The only way to change state or configuration is through the tools and scripts provided in the toolkit.

Inbound JMS Requests

JMS messages use the T3 protocol. Since Ingress Controllers and Load Balancers do not understand T3 for routing purposes, UIM cloud native requires all incoming JMS traffic to be "T3 over HTTP". The messages are still HTTP, but contain a T3 message as payload. UIM cloud native requires clients to target the "t3 hostname" of the instance - t3.quick.sr.uim.org, in the example. This "t3 hostname" behaves identically to the regular "hostname" in terms of the DNS Resolver and the Load Balancer. Traefik, however, not only identifies the instance the message is meant for (quick.sr) but also detects that it targets the T3 channel of the instance.

The "T3 over HTTP" requirement applies to all inbound JMS messages - whether generated by direct or foreign JMS API calls or generated by SAF. The procedure in SAF QuickStart explains the setup required by the message producer or SAF agent to achieve this encapsulation. If SAF is used, the fact that T3 is riding over HTTP does not affect the semantics of JMS. All features, such as reliable delivery, priority, and TTL, continue to be respected by the system. See "Applying the WebLogic Patch for External Systems" for more information.
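For example, following the pattern used later in this chapter for the SSL case, an external SAF agent tunneling T3 over HTTP toward the instance in Figure 9-1 would use a plain HTTP URL that targets the t3 hostname and the Traefik port (the queue name is taken from the SAF sample):

```
http://t3.quick.sr.uim.org:30305/ResponseQueue
```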

A UIM instance can be configured for secure access, which includes exposing the T3 endpoint outside the Kubernetes cluster for HTTPS access. See "Configuring Secure Incoming Access with SSL" for details on enabling SSL.

Inbound JMS Requests Within the Same Kubernetes Cluster

There can be situations where UIM cloud native needs to be accessed from within the same Kubernetes cluster where it is deployed. For example, in a Service and Network Orchestration (SNO) solution, an upstream application (OSM) and a downstream application (UIM) could be deployed in the same Kubernetes cluster. For such requirements, there is no need for the request to be routed via an Ingress Controller or a load balancer, or resolved via a DNS Resolver.

UIM cloud native exposes a T3 channel exclusively for such connections and can be accessed via t3://project-instance-cluster-uimcluster.project.svc.cluster.local:31313.

This saves the various network hops typically involved in routing a request from an external client to UIM cloud native deployed in a Kubernetes cluster. The following diagram illustrates inbound JMS requests within the same Kubernetes cluster using an example. For the example, the URL is t3://sr-quick-cluster-uimcluster.sr.svc.cluster.local:31313.

Note:

The protocol is T3, as there is no need for wrapping in HTTP, and the port is different.

Figure 9-2 Inbound JMS Integration in a Kubernetes Cluster



Even if SSL is enabled for the domains, communication between them within the Kubernetes cluster is not secured, because the ingress is not involved. See "Setting Up Secure Communication with SSL" for further details.
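When a co-located instance is the target of outbound SAF messages, the cluster-local URL shown above goes into the t3Url field of the SAF configuration described later in this chapter under "Configuring SAF". A sketch, in which the system name and secret name are illustrative assumptions:

```yaml
safDestinationConfig:
  - name: uim_in_cluster              # assumed identifier for the co-located system
    t3Url: t3://sr-quick-cluster-uimcluster.sr.svc.cluster.local:31313
    secretName: uim_t3_credentials    # assumed secret with username and password fields
    destinations:
      - jndiPrefix: prefix_1
        queues:
          - queue:
              remoteJndi: remote_queue_1
```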

Outbound HTTP Requests

No specific action is required to ensure that HTTP messages from the UIM cloud native instance reach destinations outside the Kubernetes cluster.

When a domain inside a Kubernetes cluster sends REST API or Web Service requests over HTTP to an SSL-enabled domain outside the cluster, additional configuration is required. For instructions, see "Configuring Access to External SSL-Enabled Systems".

Outbound JMS Connectivity

JMS messages originating from the UIM cloud native instance, such as requests to peer systems, always end up on local queues. The UIM cloud native Helm chart allows for the specification of SAF connections to remote systems in order to get these messages to their destinations. Custom templates can be used to create SAF connections in UIM. This allows for a canonical expression of the SAF connectivity requirements, which each project fulfills by pointing to the appropriate upstream, downstream, or peer systems, emulators, and so on.

When a domain inside a Kubernetes cluster sends JMS messages to an SSL-enabled domain outside the cluster, see "Configuring Access to External SSL-Enabled Systems" for instructions on the required configuration.

Configuring SAF

UIM cloud native requires SAF to send messages to external systems through JMS. SAF is configured at the project specification level. The project specification can be used to define all the SAF connections that any UIM cloud native instance must make. Each of these SAF connections must be given a specific remote endpoint. See "Adding a Store-and-Forward-Agent and SAF Resources" for more information on configuring SAF templates.

Configuring the Project Specification

The project specification lists all the SAF connections, and the endpoint for each, under the safDestinationConfig element. The following sample shows a basic SAF specification that describes the need to interact with external_system_identifier through SAF. The project specification contains the T3 URL of the external system along with the name of a Kubernetes secret that provides the credentials required to interact with that system. The T3 URL can be specified using any of the standard mechanisms supported by WebLogic. The Kubernetes secret must contain the fields username and password, carrying credentials that have permission to enqueue JMS messages on the remote system. The sample specifies that the project accesses two queues on that remote system: remote_queue_1 and remote_queue_2. These queues can be addressed using the JNDI prefix prefix_1 on that system. Further, remote_queue_1 is also mapped locally as local_queue_1. The mapping depends on the addressing system coded into the UIM cartridge's external sender automation plugins. UIM cloud native supports both local names and remote names for SAF destinations.

If the external system is a UIM cloud native instance deployed in the same Kubernetes cluster, use the T3 URL as described in "Inbound JMS Requests Within the Same Kubernetes Cluster".

If SSL is enabled for the external system, use the T3 URL as described in "Configuring Access to External SSL-Enabled Systems".

safDestinationConfig:
  - name: external_system_identifier
    t3Url: t3_url
    secretName: secret_t3_user_pass
    destinations:
      - jndiPrefix: prefix_1
        queues:
         - queue:
            remoteJndi: remote_queue_1
            localJndi: local_queue_1
         - queue:
            remoteJndi: remote_queue_2

If the queues of an external system are spread across more than one JNDI prefix, the jndiPrefix element can be repeated as many times as necessary. In the following example, prefix_1 applies to remote_queue_1 and remote_queue_2, while prefix_2 applies to remote_queue_3.

The following sample shows SAF project specification with multiple JNDIs:

safDestinationConfig:
    - name: external_system_identifier
      t3Url: t3_url
      secretName: secret_t3_user_pass
      destinations:
        - jndiPrefix: prefix_1
          queues:
            - queue:
                remoteJndi: remote_queue_1
                localJndi: local_queue_1
            - queue:
                remoteJndi: remote_queue_2
        - jndiPrefix: prefix_2
          queues:
            - queue:
                remoteJndi: remote_queue_3

An external system may not use a JNDI prefix at all; this is configured by leaving the jndiPrefix value empty. However, at most one of the jndiPrefix entries in a destinations list can be empty, because the jndiPrefix values in the list must be unique. If there is more than one external system that the project's solution cartridges interact with via SAF, these can be named and listed as follows:

safDestinationConfig:
    - name: external_system_identifier_1
      t3Url: t3_url
      secretName: secret_t3_user_pass
      destinations:
        - jndiPrefix: prefix_1
          queues:
            - queue:
                remoteJndi: remote_queue_1
    - name: external_system_identifier_2
      t3Url: t3_url
      secretName: secret_t3_user_pass
      destinations:
        - jndiPrefix: prefix_2
          queues:
            - queue:
                remoteJndi: remote_queue_2
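
A destination that uses no JNDI prefix, as described above, is expressed with an empty jndiPrefix value, reusing the placeholders from the earlier samples:

```yaml
safDestinationConfig:
  - name: external_system_identifier
    t3Url: t3_url
    secretName: secret_t3_user_pass
    destinations:
      - jndiPrefix: ""
        queues:
          - queue:
              remoteJndi: remote_queue_1
```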

Note:

Using the provided configuration, UIM cloud native automatically computes names for some of the entities required to complete the SAF setup. You may see these entities when you log into the WebLogic Administration Console for troubleshooting purposes; do not modify them.

Configuring Domain Trust

For details about global trust, see "Enabling Global Trust" in Oracle Fusion Middleware Administering Security for Oracle WebLogic Server.

Because the shared password provides access to all domains that participate in the trust, strict password management is critical. Domain trust should be enabled when SAF is configured, as it is needed for inter-domain communication using distributed destinations. In a Kubernetes cluster, where pods are transient, a SAF sender may not know where it can forward messages unless domain trust is configured.

If trust is not configured when using SAF, you may experience unstable SAF behavior when your environment has pods that are growing, shrinking, or restarting.

To enable domain trust, in your instance specification file, change the default value of domainTrust to true:

domainTrust: 
 enabled: true 
If you are enabling domain trust, you must create a Kubernetes secret (named exactly as shown) to store the shared trust password by running the following command:

kubectl create secret generic -n project project-instance-global-trust-credentials --from-literal=password=pwd

Note:

This step is not required if you are not enabling domain trust in the instance specification.

The same password must be used in all domains that connect to this one through SAF.

Applying the WebLogic Patch for External Systems

When an external system is configured with a SAF sender towards UIM cloud native using HTTP tunneling, a patch is required to ensure that the SAF sender can connect to the UIM cloud native instance. This applies regardless of whether the connection resolves to an ingress controller or to a load balancer. Each such external system that communicates with UIM through SAF must have WebLogic patch 30656708 installed and configured, by adding -Dweblogic.rjvm.allowUnknownHost=true to the WebLogic startup parameters.
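As a hedged sketch (not from the product documentation): a common way to add the flag on the external system is through the domain's setUserOverrides.sh script, which WebLogic start scripts source if present. The variable name JAVA_OPTIONS is the usual WebLogic convention; adjust it to match your domain's start scripts.

```shell
# Append the allowUnknownHost flag to the server start arguments.
# ${JAVA_OPTIONS:-} guards against the variable being unset.
JAVA_OPTIONS="${JAVA_OPTIONS:-} -Dweblogic.rjvm.allowUnknownHost=true"
export JAVA_OPTIONS
```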

For environments where it is not possible to apply and configure this patch, a workaround is available. On each host running a Managed Server of the external system, add the following entries to the /etc/hosts file:
0.0.0.0 project-instance-ms1
0.0.0.0 project-instance-ms2
0.0.0.0 project-instance-ms3
0.0.0.0 project-instance-ms4
0.0.0.0 project-instance-ms5
0.0.0.0 project-instance-ms6
0.0.0.0 project-instance-ms7
0.0.0.0 project-instance-ms8
0.0.0.0 project-instance-ms9
0.0.0.0 project-instance-ms10
0.0.0.0 project-instance-ms11
0.0.0.0 project-instance-ms12
0.0.0.0 project-instance-ms13
0.0.0.0 project-instance-ms14
0.0.0.0 project-instance-ms15
0.0.0.0 project-instance-ms16
0.0.0.0 project-instance-ms17
0.0.0.0 project-instance-ms18
You should add these entries for all the UIM cloud native instances that the external system interacts with. Set the IP address to 0.0.0.0. All the managed servers possible in the UIM cloud native instance must be listed regardless of how many are actually configured in the instance specification.
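
Rather than typing the entries by hand, they can be generated. This sketch assumes the example project "sr" and instance "quick", giving managed server names sr-quick-ms1 through sr-quick-ms18; substitute your own project and instance names.

```shell
# Generate hosts-file entries for all 18 possible managed servers.
project=sr
instance=quick
for i in $(seq 1 18); do
  echo "0.0.0.0 ${project}-${instance}-ms${i}"
done > uim-hosts-entries.txt

# Review the file, then append its contents to /etc/hosts as root.
cat uim-hosts-entries.txt
```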

Configuring SAF on External Systems

To create SAF and JMS configuration on your external systems to communicate with the UIM cloud native instance, use the configuration samples provided as part of the SAF sample as your guide.

It is important to retain the "Per-JVM" and "Exactly-Once" flags as provided in the sample.

All connection factories must have the "Per-JVM" flag, as must SAF foreign destinations.

Each external queue that is configured to use SAF must have its QoS set to "Exactly-Once".

Enabling Domain Trust

To enable domain trust, in your domain configuration, under Advanced, edit the Credential and ConfirmCredential fields with the same password you used to create the global trust secret in UIM cloud native.

Setting Up Secure Communication with SSL

When UIM cloud native is involved in secure communication with other systems, either as the server or as the client, you should additionally configure SSL/TLS. The configuration may involve the WebLogic domain, the ingress controller, or the URL of remote endpoints, but it always involves participating in an SSL handshake with the other system. The procedures for setting up SSL use self-signed certificates for demonstration purposes. Replace the relevant steps as necessary to use CA-signed certificates.

If a UIM cloud native domain acts as both client and server, with secure communications coming in as well as going out, then both of the following procedures need to be performed:

  • Configuring Secure Incoming Access with SSL
  • Configuring Access to External SSL-Enabled Systems

Configuring Secure Incoming Access with SSL

This section demonstrates how to secure incoming access to UIM cloud native. You can set up either the SSL TERMINATE or the REENCRYPT strategy at the ingress. In the TERMINATE strategy, SSL termination happens at the ingress. The traffic coming in from external clients must use one of the HTTPS endpoints. When SSL terminates at the ingress, communication within the cluster, such as SAF between UIM cloud native instances, is not secured. In the REENCRYPT strategy, traffic is re-encrypted at the ingress, so that communication within the cluster, such as SAF between UIM cloud native instances, is secured.

The UIM cloud native toolkit provides the sample configuration for the Traefik ingress. If you use Voyager or another ingress controller, you can look at the $UIM_CNTK/samples/charts/ingress-per-domain/templates/traefik-ingress.yaml file to understand the configuration that is applied.

Generating SSL Certificates for Incoming Access

The following illustration shows when certificates are generated.

Figure 9-3 Generating SSL Certificates



When UIM cloud native dictates secure communication, it is responsible for generating the SSL certificates. These must be provided to the appropriate client. When a UIM cloud native instance in a different Kubernetes cluster acts as the external client (Domain Z in the illustration), it loads the T3 certificate from Domain A as described in "Configuring Access to External SSL-Enabled Systems".

Setting Up UIM Cloud Native for Incoming Access

The ingress controller routes unique hostnames to different back-end services. You can see this by examining the ingress route YAML (obtained by running kubectl get ingressroute -n project ingress_name -o yaml):
Kind: Rule
Match: Host(`instance.project.uim.org`)
Services:
  Name: project-instance-cluster-uimcluster
  Port: 8502
  Sticky:
    Cookie:
      Http Only: true
Kind: Rule
Match: Host(`t3.instance.project.uim.org`)
Services:
  Name: project-instance-cluster-uimcluster
  Port: 30303
  Sticky:
    Cookie:
      Http Only: true
Kind: Rule
Match: Host(`admin.instance.project.uim.org`)
Services:
  Name: project-instance-admin
  Port: 8501
  Sticky:
    Cookie:
      Http Only: true

To set up UIM cloud native for incoming access:

  1. Use the common certificate and key created while deploying the UTIA application to create commoncert.pem and commonkey.pem. See Unified Inventory and Topology Deployment Guide for more information:
    # Create a directory to hold your common keys and certificates. This is a sample only. Proper management policies should be used to store private keys.
    mkdir $SPEC_PATH/ssl
    # Copy commoncert.pem and commonkey.pem to the $SPEC_PATH/ssl location
    # Create secrets to hold each of the certificates. The secret name must be in the format below. Do not change the secret names
    
    kubectl create secret -n project tls project-instance-uim-tls-cert --key $SPEC_PATH/ssl/commonkey.pem --cert $SPEC_PATH/ssl/commoncert.pem
    kubectl create secret -n project tls project-instance-admin-tls-cert --key $SPEC_PATH/ssl/commonkey.pem --cert $SPEC_PATH/ssl/commoncert.pem
    kubectl create secret -n project tls project-instance-t3-tls-cert --key $SPEC_PATH/ssl/commonkey.pem --cert $SPEC_PATH/ssl/commoncert.pem
    
    
    # Create truststore secret if not created earlier
    kubectl create secret generic <truststore-secret-name> -n project --from-file=$SPEC_PATH/ssl/<truststore-secret-name>.jks --from-literal=passphrase=<password>
    
    # For the REENCRYPT strategy, you need to create a custom identity keystore
    
    # Create the keystore in PKCS12 format using the certificate and key
    openssl pkcs12 -export -in $SPEC_PATH/ssl/commoncert.pem -inkey $SPEC_PATH/ssl/commonkey.pem -out $SPEC_PATH/ssl/keyStore.p12 -name "uimcommon"
    
    # Convert the PKCS12 keystore to JKS format
    keytool -importkeystore -srckeystore $SPEC_PATH/ssl/keyStore.p12 -srcstoretype PKCS12 -destkeystore $SPEC_PATH/ssl/keystorecommon.jks -deststoretype JKS
    
    # Create a secret to store the identity keystore
    kubectl create secret generic <identity-keystore-secret> -n project --from-file=$SPEC_PATH/ssl/<identity-keystore-secret>.jks --from-literal=passphrase=<password>

    Note:

    Ensure that the truststore and keystore secret names are the same as the corresponding truststore and keystore JKS file names.
  2. Edit the instance specification and set incoming to use the appropriate strategy as follows:
    • ssl.incoming: Set this value to true.
    • ssl.strategy: Set this value to TERMINATE or REENCRYPT.
    • ssl.ignoreHostnameVerification: Set this value to true to disable hostname verification. If the FQDN is not specified in the keystore, you must disable hostname verification.

      Note:

      Use this only in development environments.
    • ssl.trust.name: The secret name that contains the truststore file.
    • ssl.identity.useDemoIdentity: For REENCRYPT strategy, you cannot use DemoIdentity. Therefore, set this value to false.
    • ssl.identity.name: The identity keystore secret name.
    • ssl.identity.alias: The alias name used in keystore.

    A sample specification for the SSL REENCRYPT strategy is as follows:

    # SSL Configuration
    ssl:
      incoming: true
      strategy: REENCRYPT
      ignoreHostnameVerification: true 
      trust:  
        name: <trust-store-secret>
      identity:
        useDemoIdentity: false 
        name: <key-store-secret>  
        alias: <alias>
  3. Create Ingress as follows:
    $UIM_CNTK/scripts/create-ingress.sh -i instance -p project -s $SPEC_PATH
  4. After running create-ingress.sh, you can validate the configuration by describing the ingress controller for your instance:
    kubectl get ingressroute -n project
     
    NAME                                 AGE
    project-instance-traefik             22h
    project-instance-traefik-admin-tls   22h
    project-instance-traefik-t3-tls      22h
    project-instance-traefik-uim-tls     22h
  5. Create your instance as usual.

Configuring Incoming HTTP and JMS Requests for External Clients

This section describes how to configure incoming HTTP and JMS requests for external clients.

Note:

Remember to have your DNS resolution set up on any remote hosts that will connect to the UIM cloud native instance.

Incoming HTTPS Requests

External Web clients that are connecting to UIM cloud native must be configured to accept the certificates from UIM cloud native. They will then connect using the HTTPS endpoint and port 30443.

Incoming JMS Requests

For external servers that are connected to UIM cloud native through SAF, the certificate for the t3 endpoint needs to be copied to the host where the external domain is running.

If your external WebLogic configuration uses "CustomIdentityAndJavaStandardTrust", then you can follow these instructions exactly to upload the certificate to the Java Standard Trust. If, however, you are using a custom trust, then you must upload the certificate into the custom trust keystore.

The keytool is found in the bin directory of your JDK installation. The alias should uniquely describe the environment the certificate is from.

./keytool -importcert -v -trustcacerts -alias alias -file /path-to-copied-t3-certificate/t3.crt -keystore /path-to-jdk/jre/lib/security/cacerts -storepass default_password
 
# For example
./keytool -importcert -v -trustcacerts -alias uimcn -file /scratch/t3.crt -keystore /path-to-jdk/jre/lib/security/cacerts -storepass default_password

Update the SAF remote endpoint (on the external UIM instance) to use HTTPS and port 30443 (still the t3 hostname).

From the SAF sample provided with the toolkit, the external system would configure the following remote endpoint URL:
https://t3.quick.sr.uim.org:30443/ResponseQueue

Configuring Access to External SSL-Enabled Systems

In order for UIM cloud native to participate successfully in a handshake with an external server for SAF integration, the SSL certificates from the external domain must be made available to the UIM cloud native setup. See "Enabling SSL on an External WebLogic Domain" for details about how you could do this for an on-premise WebLogic domain. If you have an external system that is already configured for SSL and working properly, you can skip this procedure and proceed to "Setting Up UIM Cloud Native for Outgoing Access".

Loading Certificates for Outgoing Access

In outgoing SSL, the certificates come from the external domain, whether on-premise or in another Kubernetes cluster. These certificates are then loaded into the UIM cloud native trust.

The following illustration shows information about loading certificates into UIM cloud native setup.

Figure 9-4 SSL Certificates for Outgoing Requests



Enabling SSL on an External WebLogic Domain

These instructions are specific to enabling SSL on a WebLogic domain that is external to the Kubernetes cluster where UIM cloud native is running.

To enable SSL on an external WebLogic domain:

  1. Create the certificates. Perform the following steps on the Linux host that has the on-premise WebLogic domain:
    1. Use the Java keytool to generate public and private keys for the server. When the tool asks for your first and last name, enter the FQDN of your server.
      path-to-jdk/bin/keytool -genkeypair -keyalg RSA -keysize 1024 -alias alias -keystore keystore_file -keypass private_key_password -storepass keystore_password -validity 360
    2. Export the public key. This certificate will then be used in the UIM cloud native setup.
      path-to-jdk/bin/keytool -exportcert -rfc -alias alias -storepass password -keystore keystore -file certificate
  2. Configure WebLogic server for SSL. Follow steps 3 to 17 (skip step 7) in the following KM note from My Oracle Support: Set up SSL
  3. Validate that SSL is configured properly on this server by importing the certificate to a trust store. For this example, the Java trust store is used.
    path-to-jdk/bin/keytool -importcert -trustcacerts -alias alias -file certificate -keystore path-to-jdk/jre/lib/security/cacerts -storepass default_password
  4. Verify that t3s over the specified port is working by connecting using WLST.
    Navigate to the directory where the WLST scripts are located.
    # Set the environment variables. Some shells don't set the variables correctly so be sure to check that they are set afterward
    path-to-FMW/Oracle/Middleware/Oracle_Home/oracle_common/common/bin/setWlsEnv.sh
     
    # ensure CLASSPATH and PATH are set
    echo $CLASSPATH
     
    java -Dweblogic.security.JavaStandardTrustKeyStorePassPhrase=default_password weblogic.WLST
     
    # once wlst starts, connect using t3s
    wls:offline> connect('<admin user>','<admin password>','t3s://<server>:<port>')
     
    # If successful you will see the prompt
    wls:>domain_name/serverConfig>
     
    #when finished disconnect
    disconnect()
Setting Up UIM Cloud Native for Outgoing Access

To set up UIM cloud native for outgoing access:

  1. Set up custom trust using the following steps:
    1. Load the certificate from your remote server into a trust store and make it available to the UIM cloud native instance.
      Use the Java keytool to create a jks file (truststore) that holds the certificate from your SSL server:
      keytool -importcert -v -alias alias -file /path-to/certificate.cer -keystore /path-to/truststore.jks -storepass password

      Note:

      Repeat this step to add as many trusted certificates as required.
    2. Create a Kubernetes secret to hold the truststore file and the passphrase. The secret name should match the truststore name.
      # manually
      kubectl create secret generic trust_secret_name -n project --from-file=trust_secret_name.jks=</path-to/truststore.jks> --from-literal=passphrase=password
       
      # verify
      kubectl get secret -n project trust_secret_name -o yaml
    3. Edit the instance specification, setting the trust name.
      # SSL trust and identity
      ssl:
        trust:
         name: truststore # truststore file name without extension (truststore.jks). Deprecated: leave this commented out or at the fixed value "truststore".
        identity:
          useDemoIdentity: true

    When custom trust is enabled, the useDemoIdentity field can be left as true for development instances. This configures the WebLogic server to use the demo identity that is shipped with WebLogic. For production instances, follow the additional steps for custom identity in the next step.

  2. (Optional) Set up custom identity using the following steps:
    1. Create the keystore.
      keytool -genkeypair -keyalg RSA -keysize 1024 -alias <alias> -keystore identity.jks -keypass private_key_password -storepass keystore_password -validity 360
    2. Create the secret.
      kubectl create secret generic secretName -n project --from-file=secretName.jks=</path-to/keystore.jks> --from-literal=passphrase=password
        
      # verify
      kubectl get secret -n project secretName -o yaml
    3. Edit the specification file:
      identity:
          useDemoIdentity: false  # set to false and specify the below parameters to use custom identity
          name: keystore  # only valid when useDemoIdentity is false. Identity store filename without extension (keystore.jks).
          #alias: ssl_key # only valid when useDemoIdentity is false. This must be commented out as it is now defined in the wlstls secret.
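
    Putting the trust and identity settings together, a production-style ssl section might look like the following sketch. The secret names truststore and keystore are illustrative and must match the Kubernetes secrets created in the earlier steps:

    ```yaml
    ssl:
      trust:
        name: truststore        # secret "truststore" holding truststore.jks (deprecated; fixed value)
      identity:
        useDemoIdentity: false  # custom identity for production
        name: keystore          # secret "keystore" holding keystore.jks
    ```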
  3. Configure SAF by updating the SAF connection configuration in the UIM cloud native instance specification file to reflect t3s and the SSL port:
    safConnectionConfig:
      - name: simple
        t3Url: t3s://remote_server:7002
        secretName: simplesecret
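
    If the instance communicates with more than one SSL-enabled remote system, the safConnectionConfig list can carry one entry per destination. The following sketch is illustrative only; the second entry's name, host, and secret are hypothetical:

    ```yaml
    safConnectionConfig:
      - name: simple
        t3Url: t3s://remote_server:7002
        secretName: simplesecret
      - name: billing                      # hypothetical second destination
        t3Url: t3s://billing_server:7002   # hypothetical remote host
        secretName: billingsecret          # hypothetical SAF credential secret
    ```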
  4. Create the UIM cloud native instance as usual.

Adding Additional Certificates to an Existing Trust

You can add additional certificates to an existing trust while a UIM cloud native instance is up and running.

To add additional certificates to an existing trust:

  1. Set up UIM cloud native for outgoing access. See "Configuring Access to External SSL-Enabled Systems" for instructions.
  2. Copy the certificates from your remote server and load them into the existing truststore.jks file that you created earlier:
    keytool -importcert -v -alias alias -file /path-to/certificate.cer -keystore /path-to/truststore.jks -storepass password
  3. Re-create your Kubernetes secret using the same name as you did previously:
    # manually 
    kubectl create secret generic trust_secret_name -n project --from-file=trust_secret_name.jks=/path-to/truststore.jks --from-literal=passphrase=password
    
    # verify 
    kubectl get secret -n project trust_secret_name -o yaml
  4. Upgrade the instance to force WebLogic Operator to re-evaluate:
    $UIM_CNTK/scripts/upgrade-instance.sh -p project -i instance -s $SPEC_PATH 

Debugging SSL

To debug SSL, do the following:

  • Verify Hostname
  • Enable SSL logging

Verifying Hostname

When the keystore is generated for the on-premises server, if the FQDN is not specified, you may have to disable hostname verification. This is not secure and should be done only in development environments.

To do so, add the following Java option to the managed server in the project specification:
managedServers:
 
  project:
    #JAVA_OPTIONS for all managed servers at project level
    java_options: "-Dweblogic.security.SSL.ignoreHostnameVerification=true"

Enabling SSL Logging

When troubleshooting the SSL handshake between servers, enable SSL-specific logging.

Add the following Java options to your managed server in the project specification. This should be done for your external server as well.
managedServers:
 
  project:
    #JAVA_OPTIONS for all managed servers at project level
    java_options: "-Dweblogic.StdoutDebugEnabled=true -Dssl.debug=true -Dweblogic.security.SSL.verbose=true -Dweblogic.debug.DebugSecuritySSL=true -Djavax.net.debug=ssl"

Using Wild Card SSL Certificates

UIM cloud native supports wildcard certificates. You can generate wildcard certificates using the loadBalancerDomainName value provided in the specification files. The default is uim.org.

To use wildcard certificates:

  1. To create a self-signed wildcard certificate, run the following command:
    openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
      -keyout $UIM_CNTK/ssl/wildcardkey.pem -out $UIM_CNTK/ssl/wildcardcert.pem \
      -subj "/CN=*.uim.org" -extensions san \
      -config <(echo '[req]'; echo 'distinguished_name=req'; \
        echo '[san]'; echo 'subjectAltName=@alt_names'; \
        echo '[alt_names]'; echo 'DNS.1=*.uim.org')
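
    A compact alternative, assuming OpenSSL 1.1.1 or later, uses -addext instead of the inline -config here-document. The temporary directory is illustrative; in a real environment write the files to $UIM_CNTK/ssl/. The final check confirms the SAN carries the wildcard entry before you wire the certificate into the ingress controller:

    ```shell
    # Sketch: generate a throwaway wildcard certificate with -addext
    # (OpenSSL 1.1.1+) and confirm its subjectAltName.
    workdir=$(mktemp -d)
    openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
      -keyout "$workdir/wildcardkey.pem" -out "$workdir/wildcardcert.pem" \
      -subj "/CN=*.uim.org" -addext "subjectAltName=DNS:*.uim.org" 2>/dev/null

    # Print the SAN line so the wildcard entry can be verified:
    san_line=$(openssl x509 -in "$workdir/wildcardcert.pem" -noout -text | grep 'DNS:')
    echo "SAN: $san_line"
    rm -rf "$workdir"
    ```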
  2. Change the subDomainNameSeparator value from a period (.) to a hyphen (-) so that incoming hostnames match the wildcard DNS pattern, and update the $SPEC_PATH/instance.yaml file as follows:
    #Uncomment and provide the value of subDomainNameSeparator; the default is "."
    #Set the value to "-" to match the wildcard pattern of the SSL certificates.
    #Example hostnames for "-": admin-quick-sr.uim.org, quick-sr.uim.org, t3-quick-sr.uim.org
    subDomainNameSeparator: "-"
  3. For the settings configured above, use the following hostnames to access the UIM application for project: sr, instance: quick, and loadBalancerDomainName: uim.org:
    uim-admin hostname: admin-quick-sr.uim.org
    uim hostname:  quick-sr.uim.org
    uim-t3 hostname: t3-quick-sr.uim.org
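
The hostname pattern above can be sketched as a small shell helper. The project, instance, and domain values mirror the example in this section (sr, quick, uim.org):

```shell
# Sketch: derive the ingress hostnames from project, instance, domain,
# and the subDomainNameSeparator value ("-" for wildcard certificates).
project=sr
instance=quick
domain=uim.org
sep="-"
uim_host="${instance}${sep}${project}.${domain}"
admin_host="admin${sep}${uim_host}"
t3_host="t3${sep}${uim_host}"
echo "uim hostname:       $uim_host"
echo "uim-admin hostname: $admin_host"
echo "uim-t3 hostname:    $t3_host"
```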