9 Integrating OSM

In a typical deployment, the OSM application coordinates activities across multiple peer systems, and a variety of systems interact with OSM for different purposes. This chapter examines the considerations involved in integrating OSM cloud native instances into a larger solution ecosystem.

This chapter describes the following topics and tasks:

  • Connectivity with traditional OSM instances
  • Connectivity with OSM cloud native instances
  • Configuring SAF
  • Applying the WebLogic patch for external systems
  • Configuring SAF on external systems
  • Setting up secure communication with SSL

Connectivity With Traditional OSM Instances

OSM interacts with external systems that fall broadly in the following categories:

  • Human user interaction
  • Upstream systems that inject orders and check status
  • Peer systems and downstream systems that receive requests and provide updates

Human User Interaction

Human users interact with OSM using the following user interfaces:
  • Task Web Client
  • Order Management Web Client
These user interfaces connect to OSM through HTTP and HTTPS. Some deployments involve custom user interfaces built for specific purposes. These too interact with OSM using the Web Services API (WSAPI) or XML API (XMLAPI), with requests and responses transmitted over HTTP and HTTPS.

Order Submission and Status Check

Order capture systems, CRM systems, and middleware applications such as Application Integration Architecture (AIA) submit orders into OSM. They can subscribe to order updates through the event/milestone framework. In theory, this interaction can happen through Web Services API or XML API calls over HTTP/HTTPS. However, for reasons of scalability, resilience, and load management, the strong recommendation is to conduct this interaction over JMS. This typically involves SAF as well, to avoid foreign JMS injection. JMS, whether native or with SAF, runs over the T3 protocol.

OSM itself can be the upstream system here. For instance, consider an OSM instance functioning as Central Order Management (COM). This would need to send orders to another OSM instance functioning as Service Order Management (SOM) and receive updates from it. This too would be via JMS with SAF, running over T3.

There are additional use cases where monitoring systems (or similarly tasked components) query OSM. These typically take the form of searches for orders that fit some business criteria, reporting back status and perhaps some additional operationally significant information. OSM is optimized for order processing and therefore services such requests at some cost to throughput. Nevertheless, many deployments still opt for such interactions. These typically happen as WSAPI or XMLAPI calls over HTTP/HTTPS.

Connectivity with Peer Systems

As OSM processes orders, the logic encoded in the cartridges drives requests to other systems, such as those for billing or inventory or work-force management. These requests can be one-way messages but are much more likely to follow a "request - response" pattern, where the remote system sends one or more responses back to OSM. These responses can arrive immediately or at a later (perhaps much later) time. The communication model OSM recommends for this is JMS (with SAF), which runs over T3.

Technical Connectivity

Over the three categories of interaction, we can distill the following connectivity types:
  • OSM APIs invoked via HTTP/HTTPS
  • OSM APIs invoked via JMS and SAF
  • OSM conversing via JMS and SAF

OSM initiates HTTP/HTTPS messages only if explicitly coded to do so in cartridges. This is an anti-pattern for OSM cartridge development, as it significantly degrades OSM throughput. Normally, OSM responds to incoming requests over HTTP/HTTPS (API call responses).

With JMS messages, OSM can be both the originator of a "request-response" transaction or the recipient of one. To support this, OSM can host SAF agents that provide the ability to send JMS messages to remote systems, and OSM can host queues that are targeted by SAF agents on those remote systems.

Security Requirements

OSM Cloud Native supports HTTP and T3. In addition, SAF configuration from one WebLogic domain to another domain very often requires additional security arrangements, including the availability of credentials to authenticate such a connection.

Connectivity With OSM Cloud Native

Functionally, the interaction requirements of OSM do not change when OSM is run in a cloud native environment. All of the categories of interaction that are applicable for connectivity with traditional OSM instances are applicable and must be supported for OSM cloud native.

Connectivity Between the Building Blocks

The following diagram illustrates the connectivity between the building blocks in an OSM cloud native environment using an example:

Figure 9-1 Connectivity Between Building Blocks in OSM Cloud Native Environment



Invoking the OSM cloud native Helm chart creates a new OSM instance. In the above illustration, the name of the instance is "dev2" in the project "mobilecom". The instance consists of the WebLogic cluster that has one Admin Server and three Managed Servers and a Kubernetes Cluster Service.

The Cluster Service contains endpoints for both HTTP and T3 traffic. The instance creation script creates the OSM cloud native Ingress object. The Ingress object carries metadata that triggers the Traefik ingress controller, which is used as the sample in this chapter. Traefik responds by creating new front-ends with the configured "hostnames" for the cluster (dev2.mobilecom.osm.org and t3.dev2.mobilecom.osm.org in the illustration) and for the admin server (admin.dev2.mobilecom.osm.org), and links them to new back-end constructs. Each back-end routes to the members of the Cluster Service (MS1, MS2, and MS3 in the example) or to the Admin Server. The dev2.mobilecom.osm.org front-end is linked to the back-end pointing to the HTTP endpoint of each managed server, while the t3.dev2.mobilecom.osm.org front-end links to the back-end pointing to the T3 endpoint of each managed server.
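The hostname pattern shown in the illustration is derived from the instance and project names. As a small sketch using the example values (the osm.org suffix is site-specific):

```shell
# Derive the three ingress hostnames for the example instance "dev2"
# in project "mobilecom". The osm.org suffix is site-specific.
project=mobilecom
instance=dev2
suffix=osm.org

http_host="${instance}.${project}.${suffix}"
t3_host="t3.${instance}.${project}.${suffix}"
admin_host="admin.${instance}.${project}.${suffix}"

printf '%s\n%s\n%s\n' "$http_host" "$t3_host" "$admin_host"
```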

The prior installation of Traefik has already exposed Traefik itself via a selected port number (30305 in the example) on each worker node.

Inbound HTTP Connectivity

An OSM instance is exposed outside of the Kubernetes cluster for HTTP access via an Ingress Controller and potentially a Load Balancer.

Because the Traefik port (30305) is common to all OSM cloud native instances in the cluster, Traefik must be able to distinguish between the incoming messages headed for different instances. It does this by differentiating on the basis of the "hostname" mentioned in the HTTP messages. This means that a client (User Client B in the illustration) must believe it is talking to the "host" dev2.mobilecom.osm.org when it sends HTTP messages to port 30305 on the access IP. This might be the Master node IP or the IP address of one of the worker nodes, depending on your cluster setup. The "DNS Resolver" provides this mapping.

In this mode of communication, there are concerns around resiliency and load distribution. For example, if the DNS Resolver always resolves dev2.mobilecom.osm.org to the IP address of Worker node 1, then that worker node takes all the inbound traffic for the instance. If the DNS Resolver is configured to answer any *.mobilecom.osm.org request with that IP, then that worker node takes all the inbound traffic for all the instances. Since this wildcard configuration is desirable (it minimizes per-instance touches on the DNS Resolver), the setup creates a bottleneck on Worker node 1. If Worker node 1 were to fail, the DNS Resolver would have to be updated to point *.mobilecom.osm.org to Worker node 2, which interrupts access and requires intervention. The recommended pattern to avoid these concerns is to populate the DNS Resolver with all the applicable IP addresses as resolution targets (in our example, the IPs of both Worker node 1 and Worker node 2) and have the Resolver return a random selection from that list.

An alternate mode of communication is to introduce a load balancer configured to balance incoming traffic to the Traefik ports on all the worker nodes. The DNS Resolver is still required, and the entry for *.mobilecom.osm.org points to the load balancer. Your load balancer documentation describes how to achieve resiliency and load management. With this setup, a user (User Client A in our example) sends a message to dev2.mobilecom.osm.org, which actually resolves to the load balancer - for instance, http://dev2.mobilecom.osm.org:8080/OrderManagement/Login.jsp. Here, 8080 is the public port of the load balancer. The load balancer sends this to Traefik, which routes the message, based on the "hostname" targeted by the message to the HTTP channel of the OSM cloud native instance.

By adding the hostname resolution such that admin.dev2.mobilecom.osm.org also resolves to the Kubernetes cluster access IP (or Load Balancer IP), User Client B can access the WebLogic console via http://admin.dev2.mobilecom.osm.org/console and the credentials specified while setting up the "wlsadmin" secret for this instance.

Note:

Access to the WebLogic Admin console is provided for review and debugging use only. Do not use the console to change the system state or configuration. These are maintained independently in the WebLogic Operator, based on the specifications provided when the instance was created or last updated by the OSM cloud native toolkit. As a result, any such manual changes (whether using the console or using WLST or other such mechanisms) are liable to be overwritten without notice by the Operator. The only way to change state or configuration is through the tools and scripts provided in the toolkit.

Inbound JMS Connectivity

JMS messages use the T3 protocol. Since Ingress Controllers and Load Balancers do not understand T3 for routing purposes, OSM cloud native requires all incoming JMS traffic to be "T3 over HTTP". The messages are still HTTP, but contain a T3 message as the payload. OSM cloud native requires clients to target the "t3 hostname" of the instance - t3.dev2.mobilecom.osm.org, in the example. This "t3 hostname" behaves identically to the regular "hostname" in terms of the DNS Resolver and the Load Balancer. Traefik, however, not only identifies the instance this message is meant for (dev2.mobilecom) but also that it targets the T3 channel of the instance.

The "T3 over HTTP" requirement applies for all inbound JMS messages - whether generated by direct or foreign JMS API calls or generated by SAF. The procedure in SAF QuickStart explains the setup required by the message producer or SAF agent to achieve this encapsulation. If SAF is used, the fact that T3 is riding over HTTP does not affect the semantics of JMS. All the features such as reliable delivery, priority, and TTL, continue to be respected by the system. See "Applying the WebLogic Patch for External Systems".
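To make the encapsulation concrete, the SAF remote endpoint URL on the sending side uses the http scheme with the t3 hostname and the exposed port. A sketch using the chapter's example values (the queue JNDI name is taken from the SAF sample provided with the toolkit):

```shell
# Form of a SAF remote endpoint URL when T3 is tunneled over HTTP.
# Hostname and port are from this chapter's example; the queue JNDI
# name comes from the SAF sample shipped with the toolkit.
t3_host=t3.dev2.mobilecom.osm.org
ingress_port=30305
queue_jndi=oracle.communications.ordermanagement.SimpleResponseQueue

endpoint="http://${t3_host}:${ingress_port}/${queue_jndi}"
echo "$endpoint"
```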

An OSM instance can be configured for secure access, which includes exposing the T3 endpoint outside the Kubernetes cluster for HTTPS access. See "Configuring Secure Incoming Access with SSL" for details on enabling SSL.

Inbound JMS Connectivity Within the Same Kubernetes Cluster

For all inbound JMS connectivity from outside the Kubernetes cluster in which OSM cloud native is deployed, clients use the T3 hostname: t3.dev2.mobilecom.osm.org. This requires configuring the Ingress Controller and the DNS Resolver so that the URL can be reached.

However, there can be situations where OSM cloud native needs to be accessed from within the same Kubernetes cluster where it is deployed. For example, an upstream application sending orders or a downstream application sending status updates could be deployed in the same Kubernetes cluster. It could also be another OSM cloud native instance deployed in the same Kubernetes cluster either sending or receiving Create Order requests. For such requirements, there is no need for the request to be routed via an Ingress Controller or a load balancer and resolved via a DNS Resolver.

OSM cloud native exposes a T3 channel exclusively for such connections and can be accessed via t3://project-instance-cluster-c1.project.svc.cluster.local:31313.

This saves the various network hops typically involved in routing a request from an external client to OSM cloud native deployed in a Kubernetes cluster.
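The cluster-internal URL follows directly from the project and instance names, as this sketch shows for the chapter's example:

```shell
# Build the in-cluster T3 URL from the project and instance names,
# following the project-instance-cluster-c1 service naming convention.
project=mobilecom
instance=dev2

internal_t3_url="t3://${project}-${instance}-cluster-c1.${project}.svc.cluster.local:31313"
echo "$internal_t3_url"
```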

The following diagram illustrates inbound JMS connectivity within the same Kubernetes cluster using an example.

For the example, the URL is t3://mobilecom-dev2-cluster-c1.mobilecom.svc.cluster.local:31313.

Note:

The protocol is T3 as there is no need for wrapping in HTTP. Note that the port is different.

Figure 9-2 Inbound JMS Connectivity in a Kubernetes Cluster



If SSL is enabled for domains, communication between the domains within the Kubernetes cluster is not secured because the ingress is not involved. See "Setting Up Secure Communication with SSL" for further details.

Outbound HTTP Connectivity

No specific action is required to ensure that HTTP messages from the OSM cloud native instance reach destinations outside the Kubernetes cluster.

When a domain inside a Kubernetes cluster sends REST API or Web Service requests over HTTP to an SSL-enabled domain outside the cluster, some additional configuration is required. For instructions, see "Configuring Access to External SSL-Enabled Systems".

Outbound JMS Connectivity

JMS messages originating from the OSM cloud native instance, such as requests to peer systems from cartridge automation plug-ins or event notifications to upstream systems from notification plug-ins, always end up on local queues. The OSM cloud native Helm chart allows for the specification of SAF connections to remote systems in order to get these messages to their destinations. The project specification contains all the SAF connections that must exist for the cartridges to do their job. The instance specification provides a specific endpoint for each of these SAF connections. This allows for a canonical expression of the SAF connectivity requirements, which each instance fulfills uniquely by pointing to the appropriate upstream, downstream, or peer systems, or to emulators.

When a domain inside a Kubernetes cluster sends JMS messages to an SSL-enabled domain outside the cluster, see "Configuring Access to External SSL-Enabled Systems" for instructions on the required configuration.

Configuring SAF

OSM cloud native requires SAF for the OSM cartridge automation functionality to send messages to external systems through JMS. The SAF configuration in OSM cloud native has two distinct aspects - the project and the instance. At the project level, the project specification can be used to define all the SAF connections that any OSM cloud native instance must make. This list is governed by the cartridges that constitute the project. At the instance level, each of these SAF connections must be given a specific remote endpoint.

Configuring the Project Specification

The project specification lists out all the SAF connections that are required for the set of solution cartridges that the project requires in order to function. These are listed under the safDestinationConfig element of the project specification.

The following sample shows a basic SAF specification that describes the need to interact via SAF with external_system_identifier. It specifies that the project is interested in accessing two queues on that remote system: remote_queue_1 and remote_queue_2. On that system, these queues can be addressed using the JNDI prefix prefix_1. Further, remote_queue_1 is also mapped locally as local_queue_1. Whether this is necessary depends on the addressing scheme coded into the OSM cartridge's external sender automation plug-ins. OSM cloud native supports both local names and remote names for SAF destinations.
safDestinationConfig:
  - name: external_system_identifier
    destinations:
      - jndiPrefix: prefix_1
        queues:
          - queue:
              remoteJndi: remote_queue_1
              localJndi: local_queue_1
          - queue:
              remoteJndi: remote_queue_2

If the queues of an external system are spread across more than one JNDI prefix, the jndiPrefix element can be repeated as many times as necessary. In this example, prefix_1 applies to remote_queue_1 and remote_queue_2, while prefix_2 applies to remote_queue_3.

The following sample shows SAF project specification with multiple JNDIs:
safDestinationConfig:
  - name: external_system_identifier
    destinations:
      - jndiPrefix: prefix_1
        queues:
          - queue:
              remoteJndi: remote_queue_1
              localJndi: local_queue_1
          - queue:
              remoteJndi: remote_queue_2
      - jndiPrefix: prefix_2
        queues:
          - queue:
              remoteJndi: remote_queue_3
It is possible for an external system to not use a JNDI prefix; configure this by leaving the value of jndiPrefix empty. However, at most one of the jndiPrefix entries in a destinations list can be empty, as the jndiPrefix values in the list must be unique. If the project's solution cartridges interact with more than one external system via SAF, these can be named and listed as follows:
safDestinationConfig:
  - name: external_system_identifier_1
    destinations:
      - jndiPrefix: prefix_1
        queues:
          - queue:
              remoteJndi: remote_queue_1
  - name: external_system_identifier_2
    destinations:
      - jndiPrefix: prefix_2
        queues:
          - queue:
              remoteJndi: remote_queue_2

Note:

Using the provided configuration, OSM cloud native automatically computes names for some entities required to complete the SAF setup. You may see such entities when you log in to the WebLogic Administration Console for troubleshooting purposes; this is expected, and they should not be modified.

Configuring the Instance Specification

The project specification lays out the connectivity requirements of the solution cartridges in the project. However, each instance needs to provide its own set of endpoints to satisfy those connections. For example, the project specification may require connectivity to a remote UIM system to send inventory related commands via JMS and SAF. It is the instance specification that directs this requirement to a specific UIM installation valid for use with this instance. Another instance of the same project might target a different UIM installation or an emulator.

The instance specification contains the T3 URL of the external system along with the name of a Kubernetes secret that provides the credentials required to interact with that system. The T3 URL can be specified using any of the standard mechanisms supported by WebLogic. The Kubernetes secret must contain the fields username and password, carrying credentials which have permission to inject JMS messages into the remote system.
safConnectionConfig:
  - name: external_system_identifier
    t3Url: t3_url
    secretName: secret_t3_user_pass
Here, the external_system_identifier needs to match the external_system_identifier specified in the project specification. The instance specification must have an entry for each of the external_system_identifier entries listed in the project specification.
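For example, if the project specification declares a connection named uim_system (a hypothetical identifier for a remote UIM installation), the matching instance specification entry might look like this, with the URL and secret name chosen for that environment:

```yaml
safConnectionConfig:
  - name: uim_system                   # must match the project specification entry
    t3Url: t3://uim.example.com:7001   # hypothetical remote endpoint
    secretName: uim-saf-credentials    # secret carrying username and password fields
```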

If the external system is an OSM cloud native instance deployed in the same Kubernetes cluster, use the T3 URL as described in "Inbound JMS Connectivity Within the Same Kubernetes Cluster".

If SSL is enabled for the external system, use the T3 URL as described in "Configuring Access to External SSL-Enabled Systems".

Configuring Domain Trust

For details about global trust, see "Enabling Global Trust" in Oracle Fusion Middleware Administering Security for Oracle WebLogic Server.

Because the shared password provides access to all domains that participate in the trust, strict password management is critical. Enable trust when SAF is configured, as it is needed for inter-domain communication using distributed destinations. In a Kubernetes cluster, where pods are transient, a SAF sender may not know where it can forward messages unless domain trust is configured.

If trust is not configured when using SAF, you may experience unstable SAF behavior when your environment has pods that are growing, shrinking, or restarting.

To enable domain trust, in your instance specification file, for domainTrust, change the default value to true:

domainTrust: 
 enabled: true 
If you are enabling domain trust, then you must create a Kubernetes secret (exactly as specified) to store the shared trust password by running the following command:

Note:

This step is not required if you are not enabling domain trust in the instance specification.
kubectl create secret generic -n project project-instance-global-trust-credentials --from-literal=password=pwd

The same password must be used in all domains that connect to this one through SAF.
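Substituting the chapter's example names, the fixed secret-name convention resolves as follows:

```shell
# The global trust secret name follows a fixed convention:
# <project>-<instance>-global-trust-credentials.
project=mobilecom
instance=dev2

secret_name="${project}-${instance}-global-trust-credentials"
echo "$secret_name"
```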

Usage in OSM Cartridge Automation

The OSM cartridge automation external sender plug-ins are unaffected by the switch to OSM cloud native. The plug-ins continue to address their destinations as before, using JNDI prefix and remote queue name, or JNDI prefix and local queue name. The project specification must reflect what the cartridge developer has actually coded into the automation plug-in in Design Studio.

Inbound SAF Requirements

The OSM cloud native Helm charts create all the entities required for inbound SAF to be processed as T3 over HTTP. No additional configuration is required in the OSM cloud native specification files. However, if the OSM cartridge automation receiver plugins are set up to read from local JNDI prefix and queue name, these must be added to the project specification as standard solution queues under uniformDistributedQueues (not as safConnectionConfig).
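As a hedged sketch only (the exact schema is defined by your toolkit version, so verify against its samples), such a receiver queue declaration might take a form along these lines, with both names being hypothetical:

```yaml
uniformDistributedQueues:
  - name: solution_receiver_queue        # hypothetical queue name
    jndiName: prefix_1.receive_queue_1   # JNDI name the receiver plug-in reads from
```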

Applying the WebLogic Patch for External Systems

When an external system is configured with a SAF sender towards OSM cloud native, using HTTP tunneling, a patch is required to ensure the SAF sender can connect to the OSM cloud native instance. This is regardless of whether the connection resolves to an ingress controller or to a load balancer. Each such external system that communicates with OSM through SAF must have the WebLogic patch 30656708 installed and configured, by adding -Dweblogic.rjvm.allowUnknownHost=true to the WebLogic startup parameters.

For environments where it is not possible to apply and configure this patch, a workaround is available. On each host running a Managed Server of the external system, add the following entries to the /etc/hosts file:
0.0.0.0 project-instance-ms1
0.0.0.0 project-instance-ms2
0.0.0.0 project-instance-ms3
0.0.0.0 project-instance-ms4
0.0.0.0 project-instance-ms5
0.0.0.0 project-instance-ms6
0.0.0.0 project-instance-ms7
0.0.0.0 project-instance-ms8
0.0.0.0 project-instance-ms9
0.0.0.0 project-instance-ms10
0.0.0.0 project-instance-ms11
0.0.0.0 project-instance-ms12
0.0.0.0 project-instance-ms13
0.0.0.0 project-instance-ms14
0.0.0.0 project-instance-ms15
0.0.0.0 project-instance-ms16
0.0.0.0 project-instance-ms17
0.0.0.0 project-instance-ms18
You should add these entries for all the OSM cloud native instances that the external system interacts with. Set the IP address to 0.0.0.0. All 18 managed servers possible in the OSM cloud native instance must be listed, regardless of how many are actually configured in the instance specification.
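Rather than typing the entries by hand, they can be generated. This sketch assumes the chapter's example project and instance names:

```shell
# Generate the /etc/hosts workaround entries for project "mobilecom",
# instance "dev2" (all 18 possible managed servers).
project=mobilecom
instance=dev2

entries=$(for i in $(seq 1 18); do
  echo "0.0.0.0 ${project}-${instance}-ms${i}"
done)
echo "$entries"
```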

Configuring SAF On External Systems

To create SAF and JMS configuration on your external systems to communicate with the OSM cloud native instance, use the configuration samples provided as part of the SAF sample as your guide.

It is important to retain the "Per-JVM" and "Exactly-Once" flags as provided in the sample:

  • All connection factories, as well as SAF foreign destinations, must have the "Per-JVM" flag.
  • Each external queue that is configured to use SAF must have its QoS set to "Exactly-Once".

Enabling Domain Trust

To enable domain trust, in your domain configuration, under Advanced, edit the Credential and ConfirmCredential fields with the same password you used to create the global trust secret in OSM cloud native.

Setting Up Secure Communication with SSL

When OSM cloud native is involved in secure communication with other systems, either as the server or as the client, you should additionally configure SSL/TLS. The configuration may involve the WebLogic domain, the ingress controller or the URL of remote endpoints, but it always involves participating in an SSL handshake with the other system. The procedures for setting up SSL use self-signed certificates for demonstration purposes. However, replace the steps as necessary to use signed certificates.

If an OSM cloud native domain acts as both the client and the server, with secure communications coming in as well as going out, then perform both of the following procedures:

  • Configuring Secure Incoming Access with SSL
  • Configuring Access to External SSL-enabled Systems

Configuring Secure Incoming Access with SSL

This section demonstrates how to secure incoming access to OSM cloud native. In this scenario, SSL termination happens at the ingress. The traffic coming in from external clients must use one of the HTTPS endpoints. When SSL terminates at the ingress, it also means that communication within the cluster, such as SAF between the OSM cloud native instances, is not secured.

The OSM cloud native toolkit provides the sample configuration for Traefik ingress. If you use Voyager or another ingress controller, you can look at the $OSM_CNTK/samples/charts/ingress-per-domain/templates/traefik-ingress.yaml file to see what configuration is applied.

Generating SSL Certificates for Incoming Access

The following illustration shows when certificates are generated.

Figure 9-3 Generating SSL Certificates



When OSM cloud native dictates secure communication, then it is responsible for generating the SSL certificates. These must be provided to the appropriate client. When an OSM cloud native instance in a different Kubernetes cluster acts as the external client (Domain Z in the illustration), it loads the T3 certificate from Domain A as described in "Configuring Access to External SSL-Enabled Systems".

Setting Up OSM Cloud Native for Incoming Access
The ingress controller routes unique hostnames to different backend services. You can see this if you look at the ingress controller YAML file (obtained by running kubectl get ingress -n project ingress_name -o yaml):

Note:

Traefik 2.x moved to using IngressRoute (a CustomResourceDefinition) instead of the Ingress object. If you are using Traefik, in the following commands, change all references of ingress to ingressroute.
rules:
- host: instance.project.osm.org
  http:
    paths:
    - backend:
        serviceName: project-instance-cluster-c1
        servicePort: 8001
- host: t3.instance.project.osm.org
  http:
    paths:
    - backend:
        serviceName: project-instance-cluster-c1
        servicePort: 30303
- host: admin.instance.project.osm.org
  http:
    paths:
    - backend:
        serviceName: project-instance-admin
        servicePort: 7001

To set up OSM cloud native for incoming access:

  1. Generate key pairs for each hostname corresponding to an endpoint that OSM cloud native exposes to the outside world:
    # Create a directory to save your keys and certificates. This is for demonstration only; use proper key-management policies to store private keys.
     
    mkdir $SPEC_PATH/ssl
     
    # Generate key and certificates
    openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $SPEC_PATH/ssl/osm.key -out $SPEC_PATH/ssl/osm.crt -subj "/CN=instance.project.osm.org"
    openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $SPEC_PATH/ssl/admin.key -out $SPEC_PATH/ssl/admin.crt -subj "/CN=admin.instance.project.osm.org"
    openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $SPEC_PATH/ssl/t3.key -out $SPEC_PATH/ssl/t3.crt -subj "/CN=t3.instance.project.osm.org"
     
    # Create secrets to hold each of the certificates. The secret name must be in the format below. Do not change the secret names
     
    kubectl create secret -n project tls project-instance-osm-tls-cert --key $SPEC_PATH/ssl/osm.key --cert $SPEC_PATH/ssl/osm.crt
    kubectl create secret -n project tls project-instance-admin-tls-cert --key $SPEC_PATH/ssl/admin.key --cert $SPEC_PATH/ssl/admin.crt
    kubectl create secret -n project tls project-instance-t3-tls-cert --key $SPEC_PATH/ssl/t3.key --cert $SPEC_PATH/ssl/t3.crt
  2. Edit the instance specification and set incoming to true:
    ssl:
      incoming: true
  3. After running create-ingress.sh, you can validate the configuration by describing the ingress controller for your instance. You should see each of the certificates you generated, terminating one of the hostnames:
    kubectl get ingress -n project

    Once you have the name of your ingress, run the following command:

    kubectl describe ingress -n project ingress
    
    TLS:
      project-instance-osm-tls-cert terminates instance.project.osm.org
      project-instance-t3-tls-cert terminates t3.instance.project.osm.org
      project-instance-admin-tls-cert terminates admin.instance.project.osm.org
  4. Create your instance as usual.
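As an optional sanity check on step 1, you can confirm that the CN in a generated certificate matches the hostname it will terminate. This sketch regenerates a throwaway t3 certificate under /tmp using the chapter's example hostname, leaving your real keys untouched:

```shell
# Generate a throwaway self-signed certificate and confirm its subject CN
# matches the t3 hostname it is meant to terminate (example hostname).
openssl req -x509 -nodes -days 1 -newkey rsa:2048 \
  -keyout /tmp/check-t3.key -out /tmp/check-t3.crt \
  -subj "/CN=t3.dev2.mobilecom.osm.org" 2>/dev/null

openssl x509 -in /tmp/check-t3.crt -noout -subject
```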
Configuring Incoming HTTP and JMS Connectivity for External Clients

This section describes how to configure incoming HTTP and JMS connectivity for external clients.

Note:

Remember to have your DNS resolution set up on any remote hosts that will connect to the OSM cloud native instance.

Incoming HTTPS Connectivity

External Web clients that are connecting to OSM cloud native must be configured to accept the certificates from OSM cloud native. They will then connect using the HTTPS endpoint and port 30443.
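For example, with the chapter's hostname convention and the HTTPS port, the Order Management web client URL takes this form:

```shell
# HTTPS endpoint for the Order Management web client, using the example
# hostname and the ingress HTTPS port 30443.
http_host=dev2.mobilecom.osm.org
https_port=30443

url="https://${http_host}:${https_port}/OrderManagement/Login.jsp"
echo "$url"
```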

Incoming JMS Connectivity

For external servers that are connected to OSM cloud native through SAF, the certificate for the t3 endpoint needs to be copied to the host where the external domain is running.

If your external WebLogic configuration uses "CustomIdentityAndJavaStandardTrust", then you can follow these instructions exactly to upload the certificate to the Java Standard Trust. If, however, you are using a custom trust, then you must upload the certificate into the custom trust keystore.

The keytool is found in the bin directory of your JDK installation. Choose an alias that uniquely describes the environment this certificate is from.

./keytool -importcert -v -trustcacerts -alias alias -file /path-to-copied-t3-certificate/t3.crt -keystore /path-to-jdk/jdk1.8.0_202/jre/lib/security/cacerts -storepass default_password
 
 
# For example
./keytool -importcert -v -trustcacerts -alias osmcn -file /scratch/t3.crt -keystore /jdk1.8.0_202/jre/lib/security/cacerts -storepass default_password

Update the SAF remote endpoint (on the external OSM instance) to use HTTPS and port 30443 (still with the t3 hostname).

From the SAF sample provided with the toolkit, the external system would configure the following remote endpoint URL:
https://t3.dev.supracom.osm.org:30443/oracle.communications.ordermanagement.SimpleResponseQueue

Configuring Access to External SSL-Enabled Systems

In order for OSM cloud native to participate successfully in a handshake with an external server for SAF connectivity, the SSL certificates from the external domain must be made available to the OSM cloud native setup. See "Enabling SSL on an External WebLogic Domain" for details about how you could do this for an on-premise WebLogic domain. If you have an external system that is already configured for SSL and working properly, you can skip this procedure and proceed to "Setting Up OSM Cloud Native for Outgoing Access".

Loading Certificates for Outgoing Access

In outgoing SSL, the certificates come from the external domain, whether on-premise or in another Kubernetes cluster. These certificates are then loaded into the OSM cloud native trust.

The following illustration shows how certificates are loaded into the OSM cloud native setup.

Figure 9-4 SSL Certificates for Outgoing Connectivity



Enabling SSL on an External WebLogic Domain

These instructions are specific to enabling SSL on a WebLogic domain that is external to the Kubernetes cluster where OSM cloud native is running.

To enable SSL on an external WebLogic domain:

  1. Create the certificates. Perform the following steps on the Linux host that has the on-premise WebLogic domain:
1. Use the Java keytool to generate public and private keys for the server. When the tool asks for your first and last name, use the FQDN of your server.
      path-to-jdk/jdk1.8.0_202/bin/keytool -genkeypair -keyalg RSA -keysize 1024 -alias alias -keystore keystore_file -keypass private_key_password -storepass keystore_password -validity 360
    2. Export the public key. This certificate will then be used in the OSM cloud native setup.
      path-to-jdk/jdk1.8.0_202/bin/keytool -exportcert -rfc -alias alias -storepass password -keystore keystore -file certificate
  2. Configure WebLogic server for SSL. Follow steps 3 to 17 (skip step 7) in the OSM - Encrypting Database Tablespaces and WebLogic Protocols (Doc ID 2399723.1) KM note on My Oracle Support.
  3. Validate that SSL is configured properly on this server by importing the certificate to a trust store. For this example, the Java trust store is used.
    path-to-jdk/jdk1.8.0_202/bin/keytool -importcert -trustcacerts -alias alias -file certificate -keystore path-to-jdk/jdk1.8.0_202/jre/lib/security/cacerts -storepass default_password
  4. Verify that t3s over the specified port is working by connecting using WLST.
    Navigate to the directory where the WLST scripts are located.
    # Set the environment variables. Some shells don't set the variables correctly so be sure to check that they are set afterward
    path-to-FMW/Oracle/Middleware/Oracle_Home/oracle_common/common/bin/setWlsEnv.sh
     
    # ensure CLASSPATH and PATH are set
    echo $CLASSPATH
     
    java -Dweblogic.security.JavaStandardTrustKeyStorePassPhrase=default_password weblogic.WLST
     
    # once wlst starts, connect using t3s
    wls:offline> connect('<admin user>','<admin password>','t3s://<server>:7002')
     
    # If successful you will see the prompt
    wls:>domain_name/serverConfig>
     
    #when finished disconnect
    disconnect()
Setting Up OSM Cloud Native for Outgoing Access

To set up OSM cloud native for outgoing access:

  1. Set up custom trust using the following steps:
    1. Load the certificate from your remote server into a trust store and make it available to the OSM cloud native instance.
      Use the Java keytool to create a jks file (truststore) that holds the certificate from your SSL server:
      keytool -importcert -v -alias alias -file /path-to/certificate.cer -keystore /path-to/truststore.jks -storepass password

      Note:

      Repeat this step to add as many trusted certificates as required.
    2. Create a Kubernetes secret to hold the truststore file and the passphrase. The secret name should match the truststore name.
      # manually
      kubectl create secret generic trust_secret_name -n project --from-file=truststore.jks --from-literal=passphrase=password
       
      # verify
      kubectl get secret -n project trust_secret_name -o yaml
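Kubernetes stores secret values base64-encoded, so the passphrase shown in the verification output is not plain text. A minimal sketch of the decode step follows; the cluster-side command is shown commented because it needs a live cluster, and `trust_secret_name` and `password` are the placeholders used above:

```shell
# What kubectl stores for --from-literal=passphrase=password:
encoded=$(printf 'password' | base64)
echo "$encoded"                    # the value shown under .data.passphrase
# Decoding recovers the original literal
printf '%s' "$encoded" | base64 -d
# Against a live cluster, the equivalent one-liner would be:
# kubectl get secret trust_secret_name -n project \
#   -o jsonpath='{.data.passphrase}' | base64 -d
```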
    3. Edit the instance specification, setting the trust name.
      # SSL trust and identity
      ssl:
        trust:
          name: trust_secret_name    # The name of the secret holding the remote server truststore contents and passphrase
        identity:
          useDemoIdentity: true
       
      # leave remaining fields commented out

    When custom trust is enabled, the useDemoIdentity field can be left set to true for development instances. This configures the WebLogic server to use the demo identity that is shipped with WebLogic. For production instances, configure a custom identity as described in the next step.

  2. (Optional) Set up custom identity using the following steps:
    1. Create the keystore.
      keytool -genkeypair -keyalg RSA -keysize 1024 -alias <alias> -keystore identity.jks -keypass private_key_password -storepass keystore_password -validity 360
    2. Create the secret.
      kubectl create secret generic secretName -n project --from-file=keystore.jks --from-literal=passphrase=password
        
      # verify
      kubectl get secret -n project secretName -o yaml
    3. Edit the specification file:
      identity:
        useDemoIdentity: false
        name: secretName  # only valid when useDemoIdentity is false. The name of the secret that contains the identity keystore file and passphrase.
        alias: alias      # only valid when useDemoIdentity is false. The alias of the key pair within the keystore.
  3. Configure SAF by updating the SAF connection configuration in the OSM cloud native instance specification file to reflect t3s and the SSL port:
    safConnectionConfig:
      - name: simple
        t3Url: t3s://remote_server:7002
        secretName: simplesecret
  4. Create the OSM cloud native instance as usual.

Adding Additional Certificates to an Existing Trust

You can add additional certificates to an existing trust while an OSM cloud native instance is up and running.

To add additional certificates to an existing trust:

  1. Set up OSM cloud native for outgoing access. See "Configuring Access to External SSL-Enabled Systems" for instructions.
  2. Copy the certificates from your remote server and load them into the existing truststore.jks file you had created:
    keytool -importcert -v -alias alias -file /path-to/certificate.cer -keystore /path-to/truststore.jks -storepass password
  3. Re-create your Kubernetes secret using the same name as you did previously:
    # manually 
    kubectl create secret generic trust_secret_name -n project --from-file=truststore.jks --from-literal=passphrase=password 
    
    # verify 
    kubectl get secret -n project trust_secret_name -o yaml
  4. Upgrade the instance to force the WebLogic Operator to re-evaluate:
    $OSM_CNTK/scripts/upgrade-instance.sh -p project -i instance -s $SPEC_PATH 

Debugging SSL

To debug SSL, do the following:

  • Verify Hostname
  • Enable SSL logging

Verifying Hostname

When the keystore is generated for the on-premise server, if the FQDN is not specified, you may have to disable hostname verification. This is not secure and should be done only in development environments.

To do so, add the following Java option to the managed server in the project specification:
managedServers:
 
  project:
    #JAVA_OPTIONS for all managed servers at project level
    java_options: "-Dweblogic.security.SSL.ignoreHostnameVerification=true"

Enabling SSL Logging

When trying to establish the handshake between servers, enabling SSL-specific logging makes failures much easier to diagnose.

Add the following Java options to your managed server in the project specification. This should be done for your external server as well.
managedServers:
 
  project:
    #JAVA_OPTIONS for all managed servers at project level
    java_options: "-Dweblogic.StdoutDebugEnabled=true -Dssl.debug=true -Dweblogic.security.SSL.verbose=true -Dweblogic.debug.DebugSecuritySSL=true -Djavax.net.debug=ssl"