1 Overview of the OSM Cloud Native Deployment

Get an overview of Oracle Communications Order and Service Management (OSM) cloud native deployment, architecture, and the OSM cloud native toolkit.

This chapter provides an overview of Oracle Communications Order and Service Management (OSM) deployed in a cloud native environment using container images and a Kubernetes cluster.

About the OSM Cloud Native Deployment

You can deploy OSM in a Kubernetes-based shared cloud (cluster) while applying modern DevOps “Configuration as Code” principles to manage system configuration consistently and to automate system lifecycle management. You set up your own cloud native environment and then use the OSM cloud native toolkit to automate the deployment of OSM instances. By leveraging the pre-configured Helm charts, you can deploy OSM instances quickly, ensuring that your services are up and running in far less time than with a traditional deployment.

OSM cloud native supports the following deployment models:

  • On Private Kubernetes Cluster: OSM cloud native is certified for a general deployment of Kubernetes.

  • On Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE): OSM cloud native is certified to run on Oracle's hosted Kubernetes OKE service.

OSM Cloud Native Architecture

This section describes and illustrates the OSM cloud native architecture and the deployment environment.

The following diagram illustrates the OSM cloud native architecture.

Figure 1-1 OSM Cloud Native Architecture



The OSM cloud native architecture requires components such as the Kubernetes cluster and the WebLogic Kubernetes Operator, which you install and configure yourself. A single WebLogic Kubernetes Operator can manage multiple OSM domains in multiple namespaces. Each domain contains a dynamic cluster with multiple managed servers and is configured for integration with both required and optional components. The OSM cloud native artifacts consist of two container images, built using Docker, and the OSM cloud native toolkit.

About the WebLogic Domain

The following diagram illustrates the OSM cloud native deployment environment and the important concepts involved in producing a WebLogic domain that is capable of supporting OSM cloud native.

Figure 1-2 OSM Cloud Native Deployment Environment



In the deployment environment, the Helm chart that is provided with the OSM cloud native toolkit is deployed into the Kubernetes cluster, producing two Kubernetes resources. These resources are then consumed by the WebLogic Kubernetes Operator (WKO).

About Kubernetes Custom Resource Definitions (CRD) and Domain Configuration Config Map

The Kubernetes API provides extensions called custom resources. To learn more about Custom Resource Definitions (CRDs) and why they are used, see the Kubernetes CustomResourceDefinition documentation at: https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/

To configure the operation of your WebLogic domain, you set up and configure your own domain resource. The domain resource does not replace the traditional configuration of the WebLogic domain found in the domain configuration files; instead, it cooperates with those files to describe the Kubernetes artifacts of the corresponding domain. Refer to the Oracle WebLogic Kubernetes Operator User Guide to understand how to use a CRD to describe a WebLogic domain resource.
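
The following is a minimal sketch of what such a domain resource can look like when consumed by the WebLogic Kubernetes Operator. It is illustrative only; the names, image reference, and values shown here are hypothetical and are not the exact resource produced by the OSM cloud native toolkit.

  apiVersion: weblogic.oracle/v8
  kind: Domain
  metadata:
    name: project-instance                # hypothetical domain resource name
    namespace: project                    # hypothetical target namespace
  spec:
    domainHomeSourceType: FromModel       # domain configuration generated from a WDT model
    image: osm-cn-base:latest             # hypothetical OSM container image
    webLogicCredentialsSecret:
      name: project-instance-weblogic-credentials   # hypothetical secret holding WebLogic credentials
    clusters:
      - clusterName: c1                   # the dynamic cluster of managed servers
        replicas: 2                       # number of managed servers to run
    serverPod:
      env:
        - name: JAVA_OPTIONS              # JVM options passed to server pods
          value: "-Dweblogic.StdoutDebugEnabled=false"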

While the domain resource describes much of the operational detail for a domain, such as domain identification, secrets, pod creation, server instances, startup and shutdown, security, logging, clusters, admin and managed servers, and JVM options, the details of the more traditional configuration (deployed applications, JMS queues, data sources, and so on) are provided in a configuration map and are described using a metadata model specified by the WebLogic Server Deploy Tooling (WDT). The OSM cloud native toolkit provides the base configuration that produces these resources.
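
As a rough illustration of how such a configuration map carries WDT metadata, the sketch below wraps a small WDT-style model fragment in a Kubernetes ConfigMap. The config map name, data key, data source name, and JNDI name are hypothetical; the toolkit's actual configuration map content is more extensive.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: project-instance-wdt-config-map       # hypothetical config map name
  data:
    model.yaml: |                                # hypothetical data key holding a WDT model fragment
      resources:
        JDBCSystemResource:
          osmDataSource:                         # hypothetical data source name
            Target: 'c1'
            JdbcResource:
              JDBCDataSourceParams:
                JNDIName: 'jdbc/osmDataSource'   # hypothetical JNDI name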

About Oracle WebLogic Server Deploy Tooling (WDT)

The WebLogic Server Deploy Tooling (WDT) has the following main purposes:

  • It provides a metadata model that describes a WebLogic Server domain configuration.
  • It provides scripts that perform domain lifecycle operations, simplifying the definition and creation of domains. This capability provides an alternative to programmatic ways of defining domain configuration, such as the WebLogic Scripting Tool (WLST) or Java MBean manipulation.

The OSM cloud native toolkit leverages the WDT metadata model only. It does not use the scripting capabilities directly.

The toolkit provides the WDT metadata for a domain that is capable of supporting OSM. The toolkit enables you to easily override much of the base configuration through the use of Helm charts. Additionally, the toolkit framework allows you to add supplementary WDT metadata fragments to the domain. WDT provides tools that help with this task by inspecting an existing domain to produce the WDT metadata required for the configuration.
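
For example, a supplementary WDT metadata fragment that adds a JMS queue to the domain might look like the following sketch. The module, target, and queue names are hypothetical; refer to the samples in the toolkit for the exact conventions it expects.

  resources:
    JMSSystemResource:
      MySolutionJmsModule:                # hypothetical JMS module name
        Target: 'c1'                      # hypothetical cluster target
        SubDeployment:
          MySubDeployment:
            Target: 'osm_jms_server'      # hypothetical JMS server name
        JmsResource:
          UniformDistributedQueue:
            MySolutionQueue:              # hypothetical queue name
              JNDIName: 'jms/MySolutionQueue'    # hypothetical JNDI name
              SubDeploymentName: 'MySubDeployment'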

For more details about WDT, see the Oracle WebLogic Server Deploy Tooling documentation on GitHub at: https://github.com/oracle/weblogic-deploy-tooling

About Projects and Instances

A project corresponds to a function of OSM. Examples of OSM functions include order management roles such as SOM and COM. For example, in a COM role, a solution cartridge contains configuration requirements that dictate how COM processes orders. These might include the JMS queues used for messaging, credentials for communicating with external systems, additional applications deployed to the WebLogic server (such as external system emulators), or SAF setup for connectivity to peer systems. All of these configuration requirements can be scoped to a project.

An instance is a specific flavor of OSM for a given project. Test, development, and production are all instances of an OSM COM project. Some parts of the configuration make more sense when applied on a per-instance basis. For example, the production instance of OSM in a COM role uses different values for tuning parameters, and may employ a different logging and metrics strategy, than a development instance of COM.

In order to create a running WebLogic domain, the target project and instance must be determined so that the appropriate configuration can be assembled.

About Specification Layers

The OSM configuration defines the footprint, layout, and tuning of OSM. Treating this as one monolithic configuration is not optimal for sustainability or risk management, so OSM cloud native takes a layered approach to the configuration.

There are three layers defined, each scoping a set of values that are specific to the function of that layer:

  • Project: The project layer contains configuration that is common and applicable for all instances of an OSM project. Examples of content in this layer are JMS Queues and external authentication details.
  • Instance: The instance layer contains configuration that is unique to each OSM instance, such as database identity and cluster size.
  • Shape: The shape layer defines the hardware resource utilization and the resulting tuning. Java Heap Size is an example of a configuration value found in the shape specification.

The layers are implemented as specification files written in YAML:

  • project-instance.yaml
  • project.yaml
  • shape.yaml

You can build a palette of reusable, common portions of configuration for shapes and projects. When a new environment is needed, you pick from this palette and add an instance specification, which is unique to a single instance of OSM.
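
The following sketch gives a feel for how content might be divided across the layers. The keys and values shown are purely illustrative placeholders and do not reflect the actual schema of the toolkit's specification files.

  # project.yaml - shared by every instance of the project
  jmsQueues:                       # hypothetical key: queues required by the solution
    - name: MySolutionQueue

  # shape.yaml - hardware footprint and tuning
  javaHeapMb: 4096                 # hypothetical key: Java heap size for managed servers

  # project-instance.yaml - unique to a single instance
  dbServer: dbhost.example.com     # hypothetical key: database identity
  clusterSize: 2                   # hypothetical key: number of managed servers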

About Helm Overrides

The specification files are consumed in a hierarchical fashion. If a value is found in multiple specification files (layers), the one further up the hierarchy takes precedence. This gives the instance specification final control over its configuration, because it can override a value that is prescribed in either the shape or the project specification. It also allows Oracle to define a sealed base configuration, while still giving you control over the values used for any specific OSM instance.

Following are the specification files, listed in the order of the highest priority to the lowest:

  • project-instance.yaml
  • project.yaml
  • shape.yaml
  • values.yaml

Although the instance specification points to the shape specification to be used (which might suggest that the order above is out of sequence), the values in the shape specification are actually loaded for processing before the values in the instance specification.

The instance specification remains the final authority on any values that are found in multiple specification files.
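
As a simple illustration of this precedence, using a purely hypothetical key:

  # shape.yaml (lower priority)
  clusterSize: 1    # hypothetical key with a default value

  # project-instance.yaml (higher priority)
  clusterSize: 3    # this value wins for the instance

In this sketch, the instance is created with a cluster size of 3, because the instance specification overrides the value supplied by the shape specification.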

About the OSM Cloud Native Toolkit

The OSM cloud native toolkit is an archive file that includes the default configuration files, utility scripts, and samples to deploy OSM in a cloud native environment. With OSM cloud native, managing the domain configuration as code (CaC) is paramount. OSM cloud native provides guidance on effective management of this configuration to ensure that instances can be created in a standardized and repeatable fashion.

Contents of the OSM Cloud Native Toolkit

The OSM cloud native toolkit contains the following artifacts:

  • Helm charts for OSM and OSM database installer:
    • The Helm chart for OSM is located in $OSM_CNTK/charts/osm.
    • The Helm chart for the OSM DB Installer is located in $OSM_CNTK/charts/osm-dbinstaller.
  • WebLogic Server Deploy Tooling (WDT) metadata model for an OSM WebLogic domain
  • Mechanism to extend the domain, along with WDT samples and scripts for some common use cases
  • Utility scripts to help with the lifecycle of the WebLogic Kubernetes Operator
  • Sample scripts to manage prerequisite secrets. These are not pipeline-friendly.
  • Scripts to manage the lifecycle of an OSM instance. These are pipeline-friendly.