2 Planning and Validating Your Cloud Environment

In preparation for Oracle Communications Order and Service Management (OSM) cloud native deployment, you must set up and validate prerequisite software. This chapter provides information about planning, setting up, and validating the environment for OSM cloud native deployment.

If you are already familiar with traditional OSM, for important information on the differences introduced by OSM cloud native, see "Differences Between OSM Cloud Native and OSM Traditional Deployments".

Required Components for OSM Cloud Native

In order to run, manage, and monitor the OSM cloud native deployment, the following components and capabilities are required. These must be configured in the cloud environment:

  • Kubernetes Cluster
  • Oracle Multitenant Container Database (CDB)
  • Container Image Management
  • Helm
  • Oracle WebLogic Server Kubernetes Operator
  • Load Balancer
  • Domain Name System (DNS)
  • Persistent Volumes
  • Authentication
  • Secrets Management
  • Kubernetes Monitoring Toolchain
  • Application Logs and Metrics Toolchain

For details about the required versions of these components, see OSM Compatibility Matrix.

In order to utilize the full flexibility, reliability and value of the deployment, the following aspects must also be set up:

  • Continuous Integration (CI) pipelines for custom images and cartridges
  • Continuous Delivery (CD) pipelines for creating, scaling, updating, and deleting instances of the cloud native deployment

Planning Your Cloud Native Environment

This section provides information about planning and setting up the OSM cloud native environment. As part of preparing your environment for OSM cloud native, you choose, install, and set up various components and services in ways that are best suited for your cloud native environment. The following sections provide information about each of those required components and services, the available options that you can choose from, and the way you must set them up for your OSM cloud native environment.

Setting Up Your Kubernetes Cluster

For OSM cloud native, Kubernetes worker nodes must be capable of running Linux 7.x pods with software compiled for Intel 64-bit cores. A reliable cluster must have multiple worker nodes spread over separate physical infrastructure and a very reliable cluster must have multiple master nodes spread over separate physical infrastructure.

The following diagram illustrates a Kubernetes cluster and the components that it interacts with.

OSM cloud native requires:

  • Kubernetes
    To check the version, run the following command:
    kubectl version
  • Flannel
    To check the version, run the following commands on the master node running the kube-flannel pod:
    docker images | grep flannel
    kubectl get pods --all-namespaces | grep flannel
  • Docker
    To check the version, run the following command:
    docker version

Typically, Kubernetes nodes are not used directly to run or monitor Kubernetes workloads. You must reserve worker node resources for the execution of Kubernetes workloads. However, multiple users (manual and automated) of the cluster require a point from which to access the cluster and operate on it. This can be achieved by using kubectl commands (either directly on the command line, in shell scripts, or through Helm) or Kubernetes APIs. For this purpose, set aside a separate host or set of hosts. Operational and administrative access to the Kubernetes cluster can be restricted to these hosts, and specific users can be given named accounts on these hosts to reduce cluster exposure and promote traceability of actions.
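
For example, the following commands, run from such an operations host (assuming a kubeconfig for the cluster is already in place), provide a quick check that the host can reach and operate on the cluster:

$ kubectl cluster-info        # confirms that the Kubernetes API server endpoint is reachable
$ kubectl get nodes -o wide   # lists the master and worker nodes and their readiness
$ helm version                # confirms that Helm can be invoked from this host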

Typically, the Continuous Delivery pipeline automation deploys directly on a set of such operations hosts (as in the case of Jenkins) or leverages runners deployed on such operations hosts (as in the case of GitLab CI). These hosts must run Linux, with all interactive-use packages installed to support tools such as Bash, Wget, cURL, Hostname, Sed, AWK, cut, and grep. An example of this is the Oracle Linux 7.6 image (Oracle-Linux-7.6-2019.08.02-0) on Oracle Cloud Infrastructure.

In addition, you need the appropriate tools to connect to your overall environment, including the Kubernetes cluster. For instance, for a Container Engine for Kubernetes (OKE) cluster, you must install and configure the Oracle Cloud Infrastructure Command Line Interface.

Additional integrations may need to include LDAP for users to be able to log in to this host, appropriate NFS mounts for home directories, security lists and firewall configuration for access to the overall environment, and so on.

Kubernetes worker nodes should be configured with the recommended operating system kernel parameters listed in "Preparing the Operating System" in the OSM Installation Guide or, if they are engineered systems, in "Installing OSM on Engineered Systems" in the OSM Installation Guide. Use the documented values as the minimum values to set for each parameter. Ensure that OS kernel parameter configuration is persistent, so as to survive a reboot.
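
As an illustration, one common way to make kernel parameter settings persistent on Linux worker nodes is to place them in a file under /etc/sysctl.d and reload the configuration. The file name is arbitrary and the parameter entries shown below are placeholders; use the actual parameters and minimum values documented in the OSM Installation Guide.

# Example only: substitute the actual parameters and minimum values from the OSM Installation Guide.
$ sudo tee /etc/sysctl.d/98-osm-worker.conf <<EOF
# parameter_name = minimum_value
EOF
$ sudo sysctl --system   # applies all sysctl configuration files now; the file is also applied automatically at boot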

The basic OSM cloud native instance, for which specification files are provided with the toolkit, requires up to 12 GB of RAM and 3 CPUs, in terms of Kubernetes worker node capacity. A small increment is needed for WebLogic Kubernetes Operator and Traefik; refer to those projects for details. For a detailed breakdown of CPU and memory capacity requirements, see "Working with Shapes."

Synchronizing Time Across Servers

It is important that you synchronize the date and time across all machines that are involved in testing, including client test drivers and Kubernetes worker nodes. Oracle recommends that you do this using Network Time Protocol (NTP), rather than manual synchronization, and strongly recommends it for Production environments. Synchronization is important in inter-component communications and in capturing accurate run-time statistics.
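
For example, on hosts that use chrony for NTP, you can verify synchronization status on each machine as follows (a quick check only; your environment may use a different NTP implementation):

$ chronyc tracking      # shows the selected NTP source and the current clock offset
$ timedatectl status    # the output should indicate that NTP is enabled and synchronized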

Provisioning Oracle Multitenant Container Database (CDB)

OSM cloud native architecture is best supported by the multitenant architecture that enables an Oracle database to function as a multitenant container database (CDB). A container is either a pluggable database (PDB) or the root container. The root container is a collection of schemas, schema objects, and non-schema objects to which all PDBs belong. A PDB for OSM cloud native contains the OSM schema and the RCU schema. Each instance of OSM has its own PDB. OSM cloud native requires access to PDBs in an Oracle 19c Multitenant database. For more information about the benefits of Oracle Multitenant Architecture for database consolidation, see Oracle Database Concepts.

You can provision a CDB in an on-premise installation by following the instructions in Oracle Database Installation Guide for Linux. Alternatively, you can set it up as an Oracle Cloud Infrastructure DB system. For details on the supported versions, see OSM Compatibility Matrix. The provisioning process can vary based on the needs and the setup of your organization.

OSM cloud native requires certain settings to be configured at the CDB level. You can find those details in "Database Parameters" in OSM Installation Guide.

CDB hosts should be configured with OS kernel parameters as per Knowledge Article 1587357.1 on My Oracle Support. Use the recommended values specified in the knowledge article as the minimum values. Ensure that OS parameter configuration is persistent so as to survive a reboot.

Once the CDB is ready, you can follow one of the following strategies for the PDB:

Provisioning an Empty PDB
To create an empty PDB:
  1. Run the following SQL commands using the sys dba account for the CDB:
    CREATE PLUGGABLE DATABASE _replace_this_text_with_db_service_name_
      ADMIN USER _replace_this_text_with_admin_name_
      IDENTIFIED BY "_replace_this_text_with_real_admin_password_"
      DEFAULT TABLESPACE "USERS" DATAFILE '+DATA' SIZE 5M REUSE AUTOEXTEND ON;
    ALTER PLUGGABLE DATABASE _replace_this_text_with_db_service_name_ open instances = all;
    ALTER PLUGGABLE DATABASE _replace_this_text_with_db_service_name_ save state instances = all;
    alter session set container=_replace_this_text_with_db_service_name_;
    GRANT CREATE ANY CONTEXT TO SYS WITH ADMIN OPTION;
    GRANT CREATE ANY CONTEXT TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT CREATE ANY VIEW TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT CREATE SNAPSHOT TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT CREATE SYNONYM TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT CREATE TABLE TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT CREATE USER TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT CREATE VIEW TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT CREATE materialized view to _replace_this_text_with_admin_name_;
    GRANT GRANT ANY PRIVILEGE TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT QUERY REWRITE TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT UNLIMITED TABLESPACE TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT SELECT ON SYS.DBA_TABLESPACES TO _replace_this_text_with_admin_name_ WITH GRANT OPTION;
    GRANT SELECT ON SYS.V_$PARAMETER TO _replace_this_text_with_admin_name_ WITH GRANT OPTION;
    GRANT SELECT on SYS.dba_jobs to _replace_this_text_with_admin_name_ with grant option;
    GRANT "CONNECT" TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT "DBA" TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT "EXP_FULL_DATABASE" TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT "IMP_FULL_DATABASE" TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT "RESOURCE" TO _replace_this_text_with_admin_name_ WITH ADMIN OPTION;
    GRANT EXECUTE ON SYS.DBMS_LOCK TO _replace_this_text_with_admin_name_ WITH GRANT OPTION;
    grant execute on utl_file to _replace_this_text_with_admin_name_ with grant option;
    grant sysdba to _replace_this_text_with_admin_name_;
    ADMINISTER KEY MANAGEMENT SET KEY USING TAG 'tag' FORCE KEYSTORE IDENTIFIED BY "sys_password" WITH BACKUP USING 'db_service_name_backup';
  2. Log in to the PDB as the sys dba account for the PDB (defined by the "_replace_this_text_with_admin_name_" parameter in the above commands) and adjust the PDB tablespace by running the following commands:

    Note:

    In the commands, replace DATA with the proper disk group name from v$asm_diskgroup.
    create tablespace osm datafile '+DATA' size 1024m reuse autoextend on next 64m;
    ALTER PLUGGABLE DATABASE DEFAULT TABLESPACE OSM;

Choosing Tablespaces

OSM cloud native supports the OSM best-practice of separate tablespaces for order data, order data indexes, OSM model data, and OSM model data indexes. Production and production-like instances must utilize this separation.

For a simple instance, such as a developer instance, separate tablespaces are not necessary. The default tablespace can be named as the tablespace for each of these categories in the OSM cloud native specification files.

When creating PDBs for instances that use separate tablespaces, add the additional tablespaces by using the "sys dba" account for the PDB:

create tablespace osm_model datafile '+DATA' size 1024m reuse autoextend on next 64m;
create tablespace osm_model_index datafile '+DATA' size 1024m reuse autoextend on next 64m;
create tablespace osm_order datafile '+DATA' size 1024m reuse autoextend on next 64m;
create tablespace osm_order_index datafile '+DATA' size 1024m reuse autoextend on next 64m;

Choose tablespace names and datafiles as per your database management guidelines. Choose the initial tablespace size depending on the desired OSM partition size as per the following table:

Table 2-1 Partition Sizes and Tablespace Sizes

Partition Size            Tablespace Size
2000000 (2 million)       1024 MB or greater
10000000 (10 million)     10240 MB or greater
20000000 (20 million)     20480 MB or greater

The tablespace names and the partition size chosen will be required to populate the OSM cloud native specification files for the instance that connects to this PDB.

Oracle recommends using the smaller partition size for developer instances and small test instances. Larger partition sizes are applicable for heavy-duty test instances (for example, for stress tests and performance tests) and production-grade instances.

If securing OSM data is a requirement, the recommended approach is to use transparent data encryption (TDE) to encrypt the tablespaces used to store OSM and WebLogic data. For more details, see OSM - Encrypting Database Tablespaces and WebLogic Protocols (Doc ID 2399723.1) knowledge article on My Oracle Support.

In that context, note that all OSM data is stored in tablespaces and, as a result, it is not necessary to supplement TDE encryption by setting the database parameter db_securefile to PREFERRED. While OSM supports PREFERRED, which has been the default since 12c, it is sufficient to set db_securefile to PERMITTED.

Provisioning a Seed OSM PDB

You can create a "master PDB" for OSM cloud native for a particular project or a subset of users by cloning a seed PDB and then running the OSM cloud native DB installer on it to deploy the OSM schema. At this point, you can deploy your cartridges to this PDB. The resulting PDB can serve as a master that you can clone for each instance that needs those set of cartridges.

You can also add the Fusion MiddleWare RCU DB schema to the master PDB. However, the master PDB must never be directly used in an OSM cloud native instance, as the RCU DB schema contents are inextricably linked to that instance. OSM cloud native instances must only use clones of the master PDB.

The advantage of a master PDB for OSM cloud native is that it standardizes a PDB for a significant number of users, and eliminates the need to perform some of the tasks related to creating instances in the pipeline.

About Container Image Management

An OSM cloud native deployment generates container images for OSM and OSM database installer. Additionally, images are downloaded for WebLogic Kubernetes Operator and Traefik (depending on the choice of Ingress controllers).

Oracle highly recommends that you create a private container repository and ensure that all worker nodes have access to that repository. Images are saved in this repository, and all nodes can then pull them from it. This may require networking changes (such as routes and proxy settings) and authentication for logging in to the repository. Oracle recommends that you choose a repository that provides centralized storage and management of not just container images, but also of other artifacts such as OSM cartridge PAR files, Fusion MiddleWare patch ZIP files, and so on, as needed.

Failing to ensure that all nodes have access to a centralized repository means that images have to be synced to the hosts manually or through custom mechanisms (for example, using scripts), which are error-prone operations as worker nodes are commissioned, decommissioned, or even rebooted. When an image is not available on a particular worker node, the pods that use that image are either not scheduled to that node, wasting resources, or they fail on that node. If image names and tags are kept constant (such as myapp:latest), a pod may pick up a pre-existing image of the same name and tag, leading to unexpected and hard-to-debug behavior.
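
For example, the following commands illustrate re-tagging a locally built or downloaded image and pushing it to a private repository; the registry hostname, repository path, and image name are placeholders for your own environment:

$ docker login registry.example.com                                       # authenticate to the private repository
$ docker tag osm-cn-base:7.4.1 registry.example.com/osm/osm-cn-base:7.4.1
$ docker push registry.example.com/osm/osm-cn-base:7.4.1
# From a worker node (or via your CD automation), confirm that the image can be pulled:
$ docker pull registry.example.com/osm/osm-cn-base:7.4.1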

Installing Helm

OSM cloud native requires Helm, which delivers reliability, productivity, consistency, and ease of use.

In an OSM cloud native environment, using Helm enables you to achieve the following:

  • You can apply custom domain configuration by using a single and consistent mechanism, which leads to an increase in productivity. You no longer need to apply configuration changes through multiple interfaces such as WebLogic Console, WLST, and WebLogic Server MBeans.
  • Changing the OSM domain configuration in the traditional installations is a manual and multi-step process which may lead to errors. This can be eliminated with Helm because of the following features:
    • Helm Lint allows pre-validation of syntax issues before changes are applied
    • Multiple changes can be pushed to the running instance with a single upgrade command
    • Configuration changes may map to updates across multiple Kubernetes resources (such as domain resources, config maps, and so on). With Helm, you merely update the Helm release, and it is Helm's responsibility to determine which Kubernetes resources are affected.
  • Including configuration in Helm charts allows the content to be managed as code, through source control, which is a fundamental principle of modern DevOps practices.

To co-exist with older Helm versions in production environments, OSM requires Helm 3.1.3 or later, saved as helm in the PATH.

The following text shows sample commands for installing and validating Helm:

$ cd some-tmp-dir
$ wget https://get.helm.sh/helm-v3.4.1-linux-amd64.tar.gz
$ tar -zxvf helm-v3.4.1-linux-amd64.tar.gz
 
# Find the helm binary in the unpacked directory and move it to its desired destination. This requires root privileges.
$ sudo mv linux-amd64/helm /usr/local/bin/helm
 
# Optional: If access to the deprecated Helm repository "stable" is required, uncomment and run
# helm repo add stable https://charts.helm.sh/stable
 
# verify Helm version
$ helm version
version.BuildInfo{Version:"v3.4.1", GitCommit:"c4e74854886b2efe3321e185578e6db9be0a6e29", GitTreeState:"clean", GoVersion:"go1.14.11"}

Helm leverages kubeconfig for users running the helm command to access the Kubernetes cluster. By default, this is $HOME/.kube/config. Helm inherits the permissions set up for this access into the cluster. You must ensure that if RBAC is configured, then sufficient cluster permissions are granted to users running Helm.
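
For example, a user who will run Helm can confirm their effective permissions in the target namespace by using kubectl auth can-i; the namespace name here is illustrative:

$ kubectl auth can-i create secrets -n osm-dev        # expect "yes" for users who run Helm installs
$ kubectl auth can-i create deployments -n osm-dev
$ kubectl auth can-i list pods -n osm-dev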

Setting Up Oracle WebLogic Server Kubernetes Operator

Oracle WebLogic Server Kubernetes Operator provides WebLogic servers and clusters in a manner that is compatible with Kubernetes. The WebLogic Server Kubernetes Operator software is available as a container image. For OSM cloud native, you must download the WebLogic Server Kubernetes Operator container image and clone the WebLogic Server Kubernetes Operator GitHub repository.

To clone the repository, run the following commands:
$ cd path_to_wlsko_repository
$ git clone https://github.com/oracle/weblogic-kubernetes-operator.git
$ cd weblogic-kubernetes-operator
$ git checkout tags/v3.1.0
# This is the tag of v3.1.0 GA

For details about the required version of WKO, see OSM Compatibility Matrix.

After cloning the repository, set the WLSKO_HOME environment variable to the location of the WKO git repository, by running the following command:
$ export WLSKO_HOME=path-to-wlsko-repo/weblogic-kubernetes-operator

Note:

Developers can add the export command to ~/.bashrc or ~/.profile so that it is always set.

For more details on WKO, see Oracle WebLogic Kubernetes Operator User Guide.

For instructions on validating the operation of the WebLogic Server Kubernetes Operator on your Kubernetes cluster, see "Validating Your Cloud Environment".
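
As a minimal sketch, the operator is typically installed into its own namespace using the Helm chart bundled in the cloned repository. The namespace, release name, and monitored domain namespace below are assumptions; see the WKO documentation for the chart values required in your environment.

$ kubectl create namespace wlsko-ns                            # hypothetical namespace for the operator
$ cd $WLSKO_HOME
$ helm install weblogic-operator kubernetes/charts/weblogic-operator \
    --namespace wlsko-ns \
    --set "domainNamespaces={osm-dev}"                         # hypothetical namespace of the OSM instance
$ kubectl get pods -n wlsko-ns                                 # the operator pod should reach the Running state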

About Load Balancing and Ingress Controller

Each OSM cloud native instance is a WebLogic cluster running in Kubernetes. To access application endpoints, you must enable HTTP/S connectivity to the cluster through an appropriate mechanism. This mechanism must be able to route traffic to the appropriate OSM cloud native instance in the Kubernetes cluster (as there can be many) and must be able to distribute traffic to the multiple Managed Server pods within a given instance. Each instance must be insulated from the traffic of other instances. Distribution within an instance must allow for session stickiness so that OSM client UIs bind to a managed server wherever possible and therefore do not require arbitrary re-authentication by the user. In the case of HTTPS, the load balancing mechanism must enable TLS and handle it appropriately.

For OSM cloud native, an ingress controller is required to expose appropriate services from the OSM cluster and direct traffic appropriately to the cluster members. An external load balancer is an optional add-on.

The ingress controller monitors the ingress objects created by the OSM cloud native deployment, and acts on the configuration embedded in these objects to expose OSM HTTP and HTTPS services to the external network. This is achieved using NodePort services exposed by the ingress controller.

The ingress controller must support:
  • Sticky routing (based on standard session cookie).
  • Load balancing across the OSM managed servers (back-end servers).
  • SSL termination and injecting headers into incoming traffic.
Examples of such ingress controllers include Traefik, Voyager, and Nginx. The OSM cloud native toolkit provides samples and documentation that use Traefik as the ingress controller.

An external load balancer serves to provide a highly reliable single-point access into the services exposed by the Kubernetes cluster. In this case, this would be the NodePort services exposed by the ingress controller on behalf of the OSM cloud native instance. Using a load balancer removes the need to expose Kubernetes node IPs to the larger user base, and insulates the users from changes (in terms of nodes appearing or being decommissioned) to the Kubernetes cluster. It also serves to enforce access policies. The OSM cloud native toolkit includes samples and documentation that show integration with Oracle Cloud Infrastructure LBaaS when Oracle OKE is used as the Kubernetes environment.

Using Traefik as the Ingress Controller

If you choose to use Traefik as the ingress controller, the Kubernetes environment must have the Traefik ingress controller installed and configured.

For details about the required version of Traefik, see OSM Compatibility Matrix.

To install and configure Traefik, do the following:

Note:

Set kubernetes.namespaces and the chart version explicitly on the command line.
  1. Ensure that the following tasks are completed:
    • Docker daemons in your Kubernetes environment are configured for access to Docker Hub.
    • The Helm repository is updated successfully as per the Helm section in this chapter.
  2. Run the following commands:
    $ export TRAEFIK_NS=traefik 
    $ kubectl create namespace $TRAEFIK_NS 
    $ helm repo add traefik https://helm.traefik.io/traefik
    $ helm install traefik-operator traefik/traefik \
     --namespace $TRAEFIK_NS \
     --version 9.11.0 \
     --values $OSM_CNTK/samples/charts/traefik/values.yaml \
      --set "kubernetes.namespaces={$TRAEFIK_NS}"

Once the helm install command succeeds, the Traefik operator monitors the namespaces listed in its kubernetes.namespaces field for Ingress objects.
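
You can confirm that the Traefik operator is running and that its service is exposed as expected:

$ kubectl get pods -n $TRAEFIK_NS    # the traefik-operator pod should be in the Running state
$ kubectl get svc -n $TRAEFIK_NS     # note the service type and the ports exposed for web and websecure traffic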

Using Domain Name System (DNS)

A Kubernetes cluster can have many routable entrypoints. Common choices are:

  • External load balancer (IP and port)
  • Ingress controller service (master node IPs and ingress port)
  • Ingress controller service (worker node IPs and ingress port)

You must identify the proper entrypoint for your Kubernetes cluster.

OSM cloud native requires hostnames to be mapped to routable entrypoints into the Kubernetes cluster. Regardless of the actual entrypoints (external load balancer, Kubernetes master node, or worker nodes), users who need to communicate with the OSM cloud native instances require name resolution.

The access hostnames take the prefix.domain form. prefix and domain are determined by the specifications of the OSM cloud native configuration for a given deployment. prefix is unique to the deployment, while domain is common for multiple deployments.

The default domain in OSM cloud native toolkit is osm.org.

For a particular deployment, as an example, this results in the following addresses:

  • dev1.wireless.osm.org (for HTTP access)
  • admin.dev1.wireless.osm.org (for WebLogic Console access)
  • t3.dev1.wireless.osm.org (for T3 JMS/SAF access)

These "hostnames" must be routable to the entry point of your Ingress Controller or Load Balancer. For a basic validation, on the systems that access the deployment, edit the local hosts file to add the following entry:

Note:

The hosts file is located in /etc/hosts on Linux and MacOS machines and in C:\Windows\System32\drivers\etc\hosts on Windows machines.
ip_address  dev1.wireless.osm.org   admin.dev1.wireless.osm.org  t3.dev1.wireless.osm.org

However, the solution of editing the hosts file is not easy to scale and co-ordinate across multiple users and multiple access environments. A better solution is to leverage DNS services at the enterprise level.

With DNS servers, a more efficient mechanism can be adopted: the creation of a wildcard A record at the domain level:
A-Record: *.osm.org IP_address

If the target is not a load balancer, but the Kubernetes cluster nodes themselves, a DNS service can also insulate the user from relying on any single node IP. The DNS entry can be configured to map *.osm.org to all the current Kubernetes cluster node IP addresses. You must update this mapping as the Kubernetes cluster changes with adding a new node, removing an old node, reassigning the IP address of a node, and so on.

With these two approaches, you can set up an enterprise DNS once and modify it only infrequently.
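
Once the DNS (or hosts file) entries are in place, a quick check from a client machine confirms that the access hostnames resolve to the intended entrypoint; the hostnames below match the earlier example:

$ nslookup dev1.wireless.osm.org          # should return the IP address of your load balancer or cluster entrypoint
$ ping -c 1 admin.dev1.wireless.osm.org   # confirms that the wildcard or host entries cover the other hostnames too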

Configuring Kubernetes Persistent Volumes

Typically, runtime artifacts in OSM cloud native are created within the respective pod filesystems. As a result, they are lost when the pod is deleted. These artifacts include application logs, Fusion MiddleWare logs, and JVM Java Flight Recorder data.

While this impermanence may be acceptable for highly transient environments, it is typically desirable to have access to these artifacts outside of the lifecycle of the OSM cloud native instance. It is also highly recommended to deploy a toolchain for logs to provide a centralized view with a dashboard. To allow for artifacts to survive independent of the pod, OSM cloud native allows for them to be maintained on Kubernetes Persistent Volumes.

OSM cloud native does not dictate the technology that supports Persistent Volumes, but provides samples for NFS-based persistence. Additionally, for OSM cloud native on an Oracle OKE cloud, you can use persistence based on File Storage Service (FSS).

Regardless of the persistence provider chosen, persistent volumes for OSM cloud native use must be configured:
  • With accessMode ReadWriteMany
  • With capacity to support intended workload

Log size and retention policies can be configured as part of the shape specification.
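
The following is a minimal sketch of an NFS-backed PersistentVolume declaration that satisfies these requirements; the name, capacity, NFS server, and export path are placeholders for your environment:

cat > osm-nfs-pv.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: osm-nfs-pv                # placeholder name
spec:
  capacity:
    storage: 20Gi                 # size this to support the intended workload
  accessModes:
    - ReadWriteMany               # required access mode for OSM cloud native
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com       # placeholder NFS server
    path: /export/osm-logs        # placeholder exported filesystem
EOF
kubectl apply -f ./osm-nfs-pv.yaml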

About NFS-based Persistence

For use with OSM cloud native, one or more NFS servers must be designated.

It is highly recommended to split the servers as follows:

  • At least one for the development instances and the non-sensitive test instances (for example, for Integration testing)
  • At least one for the sensitive test instances (for example, for Performance testing, Stress testing, and production staging)
  • One for the production instance

In general, ensure that the sensitive instances have dedicated NFS support, so that they do not compete for disk space or network IOPS with others.

The exported filesystems must have enough capacity to support the intended workload. Given the dynamic nature of the OSM cloud native instances, and the fact that the OSM logging volume is highly dependent on cartridges and on the order volume, it is prudent to put in place a set of operational mechanisms to:

  • Monitor disk usage and warn when the usage crosses a threshold
  • Clean out the artifacts that are no longer needed

If a toolchain such as Elastic Stack picks up this data, then the cleanup task can be built into this process itself. As artifacts are successfully populated into the toolchain, they can be deleted from the filesystem. You must take care to delete only log files that have rolled over.
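
As an illustration only, a scheduled job on the NFS server (or an operations host that mounts the export) could remove rolled-over log files once they have been ingested by the toolchain; the path, file pattern, and retention period here are hypothetical:

# Run periodically, for example from cron, after the toolchain has processed the files.
$ find /export/osm-logs -type f -name "*.log.*" -mtime +7 -delete   # removes rolled-over logs older than 7 days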

About Authentication

OSM cloud native requires the use of a two-level LDAP setup, with the embedded LDAP consulted first and the external LDAP consulted next. All OSM system users are created in the embedded LDAP during instance creation. It is highly recommended that all system users, and all users configured for automation tasks and API servicing, be created in the embedded LDAP for performance and reliability reasons. Oracle recommends that human users be served through access to an external (corporate) LDAP system.

For complete details on the requirement of an external authenticator, see "Using WebLogic Server Authenticators with OSM" in OSM System Administrator's Guide. When OSM cloud native instances use external authentication, ensure that you create separate users and groups for each environment (or class of environments) in the external LDAP service. The specifics of this depend on the LDAP service provider.

OSM cloud native toolkit provides a sample configuration that uses OpenLDAP to demonstrate how to integrate with external LDAP server for human users. For details on setting up the OpenLDAP server and the layout of the data within it, see "Setting Up Authentication."

Management of Secrets

OSM cloud native leverages Kubernetes Secrets to store sensitive information securely. This sensitive information is, at a minimum, the database credentials and the WebLogic administrator credentials. Additional credentials may be stored to authenticate with the external LDAP system. Your custom cartridges may need to communicate with other systems, such as Unified Inventory Management (UIM). The credentials for such systems too are managed as Kubernetes Secrets.

These secrets need to be secured over their lifecycle by the Kubernetes cluster administration. RBAC should be used to restrict the entities that can describe, view, or mount these credentials.
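
For example, a namespaced Role such as the following (a sketch with hypothetical names) can be bound only to the service accounts that genuinely need to read these secrets:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: osm-secret-reader     # hypothetical role name
  namespace: osm-dev          # hypothetical OSM instance namespace
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
EOF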

OSM cloud native scripts assume that a set of prerequisite secrets exists when they are invoked. As such, creation of the secrets is a prerequisite step in the pipeline. OSM cloud native toolkit provides a sample script to create some of the common secrets it needs, but this script is interactive and therefore not suitable for Continuous Delivery (CD) automation pipelines. The sample script serves to provide a basic mechanism to add secrets and illustrates the names and structure of the secrets that OSM cloud native requires.

You can create the secrets manually by using the sample script for each instance. The sample can be augmented to include additional custom secrets. This method requires exposing RBAC for creating secrets for a larger group of users, which might not be desirable. It can also result in human errors, such as mistyping a password, which will only be detected during the runtime of the OSM instance.
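
For reference, a Kubernetes secret of the kind these scripts create can also be produced non-interactively from a CD pipeline; the secret name, namespace, and keys below are illustrative and are not necessarily the exact names that the OSM cloud native scripts expect:

$ kubectl create secret generic dev1-wireless-database-credentials \
    --namespace osm-dev \
    --from-literal=username=osm_db_user \
    --from-literal=password='replace_with_real_password'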

A more sustainable and scalable option is using a secrets management system. There are several secrets management systems available for use with Kubernetes. Choose a system that offers a secure API (to be called from the CD pipeline) and populates the sensitive information as secrets into Kubernetes, as opposed to populating it into pods through environment variables. The installation, configuration, and validation of such a secrets management system is a prerequisite for adopting OSM cloud native. For details on setting up the secrets management system, see the documentation of the system that you adopt.

Using Kubernetes Monitoring Toolchain

A multi-node Kubernetes cluster with multiple users and an ever-changing workload requires a capable set of tools to monitor and manage the cluster. There are tools that provide data, rich visualizations and other capabilities such as alerts. OSM cloud native does not require any particular system to be used, but recommends using such a monitoring, visualization and alerting capability.

For OSM cloud native, the key aspects of monitoring are:

  • Worker capacity in CPU and memory. The pods take up a non-trivial amount of worker resources. For example, pods configured for production performance use 32 GB of memory. Monitoring the free capacity leads to predictable OSM instance creation and scale-up.
  • Worker node disk pressure
  • Worker node network pressure
  • Health of the core Kubernetes services
  • Health of WebLogic Kubernetes Operator
  • Health of Traefik (or other load balancer in the cluster)
Monitoring the namespaces and the pods that OSM cloud native uses provides a cross-instance view of OSM cloud native.
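
If the Kubernetes Metrics Server is installed in the cluster, a quick command-line view of worker capacity is available; richer toolchains build dashboards and alerts on the same data. The node name below is a placeholder.

$ kubectl top nodes                                               # CPU and memory usage per worker node (requires metrics-server)
$ kubectl describe node worker1 | grep -A 6 "Allocated resources" # resource requests and limits committed on a node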

About Application Logs and Metrics Toolchain

OSM cloud native generates all logs that traditional OSM and WebLogic Server typically generate. The logs can be sent to a shared filesystem for retention and for retrieval by a toolchain such as Elastic Stack.

In addition, OSM cloud native generates metrics and JVM Java Flight Recorder (JFR) data. OSM cloud native exposes metrics for scraping by Prometheus. These can then be processed by a metrics toolchain, with visualizations such as Grafana dashboards. Dashboards and alerts can be configured to enable sustainable monitoring of multiple OSM cloud native instances throughout their lifecycles. The OSM JFR data can be retrieved by Java Mission Control or similar tools to analyze the performance of OSM at the JVM level. Performance metrics include heap utilization, stuck threads, garbage collection, and so on.

Oracle highly recommends using a toolchain to effectively monitor OSM cloud native instances. The dynamic lifecycle in OSM cloud native, in terms of deploying, scaling and updating an instance, requires proper monitoring and management of the database resources as well. For non-sensitive environments such as development instances and some test instances, this largely implies monitoring the tablespace usage and the disk usage, and adding disk space as needed.

Another important facet is to track PDB usage to ensure PDBs that are no longer required are deleted so that the resources are freed up. Sensitive environments such as production and stress test instances require close monitoring of the database resources such as CPU, SGA/PGA, top-runner SQLs, and IOPS.

A key implication of the dynamic behavior of OSM cloud native on the database is when the instances are dehydrated. Very often, there is a requirement to have an OSM instance kept around even when it is not being actively used. This is because it captures a particular state (for example, cartridge lineup or order load) or is non-trivial to recreate. Such an environment lies idle until it is needed again. With OSM cloud native, there is no retained state within the run-time instance. The information on creating the instance is in the CD artifacts (the various specification files), and all the OSM application information is in the PDB. As a result, when the instance is not actively needed, all Kubernetes resources for it can be freed up by deleting the instance. This does not delete the PDB. The CD artifacts and the PDB can be used to rehydrate the instance when required. In the meantime, if the instance is not required for a while (or if there is database capacity pressure), the PDB can be unplugged to no longer consume any run-time resources. An unplugged PDB can even be transferred to another CDB and plugged in there.
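
As a sketch of this dehydration at the database level, an idle PDB can be closed and unplugged with standard SQL run as sysdba against the CDB; the PDB name and XML manifest path are placeholders:

sqlplus / as sysdba <<EOF
ALTER PLUGGABLE DATABASE dev1pdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE dev1pdb UNPLUG INTO '/tmp/dev1pdb.xml';
-- The unplugged PDB can later be plugged back into this CDB, or into another CDB, when the instance is rehydrated.
EOF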

Role of Continuous Integration (CI) Pipelines

The roles of CI pipelines in an OSM cloud native environment are as follows:

  • To generate standard OSM cartridge PAR files and store them in a central location with an appropriate path and naming convention for deployment. Developers run this automation as they modify cartridges for testing. Standalone mechanisms that generate "official" cartridge builds for testing and production use also run this automation.
  • To generate custom OSM cloud native images. The OSM cloud native images contain all the components needed to run OSM cloud native. However, you may require additional applications to be co-hosted by the OSM WebLogic cluster. Examples of such applications include MDBs to mediate communication with an external system and third-party Java EE monitoring tools. These applications must be layered on top of the OSM cloud native image to generate a custom image. Automation can accomplish this by using the file samples that are provided in the toolkit. The generated images must be uploaded to the internal container repository for use by deployment. The path and naming convention must be followed to designate images that are in development versus images that are ready for testing; and to version the images themselves.

OSM cloud native does not mandate the use of a specific set of tools for CI automation. Common choices are GitLab CI and Jenkins. As part of preparing for OSM cloud native, you must evaluate CI automation tools and choose one that fits your business needs and the desired source control mechanisms.

Role of Continuous Delivery (CD) Pipelines

The role of CD pipelines in an OSM cloud native environment is to perform operations on the target Kubernetes cluster to automate the full lifecycle of an OSM cloud native instance.

The following are the main operations you must implement:

  • Create instance: This must drive off the source-controlled OSM cloud native specification files and run through the various stages (secrets creation, PDB creation, OSM database installation, OSM instance creation, load balancer creation, and cartridge deployment) to create a new OSM cloud native instance. Variability should be built in for some key phases as secrets may already exist and may need to be updated, or PDB may already exist with or without OSM schema, and so on. As a result, this automation is written to a "create-or-update" pattern.

  • Update instance: This must be a variant of the instance creation automation, skipping the PDB creation and perhaps the load balancer (Ingress) creation. The automation takes the source-controlled OSM cloud native specification files, which have presumably been modified in some way since the instance was created, and runs through the steps to make those changes appear in the provisioned OSM instance. The specification changes could be as simple as a change in the number of desired Managed Servers, or could be as complex as introducing a new OSM container image. On the other hand, the only change might be a new version of the cartridge to be deployed.

  • Delete instance: This must clean up the Kubernetes resources used by the instance. Typically, the PDB is left alone to be handled separately, but it is possible to chain its deletion to the clean up operation as well.

OSM cloud native does not mandate the use of a particular set of tools for CD automation. Common choices are GitLab CD and Jenkins. As part of preparing for OSM cloud native, you must evaluate CD automation tools and choose one that fits your business needs and the target Kubernetes environment.

Planning Your Container Engine for Kubernetes (OKE) Cloud Environment

This section provides information about planning your cloud environment if you want to use Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) for OSM cloud native. Some of the components, services, and capabilities that are required and recommended for a cloud native environment are applicable to the Oracle OKE cloud environment as well.

  • Kubernetes and Container Images: You can choose from the version options available in OKE as long as the selected version conforms to the range described in the section about planning cloud native environment.
  • Container Image Management: OSM cloud native recommends using Oracle Cloud Infrastructure Registry with OKE. Any other repository that you use must be able to serve images to the OKE environment in a quick and reliable manner. The OSM cloud native images are of the order of 3 GB each.
  • Oracle Multitenant Database: It is strongly recommended to run Oracle DB outside of OKE, but within the same Oracle Cloud Infrastructure tenancy and region, as an Oracle DB service (BareMetal, VM, or ExaData). The database version should be 19c. You can choose between a standalone DB and a multi-node RAC.
  • Helm and Oracle WebLogic Kubernetes Operator: Install Helm and Oracle WebLogic Kubernetes Operator as described for the cloud native environment into the OKE cluster.
  • Persistent Volumes: Use NFS-based persistence. OSM cloud native recommends the use of Oracle Cloud Infrastructure File Storage service in the OKE context.
  • Authentication and Secrets Management: These aspects are common with the cloud native environment. Choose your mechanisms to deliver these capabilities and implement them in your OKE instance.
  • Monitoring Toolchains: The Oracle Cloud Infrastructure Console provides a view of the resources in the OKE cluster and also enables you to use the Kubernetes Dashboard. Any additional monitoring capability must be built up.
  • CI and CD Pipelines: The considerations and actions described for CI and CD pipelines in the cloud native environment apply to the OKE environment as well.

Compute Disk Space Requirements

Given the size of the OSM cloud native container images (approximately 2 GB), the size of the OSM cloud native containers, and the volume of the OSM logs generated, it is recommended that the OKE worker nodes have at least 40 GB of free space that the /var/lib filesystem can use. Add disk space if the worker nodes do not have the recommended free space in the /var/lib filesystem.

Work with your Oracle Cloud Infrastructure OKE administrator to ensure worker nodes have enough disk space. Common options are to use Compute shapes with larger boot volumes or to mount an Oracle Cloud Infrastructure Block Volume to /var/lib/docker.
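
A quick check on each worker node confirms the free space backing the container runtime:

$ df -h /var/lib/docker   # free space for container images and writable container layers
$ df -h /var/lib          # at least 40 GB of free space is recommended here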

Note:

The reference to logs in this section applies to the container logs and other infrastructure logs. The space considerations still apply even if the OSM cloud native logs are being sent to an NFS Persistent Volume.

Connectivity Requirements

OSM cloud native assumes that the connectivity between the OKE cluster and the Oracle CDBs is LAN-equivalent in reliability, performance, and throughput. This can be achieved by creating the Oracle CDBs within the same tenancy as the OKE cluster, and in the same Oracle Cloud Infrastructure region.

OSM cloud native allows for the full range of Oracle Cloud Infrastructure "cloud-to-ground" connectivity options for integrating the OKE cluster with on-premise applications and users. Selecting, provisioning, and testing such connectivity is a critical part of adopting Oracle Cloud Infrastructure OKE.

Using Load Balancer as a Service (LBaaS)

For load balancing, you have the option of using the services available in OKE. The infrastructure for OKE is provided by Oracle's IaaS offering, Oracle Cloud Infrastructure. In OKE, the master node IP address is not exposed to the tenants. The IP addresses of the worker nodes are also not guaranteed to be static. This makes DNS mapping difficult to achieve. Additionally, it is also required to balance the load between the worker nodes. In order to fulfill these requirements, you can use Load Balancer as a Service (LBaaS) of Oracle Cloud Infrastructure.

The load balancer can be created using the service descriptor in $OSM_CNTK/samples/oci-lb-traefik.yaml. The subnet ID referenced in this file must be filled in from your Oracle Cloud Infrastructure environment (using the subnet configured for your LBaaS). The port values assume you have installed Traefik using the unchanged sample values.

The configuration can be applied using the following command (or for traceability, by wrapping it into a Helm chart):

$ kubectl apply -f oci-lb-traefik.yaml
service/oci-lb-service-traefik configured

The Load Balancer service is created for Traefik pods in the Traefik namespace. Once the Load Balancer service is created successfully, an external IP address is allocated. This IP address must be used for DNS mapping.

$ kubectl get svc -n traefik oci-lb-service-traefik
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)       
oci-lb-service-traefik     LoadBalancer   10.96.103.118   100.77.24.178   80:32006/TCP,443:32307/TCP

For additional details, see the following sections.

About Using Oracle Cloud Infrastructure Domain Name System (DNS) Zones

While a custom DNS service can provide the addressing needs of OSM cloud native even when OSM is running in OKE, you can evaluate the option of Oracle Cloud Infrastructure Domain Name System (DNS) zones capability. Configuration of DNS zones (and integration with on-premise DNS systems) is not within the scope of OSM cloud native.

Using Persistent Volumes and File Storage Service (FSS)

In the OKE cluster, OSM cloud native can leverage the high performance, high capacity, high reliability File Storage Service (FSS) as the backing for the persistent volumes of OSM cloud native. There are two flavors of FSS usage in this context:
  • Allocating FSS by setting up an NFS mount target
  • Using FSS natively

To use FSS through an NFS mount target, see instructions for allocating FSS and setting up a Mount Target in "Creating File Systems" in the Oracle Cloud Infrastructure documentation. Note down the Mount Target IP address and the storage path and use these in the OSM cloud native instance specification as the NFS host and path. This approach is simple to set up and leverages the NFS storage provisioner that is typically available in all Kubernetes installations. However, the data flows through the mount target, which models an NFS server.

FSS can also be used natively, without requiring the NFS protocol. This can be achieved by leveraging the FSS storage provisioner supplied by OKE. The broad outline of how to do this is available in the blog post "Using File Storage Service with Container Engine for Kubernetes" on the Oracle Cloud Infrastructure blog.

Leveraging Oracle Cloud Infrastructure Services

For your OKE environment, you can leverage existing services and capabilities that are available with Oracle Cloud Infrastructure. The following table lists the Oracle Cloud Infrastructure services that you can leverage for your OKE cloud environment.

Table 2-2 Oracle Cloud Infrastructure Services for OKE Cloud Environment

Type of Service       Service                       Mandatory/Recommended/Optional
Developer Service     Container Clusters            Mandatory
Developer Service     Registry                      Recommended
Core Infrastructure   Compute Instances             Mandatory
Core Infrastructure   File Storage                  Recommended
Core Infrastructure   Block Volumes                 Optional
Core Infrastructure   Networking                    Mandatory
Core Infrastructure   Load Balancers                Recommended
Core Infrastructure   DNS Zones                     Optional
Database              BareMetal, VM, and ExaData    Recommended

Validating Your Cloud Environment

Before you start using your cloud environment for deploying OSM cloud native instances, you must validate the environment to ensure that it is set up properly and that any prevailing issues are identified and resolved. This section describes the tasks that you should perform to validate your cloud environment.

You can validate your cloud environment by:

  • Performing a smoke test of the Kubernetes cluster
  • Validating the common building blocks in the Kubernetes cluster
  • Running tasks and procedures in Oracle WebLogic Kubernetes Operator Quickstart

Performing a Smoke Test

You can perform a smoke test of your Kubernetes cloud environment by running nginx. This procedure validates basic routing within the Kubernetes cluster and access from outside the environment. It also allows for initial RBAC examination, as you need to have permissions to perform the smoke test. For the smoke test, you need the nginx 1.14.2 container image.

Note:

The nginx container image version required for the smoke test can change over time. See the content of the deployment.yaml file in step 3 of the following procedure to determine which image is required. Alternatively, ensure that you have logged in to Docker Hub so that the system can download the required image automatically.

To perform a smoke test:

  1. Download the nginx container image from Docker Hub.

    For details on managing container images, see "Container Image Management."

  2. After obtaining the image from Docker Hub, upload it into your private container repository and ensure that the Kubernetes worker nodes can access the image in the repository.

    Oracle recommends that you download and save the container image to the private Docker repository even if the worker nodes can access Docker Hub directly. The images in the OSM cloud native toolkit are available only through your private Docker repository.

  3. Run the following commands:

    kubectl apply -f https://k8s.io/examples/application/deployment.yaml # the deployment specifies two replicas
    kubectl get pods     # Must return two pods in the Running state
    kubectl expose deployment nginx-deployment --type=NodePort --name=external-nginx
    kubectl get service external-nginx    # Make a note of the external port for nginx

    These commands must run successfully and return information about the pods and the port for nginx.

  4. Open the following URL in a browser:

    http://master_IP:port/
    where:
    • master_IP is the IP address of the master node of the Kubernetes cluster or the external IP address for which routing has been set up
    • port is the external port for the external-nginx service
  5. To track which pod is responding, on each pod, modify the text message in the web page served by nginx. In the following example, this is done for a deployment of two pods:
    $ kubectl get pods -o wide | grep nginx
    nginx-deployment-5c689d88bb-g7zvh   1/1     Running   0          1d     10.244.0.149   worker1   <none>
    nginx-deployment-5c689d88bb-r68g4   1/1     Running   0          1d     10.244.0.148   worker2   <none>
    $ cd /tmp
    $ echo "This is pod A - nginx-deployment-5c689d88bb-g7zvh - worker1" > index.html
    $ kubectl cp index.html nginx-deployment-5c689d88bb-g7zvh:/usr/share/nginx/html/index.html
    $ echo "This is pod B - nginx-deployment-5c689d88bb-r68g4 - worker2" > index.html
    $ kubectl cp index.html nginx-deployment-5c689d88bb-r68g4:/usr/share/nginx/html/index.html
    $ rm index.html
  6. Check the index.html web page to identify which pod is serving the page.

  7. Check if you can reach all the pods by running refresh (Ctrl+R) and hard refresh (Ctrl+Shift+R) on the index.html Web page.

  8. If you see the default nginx page, instead of the page with your custom message, it indicates that the pod has restarted. If a pod restarts, the custom message in the page gets deleted.

    Identify the pod that restarted and apply the custom message for that pod.

  9. Increase the pod count by patching the deployment.

    For instance, if you have three worker nodes, run the following command:

    Note:

    Adjust the number as per your cluster. You may find you have to increase the pod count to more than your worker node count until you see at least one pod on each worker node. If this is not observed in your environment even with higher pod counts, consult your Kubernetes administrator. Meanwhile, try to get as much worker node coverage as reasonably possible.
    kubectl patch deployment nginx-deployment -p '{"spec":{"replicas":3}}' --type merge
  10. For each pod that you add, repeat step 5 to step 8.

Ensuring that all the worker nodes have at least one nginx pod in the Running state ensures that all worker nodes have access to Docker Hub or to your private Docker repository.

Validating Common Building Blocks in the Kubernetes Cluster

To approach OSM cloud native in a sustainable manner, you must individually validate the common building blocks that sit on top of the basic Kubernetes infrastructure. The following sections describe how you can validate these building blocks.

Network File System (NFS)

OSM cloud native uses Kubernetes Persistent Volumes (PV) and Persistent Volume Claims (PVC) to use a pod-remote destination filesystem for OSM logs and performance data. By default, these artifacts are stored within a pod in Kubernetes and are not easily available for integration into a toolchain. For these to be available externally, the Kubernetes environment must implement a mechanism for fulfilling PV and PVC. The Network File System (NFS) is a common PV mechanism.

For the Kubernetes environment, identify an NFS server and create or export an NFS filesystem from it.

Ensure that this filesystem:
  • Has enough space for the OSM logs and performance data.

  • Is mountable on all the Kubernetes worker nodes

Create an nginx pod that mounts an NFS PV from the identified server. For details, see the documentation about "Kubernetes Persistent Volumes" on the Kubernetes website. This activity verifies the integration of NFS, PV/PVC and the Kubernetes cluster. To clean up the environment, delete the nginx pod, the PVC, and the PV.
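
The following is a minimal sketch of such a verification pod, assuming a PersistentVolumeClaim named smoke-nfs-pvc (a hypothetical name) has already been created and bound against your NFS-backed PersistentVolume:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: smoke-nfs-nginx              # hypothetical name for the verification pod
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    volumeMounts:
    - name: nfs-volume
      mountPath: /usr/share/nginx/html
  volumes:
  - name: nfs-volume
    persistentVolumeClaim:
      claimName: smoke-nfs-pvc       # hypothetical PVC bound to the NFS PV
EOF
kubectl exec smoke-nfs-nginx -- sh -c "echo hello-from-nfs > /usr/share/nginx/html/index.html"
# Confirm that index.html appears on the NFS export, then delete the pod, the PVC, and the PV.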

Ideally, data such as logs and JFR data is stored in the PV only until it can be retrieved into a monitoring toolchain such as Elastic Stack. The toolchain must delete the rolled-over log files after processing them. This helps you to predict the size of the filesystem. You must also consider factors such as the number of OSM cloud native instances that will use this space, the size of those instances, the volume of orders they will process, and the volume of logs that your cartridges generate.

Validating the Load Balancer

For a development-grade environment, you can use an in-cluster software load balancer. OSM cloud native toolkit provides documentation and samples that show you how to use Traefik to perform load balancing activities for your Kubernetes cluster.

It is not necessary to run through "Traefik Quick Start" as part of validating the environment. However, if the OSM cloud native instances have connectivity issues with HTTP/HTTPS traffic, and the OSM logs do not show any failures, it might be worthwhile to take a step back and validate Traefik separately using Traefik Quick Start.

More intensive environments, such as test, production, pre-production, or performance environments, can additionally require a more robust load balancing service to handle the HTTP/HTTPS traffic. For such environments, Oracle recommends using load balancing hardware that is set up outside the Kubernetes cluster. A few examples of external load balancers are Oracle Cloud Infrastructure LBaaS for OKE, Google's Network LB Service in GKE, and F5's Big-IP for private cloud. The actual selection and configuration of an external load balancer is outside the scope of OSM cloud native itself, but is an important component to sort out in the implementation of OSM cloud native. For more details on the requirements and options, see "Integrating OSM."

To validate the ingress controller of your choice, you can use the same nginx deployment used in the smoke test described earlier. This is valid only when run in a Kubernetes cluster where multiple worker nodes are available to take the workload.

To perform a smoke test of your ingress setup:

  1. Run the following commands:
    kubectl apply -f https://k8s.io/examples/application/deployment.yaml
    kubectl get pods -o wide    # two nginx pods in Running state; ensure these are on different worker nodes
    cat > smoke-internal-nginx-svc.yaml <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: smoke-internal-nginx
      namespace: default
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: nginx
      sessionAffinity: None
      type: ClusterIP
    EOF
    kubectl apply -f ./smoke-internal-nginx-svc.yaml
    kubectl get svc smoke-internal-nginx
  2. Create your ingress targeting the internal-nginx service. The following text shows a sample ingress annotated to work with the Traefik ingress controller:
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      annotations:
        kubernetes.io/ingress.class: traefik
      name: smoke-nginx-ingress
      namespace: default
    spec:
      rules:
      - host: smoke.nginx.osmtest.org
        http:
          paths:
          - backend:
              serviceName: smoke-internal-nginx
              servicePort: 80

    If the Traefik ingress controller is configured to monitor the default namespace, then Traefik creates a reverse proxy and the load balancer for the nginx deployment. For more details, see Traefik documentation.

    If you plan to use other ingress controllers, refer to the documentation about the corresponding controllers for information on creating the appropriate ingress and make it known to the controller. The ingress definition should be largely reusable, with ingress controller vendors describing their own annotations that should be specified, instead of the Traefik annotation used in the example.

  3. Create a local DNS/hosts entry in your client system mapping smoke.nginx.osmtest.org to the IP address of the cluster, which is typically the IP address of the Kubernetes master node, but could be configured differently.

  4. Open the following URL in a browser:

    http://smoke.nginx.osmtest.org:Traefik_Port/

    where Traefik_Port is the external port that Traefik has been configured to expose.

  5. Verify that the web address opens and displays the nginx default page.

Your ingress controller must support session stickiness for OSM cloud native. To learn how stickiness should be configured, refer to the documentation about the ingress controller you choose. For Traefik, stickiness must be set up at the service level itself. For testing purposes, you can modify the smoke-internal-nginx service to enable stickiness by running the following commands:
kubectl delete ingress smoke-nginx-ingress
vi smoke-internal-nginx-svc.yaml
# Add an annotations section under the metadata section:
#   annotations:
#     traefik.ingress.kubernetes.io/affinity: "true"
kubectl apply -f ./smoke-internal-nginx-svc.yaml
# now apply back the ingress smoke-nginx-ingress using the above yaml definition

Other ingress controllers may have different configuration requirements for session stickiness. Once you have configured your ingress controller, and the smoke-nginx-ingress and smoke-internal-nginx services as required, repeat the browser-based procedure to verify and confirm if nginx is still reachable. As you refresh (Ctrl+R) the browser, you should see the page getting served by one of the pods. Repeatedly refreshing the web page should show the same pod servicing the access request.

To further test session stickiness, you can either do a hard refresh (Ctrl+Shift+R) or restart your browser (you may have to use the browser in Incognito or Private mode), or clear your browser cache for the access hostname for your Kubernetes cluster. You may observe that the same nginx pod or a different pod is servicing the request. Refreshing the page repeatedly should stick with the same pod while hard refreshes should switch to the other pod occasionally. As the deployment has two pods, chances of a switch with a hard refresh are 50%. You can modify the deployment to increase the number of replica nginx pods (controlled by the replicas parameter under spec) to increase the odds of a switch. For example, with four nginx pods in the deployment, the odds of a switch with hard refresh rise to 75%. Before testing with the new pods, run the commands for identifying the pods to add unique identification to the new pods. See the procedure in "Performing a Smoke Test" for the commands.

To clean up the environment after the test, delete the following services and the deployment:

  • smoke-nginx-ingress
  • smoke-internal-nginx
  • nginx-deployment

Running Oracle WebLogic Kubernetes Operator Quickstart

Oracle recommends that you validate your new Kubernetes environment for OSM cloud native by performing the procedures described in Oracle WebLogic Kubernetes Operator Quickstart available at: https://oracle.github.io/weblogic-kubernetes-operator/quickstart/

The quickstart guide provides instructions for creating a WebLogic deployment in a Kubernetes cluster with the Oracle WebLogic Kubernetes Operator. The guide also provides instructions for downloading and installing a load balancer and a domain. Follow the instructions provided above for Helm 3.x.

When you run and complete the tasks in the quickstart successfully, the following aspects of the cloud environment are tested and verified:

  • Private Docker repository (or procedures to sync per-node Docker cache on a multi-node Kubernetes cluster)

  • Initial view of the chosen in-cluster load balancers

  • RBAC for WebLogic Kubernetes Operator

  • Procedure to introduce secrets into the cloud environment

  • Basic compatibility of the cloud environment with WebLogic Kubernetes Operator

The quickstart also contains instructions for cleaning up the environment after you finish the validation and testing. Perform these clean-up procedures to return the environment to the original state for OSM cloud native.

After completing the clean-up procedures, ensure that the WebLogic Kubernetes Operator CustomResourceDefinition (CRD) is removed from your cluster by running the following commands:
$ kubectl get crd domains.weblogic.oracle
# if this returns an existing CRD even after WKO quickstart cleanup, then run:
$ kubectl delete crd domains.weblogic.oracle