4 Creating a Basic OSM Cloud Native Instance

This chapter describes how to create a basic OSM cloud native instance in your cloud environment using the operational scripts and the base OSM configuration provided in the OSM cloud native toolkit. You can create an OSM instance quickly in order to become familiar with the process, explore the configuration, and structure your own project. This procedure is intended to validate that you are able to create a basic OSM instance in your environment. For information on creating your own project with custom configuration, see "Creating Your Own OSM Cloud Native Instance".

Before you can create an OSM instance, you must do the following:

  • Download and extract the OSM cloud native toolkit archive file
  • Clone the WebLogic Kubernetes Operator (WKO) Git repository. When a Kubernetes cluster is shared by multiple hosts, perform this on each host that installs and runs the toolkit scripts.
  • Install the WKO and Traefik container images. Perform these tasks once for each cluster that has shared resources.

Installing the OSM Cloud Native Artifacts and the Toolkit

Build container images for the following using the OSM cloud native Image Builder:

  • OSM core application
  • OSM database installer

You must create a private Docker repository for these images, ensuring that all nodes in the cluster have access to the repository. See "About Container Image Management" for more details.

Download the OSM cloud native toolkit archive and do the following:

  • On Oracle Linux: Where Kubernetes is hosted on Oracle Linux, download and extract the tar archive to each host that has connectivity to the Kubernetes cluster.
  • On OKE: For an environment where Kubernetes is running in OKE, extract the contents of the tar archive on each OKE client host. The OKE client host is the bastion host (or hosts) set up to communicate with the OKE cluster.

Set the variable for the installation directory by running the following command, where osm_cntk_path is the installation directory of the OSM cloud native toolkit:
$ export OSM_CNTK=osm_cntk_path
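For example, assuming the downloaded archive is named osm-cntk.tar.gz and you extract it under your home directory (both the file name and the path are illustrative):
$ mkdir -p $HOME/osm-cntk
$ tar -xf osm-cntk.tar.gz -C $HOME/osm-cntk
$ export OSM_CNTK=$HOME/osm-cntk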

Cloning the WebLogic Kubernetes Operator (WKO) Git Repository

Some scripts in the OSM cloud native toolkit require access to the WebLogic Kubernetes Operator Helm chart. Ensure that the repository is cloned on each host that will use the toolkit scripts; in environments where multiple hosts have access to the Kubernetes cluster, clone it on each of those hosts.

To clone the repository, run the following commands:
$ cd path_to_wlsko_repository
$ git clone https://github.com/oracle/weblogic-kubernetes-operator.git
$ cd weblogic-kubernetes-operator
$ git checkout tags/v3.1.0
# This is the tag of v3.1.0 GA

For details on WKO 3.1.0, see https://github.com/oracle/weblogic-kubernetes-operator/releases/tag/v3.1.0.

After cloning the repository, set the WLSKO_HOME environment variable to the location of the WKO git repository, by running the following command:
$ export WLSKO_HOME=path-to-wlsko-repo/weblogic-kubernetes-operator

Note:

Developers can add the export command to ~/.bashrc or ~/.profile so that it is always set.
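For example, to append the export to ~/.bashrc (the repository path is illustrative):
echo 'export WLSKO_HOME=$HOME/git/weblogic-kubernetes-operator' >> ~/.bashrc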

Installing WebLogic Kubernetes Operator (WKO) and Traefik Container Images

In a shared environment, multiple developers may create OSM instances in the same cluster, using a shared WebLogic Kubernetes Operator.

For each cluster in your environment, you download and install the following:

  • WebLogic Kubernetes Operator (WKO) container image
  • Traefik container image

Note:

These installations must be coordinated on large teams so that they occur in a controlled manner.

Before installing the WKO image and the Traefik image, do the following tasks:
  • Remove the instances of the WKO and Traefik that you installed to validate your cloud environment.
  • Ensure that you have cleaned up the environment. See "Validating Your Cloud Environment" for instructions on cleaning up.
  • Ensure that there are no WebLogic Server Operator artifacts in the environment.

Installing the WebLogic Kubernetes Operator Container Image

To download and install the WKO container image:
  1. Ensure that Docker in your Kubernetes cluster can pull images from the GitHub Container Registry (ghcr.io), where the WKO container image is hosted: ghcr.io/oracle/weblogic-kubernetes-operator:3.1.0.
  2. Choose a namespace for the operator and set the WLSKO_NS environment variable to the Kubernetes namespace in which WKO will be deployed.
  3. Run the following installation script to install the image:
    $OSM_CNTK/scripts/install-operator.sh -n $WLSKO_NS
  4. Validate that the operator is installed by running the following command:
    kubectl get pods -n $WLSKO_NS
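If the operator is installed, the output shows a single operator pod in the Running state, similar to the following (the pod name suffix and age are illustrative):
NAME                                 READY   STATUS    RESTARTS   AGE
weblogic-operator-7c5b7f8b9d-x2k4q   1/1     Running   0          2m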

Installing the Traefik Container Image

To leverage the OSM cloud native samples that integrate with Traefik, the Kubernetes environment must have the Traefik ingress controller installed and configured.

If you are working in an environment where the Kubernetes cluster is shared, confirm whether Traefik has already been installed and configured for OSM cloud native. If Traefik is already installed and configured, set your TRAEFIK_NS environment variable to the appropriate namespace.

The instance of Traefik that you installed to validate your cloud environment must be removed as it does not leverage the OSM cloud native samples. Ensure that you have removed this installation in addition to purging the Helm release. Check that any roles and rolebindings created by Traefik are removed. There could be a clusterrole and clusterrolebinding called "traefik-operator". There could also be a role and rolebinding called "traefik-operator" in the $TRAEFIK_NS namespace. Delete all of these before you set up Traefik.
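The following is a sketch of how you might check for and delete these leftover artifacts, assuming the names described above:
kubectl get clusterrole,clusterrolebinding traefik-operator
kubectl get role,rolebinding -n $TRAEFIK_NS traefik-operator
kubectl delete clusterrole traefik-operator
kubectl delete clusterrolebinding traefik-operator
kubectl delete role,rolebinding -n $TRAEFIK_NS traefik-operator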

To download and install the Traefik container image:

  1. Ensure that Docker in your Kubernetes cluster can pull images from Docker Hub. See OSM Compatibility Matrix for the required and supported versions of the Traefik image.
  2. Run the following commands to create a namespace for Traefik, after confirming that it does not already exist:

    Note:

    You might want to add the TRAEFIK_NS export to your environment setup, such as ~/.bashrc.
    kubectl get namespaces
    export TRAEFIK_NS=traefik
    kubectl create namespace $TRAEFIK_NS
  3. Run the following commands to install Traefik using the $OSM_CNTK/samples/charts/traefik/values.yaml file in the samples:

    Note:

    Set kubernetes.namespaces and the chart version explicitly on the command line.
    helm repo add traefik https://helm.traefik.io/traefik
    helm install traefik-operator traefik/traefik \
     --namespace $TRAEFIK_NS \
     --version 9.11.0 \
     --values $OSM_CNTK/samples/charts/traefik/values.yaml \
     --set "kubernetes.namespaces={$TRAEFIK_NS}"

After the installation, Traefik monitors the namespaces listed in its kubernetes.namespaces field for Ingress objects. The scripts in the toolkit manage this namespace list as part of creating and tearing down OSM cloud native projects.

When the values.yaml Traefik sample in the OSM cloud native toolkit is used as is, Traefik is exposed to the network outside of the Kubernetes cluster through port 30305. To use a different port, edit the YAML file before installing Traefik. Traefik metrics are also available for Prometheus to scrape from the standard annotations.

You can view Traefik functions using the Traefik dashboard. Create the dashboard by following the instructions provided in the $OSM_CNTK/samples/charts/traefik/traefik-dashboard.yaml file. If you use the values.yaml file provided with the OSM cloud native toolkit, the dashboard is available at http://traefik.osm.org. You can change the hostname and the port to the values you want.
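The following is a minimal sketch of creating the dashboard, assuming the file is a ready-to-apply Kubernetes manifest; review the instructions inside the file before applying it:
kubectl apply -n $TRAEFIK_NS -f $OSM_CNTK/samples/charts/traefik/traefik-dashboard.yaml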

Creating a Basic OSM Instance

This section describes how to create a basic OSM instance.

Setting Environment Variables

OSM cloud native relies on access to certain environment variables to run seamlessly. Ensure the following variables are set in your environment:
  • Path to your private specification repository
  • Location of the WebLogic Server Kubernetes Operator (WLSKO) Git repository
  • Traefik namespace

To set the environment variables:

  1. Create a directory that serves as your specification repository, by running the following command, where spec_repo_path is the path to your private specification repository:

    Note:

    The scripts in the toolkit support multiple directories being supplied to the -s parameter in a colon-separated list (path/one:path/two:path/three). For simplicity, this procedure works with a single directory.
    $ export SPEC_PATH=spec_repo_path/quickstart
  2. Set the WLSKO_HOME environment variable to the location of the WLSKO git repository that you cloned:
    $ export WLSKO_HOME=~/git/weblogic-kubernetes-operator 
  3. Set the TRAEFIK_NS variable for Traefik namespace.
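For example, with illustrative paths and the Traefik namespace used earlier in this chapter:
$ export SPEC_PATH=$HOME/osm-specs/quickstart
$ mkdir -p $SPEC_PATH
$ export WLSKO_HOME=$HOME/git/weblogic-kubernetes-operator
$ export TRAEFIK_NS=traefik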

Registering the Namespace

After you set the environment variables, register the namespace.

To register the namespace, run the following command:
$OSM_CNTK/scripts/register-namespace.sh -p sr -t targets
# For example, $OSM_CNTK/scripts/register-namespace.sh -p sr -t wlsko,traefik
# Where the targets are separated by a comma without extra spaces

Note:

wlsko and traefik are the names of the targets for registration of the namespace. The script uses WLSKO_NS and TRAEFIK_NS to find these targets. Do not provide the "traefik" target if you are not using Traefik.

Creating Secrets

You must store sensitive data and credential information in the form of Kubernetes Secrets, which the scripts and Helm charts in the toolkit consume. Managing secrets is out of scope for the toolkit and must be implemented in accordance with your organization's corporate policies. Additionally, OSM cloud native does not establish password policies.

Note:

The passwords and other input data, such as the RCU schema prefix length, that you provide must adhere to the policies specified by the appropriate component.

As a pre-requisite to using the toolkit for either installing the OSM database or creating an OSM instance, you must create secrets for access to the following:
  • OSM database
  • OSM system users
  • RCU DB
  • OPSS
  • Operator artifacts for the instance
  • WebLogic Server Admin credentials used when creating the domain

The toolkit provides sample scripts for this purpose. However, they are not pipeline-friendly. The scripts should be used for creating an instance manually and quickly, but not for any automated process for creating instances. The scripts also illustrate both the naming of the secret and the layout of the data within the secret that OSM cloud native requires. You must create secrets prior to running the install-osmdb.sh or create-instance.sh scripts.

Run the following script to create the required secrets:
$OSM_CNTK/scripts/manage-instance-credentials.sh -p sr -i quick \ 
 create \ 
 osmdb,rcudb,wlsadmin,osmldap,opssWP,wlsRTE

where:

  • osmdb specifies the connectivity details and the credentials for connecting to the OSM PDB (OSM schema). This is consumed by the OSM DB installer and OSM runtime.

    Note:

    The osmdb secrets contain PDB sysdba user, osm main schema user, osm rule engine schema user, and osm report schema user. The names of these must be unique.
  • osmldap is the credential for OSM system admin and internal users. The script prompts for passwords for the following users:
    • OSM admin user (username is omsadmin)
    • Design Studio admin user (username is sceadmin)
    • OSM internal user (username is oms-internal)
    • OSM automation user (username is oms-automation)
  • rcudb specifies the connectivity details and the credentials for connecting to the OSM PDB (RCU schema). This is consumed by the OSM database installer and the OSM and Fusion Middleware runtime.
  • wlsadmin is the credential for the intended user that will be created with administrative access to the WebLogic domain.
  • opssWP is the password for encrypting and decrypting the ewallet contents.
  • wlsRTE is the password used to encrypt the operator artifacts for this instance. The merged domain model and the domain ZIP are available in the operator config map and are encoded using this password.

Verify that the following secrets are created:
sr-quick-database-credentials
sr-quick-embedded-ldap-credentials
sr-quick-weblogic-credentials
sr-quick-rcudb-credentials
sr-quick-opss-wallet-password-secret
sr-quick-runtime-encryption-secret
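You can list the secrets in the project namespace to confirm that they exist; for example:
kubectl get secrets -n sr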
Additionally, the secret opssWF is created by the installation process and does not follow the same guidelines. It is therefore not a pre-requisite for creating a new instance. In scenarios where a database is being re-used for a different OSM instance, this becomes a pre-requisite secret. For more details, see "Reusing the Database State".

Assembling the Specifications

To assemble the specifications:
  1. Copy the instance specification to your $SPEC_PATH and rename it:
    cp $OSM_CNTK/samples/instance.yaml $SPEC_PATH/sr-quick.yaml
  2. Copy the project specification to your $SPEC_PATH and rename it:
    cp $OSM_CNTK/samples/project.yaml $SPEC_PATH/sr.yaml
    You edit these files according to the instructions in the sections that follow.

Installing the OSM and RCU Schemas

This procedure configures an empty PDB. Depending on the database strategy for your team, you may have already performed this procedure as described in "Planning Your Cloud Native Environment". Before continuing, confirm whether the PDB being used for creating the OSM instance has been cloned from a master PDB that includes the schema installation. If the PDB already has the schema installed, skip this procedure and proceed to the Creating OSM Users and Groups topic.

After the PDB is created, it is configured with the OSM schema, the RCU schema, and the cluster leasing table.

Note:

Before installing the OSM and RCU schemas, stop or interrupt the automatic optimizer statistics collection maintenance task. For more details, see the New OSM Database Optimizer Statistics Management knowledge article (Doc ID 1925539.1) on My Oracle Support.
To install the OSM and RCU schemas:

Note:

YAML formatting is case-sensitive. While the next step uses the vi editor for editing, if you are not familiar with editing YAML files, use a YAML editor to ensure that you do not make any syntax errors while editing. Follow the indentation guidelines for YAML, as incorrect spacing can lead to errors.
  1. Edit the project specification file and update the DB installer image to point to the location of your image as shown below:

    Note:

    Before changing the default values provided in the specification file, confirm that they align with the values used during PDB creation. For example, the default tablespace name should match the value used when PDB is created.
    dbinstaller:
      image:  DB_installer_image_in_your_repo:<tag>
  2. If your environment requires a password to download the container images from your repository, create a Kubernetes secret with the Docker pull credentials. See the Kubernetes documentation for details. Reference the secret name in the project specification.
    # The image pull access credentials for the "docker login" into Docker repository, as a Kubernetes secret.
    # Uncomment and set if required.
    # imagePullSecret: ""
  3. Set the partition size to match the actual tablespace size that was created. The default value for production sizing is 20000000 (20 million) and for development is 2000000 (2 million); these may need to be overridden for this instance. See the OSM System Administrator's Guide for guidelines on partition and tablespace sizing. If required, update defaultPartitionSize in the development shape in $OSM_CNTK/charts/osm/shapes/dev.yaml. The defaultPartitionSize parameter also impacts how defaultSubPartitionCount is calculated; calculate OSM_SUBPARTITION_COUNT from OSM_PARTITION_SIZE as shown in Table 4-1.

    Table 4-1 Calculating Sub-partitions

    defaultPartitionSize     Calculated Sub-partitions
    <= 2M                    16
    > 2M and <= 10M          32
    > 10M                    64
  4. Run the following script to start the OSM DB installer, which instantiates a Kubernetes Pod resource. The pod resource lives until the DB installation operation completes.
    #(OSM Schema)
    $OSM_CNTK/scripts/install-osmdb.sh -p sr -i quick -s $SPEC_PATH -c 1 
     ## once finished
    # (RCU Schema)
    $OSM_CNTK/scripts/install-osmdb.sh -p sr -i quick -s $SPEC_PATH -c 7 

    You can invoke the script with -h to see the available options.

  5. Check the console output to see whether the DB installer completed successfully.
  6. If the installation failed, run the following command to review the error message in the log:
    kubectl logs -n sr sr-quick-dbinstaller-osm-dbinstaller
  7. Clean up the failed pod by running the following command:
    helm uninstall sr-quick-dbinstaller -n sr
  8. Go back to step 4 and run the script again to install the OSM DB installer.
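While the DB installer pod from step 4 is running, you can monitor it and follow its log (the pod name matches the one shown in step 6); for example:
kubectl get pods -n sr
kubectl logs -n sr sr-quick-dbinstaller-osm-dbinstaller --follow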

The following table lists the basic database parameters that are handled by the DB Installer:

Table 4-2 Database Parameters Handled by the DB Installer

Parameter                        Value
cursor_sharing                   FORCE
parallel_degree_policy           AUTO
deferred_segment_creation        TRUE by default. The DB Installer specification can override this to FALSE for production environments.
open_cursors                     2000
optimizer_mode                   ALL_ROWS
_optimizer_invalidation_period   600

OSM DB Installer Activities

The OSM DB Installer performs the following activities during OSM schema creation:

  • Automatic Optimizer Statistics Collection Maintenance Task: The OSM DB Installer disables this task during the creation of OSM schema. This avoids race conditions when copying partition statistics as part of the OSM schema installation. This maintenance task is re-enabled after the partition statistics are copied. This is handled as part of the OSM schema installation.

  • Statistics gathering schedule: The OSM DB Installer modifies the default statistics gathering schedule so that the weekend schedule is the same as the weekday schedule. By default, weekday maintenance windows start at 10 PM and are 4 hours long. The Saturday and Sunday maintenance windows are 20 hours long and start at 6 AM; this impacts order processing performance during peak weekend hours.

    See the following topics in Oracle Database Administrator's Guide for more details:
    • Predefined Maintenance Windows
    • Configuring Automated Maintenance Tasks

Configuring the Project Specification

This section provides instructions for creating a project that is configured to support the processing of the SimpleRabbits sample cartridge that is provided with the toolkit. This sample cartridge validates that OSM processes orders successfully. The project specification is a Helm override file that contains values that are scoped to a project. The values specified in the specification are shared by all the instances of a project, unless they are overridden in an instance specification. Review the content about Helm chart layering in "Overview of the OSM Cloud Native Deployment".

The toolkit provides a sample project specification by the name sr that you can use with minor adjustments.

To configure the project specification:
  1. Edit the project specification to provide the image in your repository (name and tag) by running the following command:
    vi $SPEC_PATH/sr.yaml
     
    **  edit the image to reflect the OSM image name and location in your docker repository
     
    image: osm_image_in_your_repository
  2. The test cartridge requires JMS Queue configuration, which is provided with the toolkit. Copy the JMS Queue configuration from the location shown below into the project specification.
    vi $OSM_CNTK/samples/simpleRabbits/project_fragment.yaml

     ** Copy the queue content
      vi $SPEC_PATH/sr.yaml
      * find the existing placeholder for the queues and paste the content
    The following text is an example of JMS Queue configuration:
    # jms distributed queues
    uniformDistributedQueues:
      - name: new_jms_queue_1
        jndiName: oracle.communication.ordermanagement.ppt.loopbackA
        jmsTemplate: defaultJmsTemplate

    ## In the specification file, the first line is left-aligned with no leading spaces. Each subsequent indent is 2 spaces from the last.
  3. If your environment requires a password to download the container images from your repository, create a Kubernetes secret with the Docker pull credentials. See the Kubernetes documentation for details. Reference the secret name in the project specification.
    # The image pull access credentials for the "docker login" into Docker repository, as a Kubernetes secret.
    # uncomment and set if required.
    #imagePullSecret: ""
  4. For your DNS resolution mechanism, change the default load balancer domain name as needed:
    loadBalancerDomainName: "osm.org"

Tuning the Project Specification

This section provides instructions for tuning the project specification. The values specified in the specification are shared by all the instances of a project, unless they are overridden in an instance specification.

Do the following to tune the project specification:

  • To configure the maximum number of bytes allowed in messages that are received over all WebLogic protocols, set the following parameter:
    wlsMaxMsgSize: value_in_bytes

    For OSM cloud native, the default value is 300000000 bytes, which is much higher than the default value of 10000000 bytes in WebLogic. The low default value in WebLogic can cause errors when this limit is reached.

  • To configure the tablespace name for OSM model and order tables and indexes, see the following parameters:
    db:
      modelDataTablespace: string
      modelIndexTablespace: string
      orderDataTablespace: string
      orderIndexTablespace: string
    For each parameter, the default value is OSM.
  • To configure the partition size for OSM order tables, see the following parameter:
    defaultPartitionSize: integer
    The default is 2,000,000 (2 million). Production shapes define a larger value of 20,000,000 (20 million), which is a better choice when combined with online purging.
  • To configure the sub-partition count for partitioned OSM order tables, see the following parameter:
    defaultSubPartitionCount: integer
    The default value is undefined. Typical values are 16, 32 and 64. Leave this parameter undefined to allow the OSM cloud native database installer to choose a value appropriate for the partition size. For example, for a large 20 million partition, the installer will choose a value of 64 so as to minimize database contention.
  • To configure whether database segment creation should be deferred, see the following parameter:
    deferredSegmentCreation: "TRUE" or "FALSE"
    The default value is TRUE. To minimize database contention, this should be set to FALSE for production systems.
  • To configure OSM and infrastructure data source connection pool parameters, see the parameters under the jdbc element. For example, the maximum database connection pool capacity for the OSM application data sources and for the infrastructure data sources (which support JMS and tlog JDBC stores) can be set with:
    jdbc:
      app:
        maxCapacity: integer
      infra:
        maxCapacity: integer
    For more details on connection pool parameters, see Oracle Fusion Middleware Administration Console Online Help for Oracle WebLogic Server 12.2.1.4.0. Also refer to the production and development shapes for the full list of supported parameters and default values.
  • To configure the message buffer cache size for individual JMS servers, see the following parameter:
    jmsMsgBufferSize: value_in_bytes
    The default value is approximately one-third of the maximum JVM heap size, or a maximum of 512 megabytes (536,870,912 bytes). For production environments, the recommended value is 1 gigabyte (1,073,741,824 bytes) to reduce the possibility that WebLogic will start paging JMS message bodies to disk once the buffer is full.
  • To configure whether database optimizer statistics should be loaded when creating OSM order table partitions, see the following parameter:
    loadPartitionStatistics: false
    The default value is false. This should be set to true for production systems.
  • To configure logging options, see the following parameter:
    logging_options: string
    Refer to the production and development shapes for more details and the default values. The following is an example:
    logging_options: " -Dweblogic.log.FileMinSize=5000 -Dweblogic.log.FileCount=10 -Dweblogic.log.RotateLogOnStartup=false "
  • To configure JVM parameters for the admin server or for managed servers, see the following parameter:
    user_mem_args: string
    Refer to the production and development shapes for sample values. The following is an example from the prodlarge shape:
    managedServers:
      shape:
        user_mem_args: "-XX:+UseG1GC -XX:G1HeapRegionSize=16m -XX:+ClassUnloadingWithConcurrentMark -XX:+UseStringDeduplication -XX:SurvivorRatio=3 -XX:CodeCacheMinimumFreeSpace=16m -XX:ReservedCodeCacheSize=512m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+PrintAdaptiveSizePolicy -Xloggc:/u01/oracle/user_projects/domains/domain/gc.log -XX:+DisableExplicitGC -XX:+ParallelRefProcEnabled -XX:+AlwaysPreTouch -Xms64g -Xmx64g -Xmn22g -XX:InitiatingHeapOccupancyPercent=50 -XX:ParallelGCThreads=13 "
    For more details, see the OSM Memory Tuning Guidelines (Doc ID: 2028249.1) knowledge article on My Oracle Support.

Configuring the Instance Specification

The instance specification is a Helm override file that contains values that are specific to a single instance. These values feed into the WDT model developed for the OSM WebLogic domain.

To configure the instance specification:
  1. Edit the sr-quick.yaml file to specify the database details:
    db:
      datasourcesPrimary:
        port: 1521
        # If not using RAC, provide the DB server hostname/IP address
        # If using RAC, comment out "#host:"
        # host: dbserver-ip
        #
        # If using RAC, provide the list of SCAN hostname/IP addresses
        # If not using RAC, comment out "#scans:"
        #scans:
        # - scan1-ip
        # - scan2-ip
        #
        # If using RAC, provide either a list of VIP hostname/IP addresses
        # or a list of INSTANCE_NAMES
        # If not using RAC, comment out "#vips:" and "#instances:"
        #
        #vips:
        # - vip1-ip
        # - vip2-ip
        # --- OR ---
        #instances:
        # - instance-1
        # - instance-2
  2. Assuming that oci-lb-service-traefik is the service created as part of the Oracle Cloud Infrastructure Load Balancer setup, run the following command to find the IP address of the Oracle Cloud Infrastructure LBaaS:
    kubectl get svc -n traefik oci-lb-service-traefik --output=jsonpath="{..status.loadBalancer.ingress[0].ip}"
  3. Because an external load balancer is not required to be configured for the basic OSM instance, change the value of loadBalancerPort to the default Traefik NodePort of 30305 if you are not using Oracle Cloud Infrastructure LBaaS:
    loadBalancerPort: 30305
    If you use Oracle Cloud Infrastructure LBaaS, or any other external load balancer, set loadBalancerPort to 80, and uncomment and update the value for externalLoadBalancerIP appropriately:
    loadBalancerPort: load_balancer_port
    #externalLoadBalancerIP: IP_address_of_the_external_load_balancer
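If you are not sure which NodePort the Traefik service exposes, you can inspect the services in the Traefik namespace and note the node port mapped to port 80; for example:
kubectl get svc -n $TRAEFIK_NS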

Creating an Ingress

An ingress establishes connectivity to the OSM instances.

To create an Ingress, run the following command:
$OSM_CNTK/scripts/create-ingress.sh -p sr -i quick -s $SPEC_PATH
Project Namespace : sr
Instance Fullname : sr-quick
LB_HOST           : quick.sr.osm.org
Ingress Controller: TRAEFIK
External LB IP    : 192.0.0.8
 
NAME: sr-quick-ingress
LAST DEPLOYED: Wed Jul  1 10:20:27 2020
NAMESPACE: sr
STATUS: deployed
REVISION: 1
TEST SUITE: None
 
Ingress created successfully...
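You can confirm that the ingress exists in the project namespace; for example:
kubectl get ingress -n sr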

Creating an OSM Instance

This procedure describes how to create an OSM instance in your environment using the scripts that are provided with the toolkit.

To create an OSM instance:
  1. Run the following command:
    $OSM_CNTK/scripts/create-instance.sh -p sr -i quick -s $SPEC_PATH

    The create-instance.sh script uses the Helm chart located in the charts/osm directory to create and deploy the domain custom resource and the domain config map for your instance. If the script fails, see the "Troubleshooting Issues with the Scripts" section at the end of this chapter before you make additional attempts.

    The instance creation process creates the opssWF secret, which is required for access to the RCU DB. It is possible to handle the wallet manually if needed. To do so, pass -w to the create-instance.sh script, which creates the wallet file at a location you choose. You can then use this wallet file to create a secret by using the manage instance credentials script.

  2. Validate the important input details such as Image name and tag, specification files used (Values Applied), hostname, and port for ingress routing:
    $OSM_CNTK/scripts/create-instance.sh -p sr -i quick -s $SPEC_PATH
    
    Calling helm lint
    ==> Linting ./scripts/../charts/osm
    [INFO] Chart.yaml: icon is recommended
     
    1 chart(s) linted, 0 chart(s) failed
    Project Namespace : sr
    Instance Fullname : sr-quick
    LB_HOST           : quick.sr.osm.org
    LB_PORT           : 30305
    Image             : osm:7.4.1.200504-0655-b1735.a0f9526f
    Shape             : dev
    Values Applied    : -f ./scripts/../charts/osm/values.yaml -f ./scripts/../charts/osm/shapes/dev.yaml   -f /home/oracle/SmokeTest/repo/sr.yaml -f /home/oracle/SmokeTest/repo/sr-quick.yaml
    Output wallet     : n/a
    After the script finishes executing, the log shows the following:
    NAME             READY   STATUS              RESTARTS   AGE
    sr-quick-admin   1/1     Running             0          2m12s
    sr-quick-ms1     0/1     ContainerCreating   0          1s
     
    Provide opss wallet File for 'sr-quick' ...
    For example : '/path-to-osm-cntk/sr-quick.ewallet'
    opss wallet File:
    secret/sr-quick-opss-walletfile-secret created
     
    Instance 'sr/sr-quick' admin server is now running.
    Creation of instance 'sr/sr-quick' has completed successfully.

    The create-instance.sh script also provides some useful commands and configuration to inspect the instance and access it for use.

  3. If you query the status of the pods, the READY state of the managed servers may display 0/1 for several minutes while the OSM application is starting.

    When the READY state shows 1/1, your OSM instance is up and running. You can then validate the instance by deploying a sample cartridge and submitting orders.
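    To monitor this progress, you can watch the pods in the project namespace from a separate terminal window; for example:
    kubectl get pods -n sr --watch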

The base hostname required to access this instance using HTTP is quick.sr.osm.org. See "Planning and Validating Your Cloud Environment" for details about hostname resolution.

The create-instance script prints out the following valuable information that you can use while working with your OSM domain:

  • The T3 URL: http://t3.quick.sr.osm.org. This is required for external client applications such as JMS and WLST.
  • The URL for access to the WebLogic UI, which is provided through the ingress controller at host: http://admin.quick.sr.osm.org:30305/console.
  • The URL for access to the OSM UIs, which is provided through the ingress controller and requires the host to be specified as: http://quick.sr.osm.org:30305/OrderManagement/Login.jsp.
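In a sandbox environment without DNS entries for these hostnames, one common approach is to map them in your /etc/hosts file to the IP address where Traefik is reachable (a Kubernetes worker node or the external load balancer). The IP address below is illustrative:
192.0.0.8   quick.sr.osm.org admin.quick.sr.osm.org t3.quick.sr.osm.org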

Validating the OSM Instance

After creating an instance, you can validate it by checking the domain configuration and the client UIs.

Run the following command to display the domain configuration details of the OSM instance that you have created:
kubectl describe domain sr-quick -n sr

The command displays the domain configuration information.

To verify the client UIs:
  • Log into the WebLogic console using the URL specified in the output of the create-instance script: http://admin.quick.sr.osm.org:30305/console

    You can use the console to verify the configuration that has been applied and to see that the OSM application is in a good state.

  • Log into the OSM Task Web client user interface with the OSM administrator login credentials created as part of "Creating Secrets" using the URL (http://quick.sr.osm.org:30305/OrderManagement/Login.jsp) specified in the output of the create-instance script.

Note:

After an OSM instance is created, it may take a few minutes for the OSM user interface to become active.

Scaling the OSM Application Cluster

Now that your OSM instance is up and running, you can explore the ability to dynamically scale the application cluster.

To scale the OSM application cluster, edit the configuration:

  1. In the instance specification, change the value for clusterSize manually. This change would ultimately be performed by an automated CI/CD pipeline.
    vi $SPEC_PATH/sr-quick.yaml
     
    # Change the cluster size to a value not larger than 18
     
     #cluster size
    clusterSize: 2

    Note:

    You can watch the Kubernetes pods in your namespace shrink or grow in real time. To do so, run the following command in a separate terminal window:
    kubectl get pods -n sr --watch
  2. Upgrade the deployed Helm release:
    $OSM_CNTK/scripts/upgrade-instance.sh -p sr -i quick -s $SPEC_PATH 
    This pushes the new configuration to the deployed Helm release so the operator can take the necessary steps.

The WebLogic operator monitors changes to clusterSize and spins up or tears down managed servers to align with the requested cluster size.

Deploying the Sample Cartridge

By deploying the sample cartridge that is provided with the toolkit, you can validate order processing in the OSM instance that you created.

Before deploying the cartridge, you must bring down the running domain. You can do this by scaling the cluster size down to 0.

To deploy the sample cartridge:
  1. Scale down the cluster:
    1. Reduce the cluster size in the configuration:
      vi $SPEC_PATH/sr-quick.yaml
       
      # Change the cluster size to 0
       
      #cluster size
      clusterSize: 0
    2. Push the configuration to the runtime environment:
      $OSM_CNTK/scripts/upgrade-instance.sh -p sr -i quick -s $SPEC_PATH

    The operator terminates the managed servers.

  2. Deploy the SimpleRabbits sample cartridge by running the following command:
    ./scripts/manage-cartridges.sh \
     -p sr -i quick -s $SPEC_PATH \
     -f $OSM_CNTK/samples/simpleRabbits/SimpleRabbits.par -c parDeploy
  3. Scale up the cluster:
    1. Increase the cluster size in the configuration:
      vi $SPEC_PATH/sr-quick.yaml
       
      # Change the cluster size to 1
       
      #cluster size
      clusterSize: 1
    2. Push the configuration to the runtime environment:
      $OSM_CNTK/scripts/upgrade-instance.sh -p sr -i quick -s $SPEC_PATH

    The operator starts a managed server to align with the new cluster size.

Submitting Orders

The OSM cloud native toolkit provides a sample order that you can submit to validate order processing in OSM. The sample order is available at: $OSM_CNTK/samples/simpleRabbits/sample.xml.

To submit OSM orders over HTTP, use an external client such as SoapUI. The endpoint is the same as the URL used to verify the OSM Task Web client.

When using SoapUI, a SOAP Envelope element must wrap the CreateOrderBySpecification payload that is provided in $OSM_CNTK/samples/simpleRabbits/sample.xml.

To submit OSM orders over JMS, use an external client such as Hermes JMS. The endpoint must be as follows:
jms://OSM_1::queue_oracle/communications/ordermanagement/WebServiceQueue::queue_oracle/communications/ordermanagement/SoapUIResponseQueue
The connection factory's providerURL must be as follows:
http://t3.quick.sr.osm.org:30305

Deleting and Recreating Your OSM Instance

Deleting Your OSM Instance

To delete your OSM instance, run the following command:
$OSM_CNTK/scripts/delete-instance.sh -p sr -i quick

Re-creating Your OSM Instance

When you delete an OSM instance, the database state for that instance remains unaffected. You can re-create an OSM instance with the same project and instance names, pointing to the same database.

Note:

Ensure that you use the same specifications that you used for creating the instance and that the following secrets have not been deleted:
  • osmdb
  • osmldap
  • rcudb
  • opssWF
  • opssWP
  • wlsRTE

To re-create an OSM instance, run the following command:
$OSM_CNTK/scripts/create-instance.sh -p sr -i quick -s $SPEC_PATH

Note:

After re-creating an instance, client applications such as SoapUI and HermesJMS may need to be restarted to avoid using expired cache information.

Cleaning Up the Environment

To clean up the environment:
  1. Delete the instance:
    $OSM_CNTK/scripts/delete-instance.sh -p sr -i quick
  2. Delete the ingress:
    $OSM_CNTK/scripts/delete-ingress.sh -p sr -i quick
  3. Unregister the namespace, which in turn deletes the Kubernetes namespace and the secrets:
    $OSM_CNTK/scripts/unregister-namespace.sh -p sr -d -t targets

    Note:

    wlsko and traefik are the names of the targets for registration of the namespace. The script uses WLSKO_NS and TRAEFIK_NS to find these targets. Do not provide the "traefik" target if you are not using Traefik.
  4. Drop the PDB.

Troubleshooting Issues with the Scripts

This section provides information about troubleshooting some issues that you may come across when running the scripts.

If you experience issues when running the scripts, do the following:

  • Check the operator logs to find out the details about the issue:
    kubectl get pods -n $WLSKO_NS
    # get the operator pod name to be used in the next command
    kubectl logs -n $WLSKO_NS operator_pod
  • Check the "Status" section of the domain to see if there is useful information:
    kubectl describe domain -n sr sr-quick 

"Timeout" Issue

In the logs, you may sometimes see the word "timeout" when the create-instance script fails. This typically happens when pulling the image takes a long time, such as the first time it is pulled into your cluster.

To resolve this issue, increase the podStartupDeadlineSeconds parameter, which is exposed in the instance specification. Start with a high timeout value and monitor how long startup actually takes, because this depends on how quickly the images are downloaded and how busy your cluster is. Once you have a good idea of the average time, you can reduce the timeout to a value that covers the average time plus some buffer.
# Modify the timeout value to start introspector pod. Mainly
# when using against slow DB or pulling image first time.
podStartupDeadlineSeconds: 800

After adjusting the parameter, clean up the failed instance and re-create the instance.

Cleanup Failed Instance

When a create-instance script fails, you must clean up the instance before making another attempt at instance creation.

Note:

Do not immediately re-run the create-instance or upgrade-instance script to fix errors, as they will return errors. The upgrade-instance script may appear to work, but it does not complete the operation.

To clean up the failed instance:

  1. Delete the instance:
    $OSM_CNTK/scripts/delete-instance.sh -p sr -i quick
  2. Delete and recreate the RCU schema:
    $OSM_CNTK/scripts/install-osmdb.sh -p sr -i quick -s $SPEC_PATH -c 5

Recreating an Instance

If you face issues when creating an instance, do not try to re-run the create-instance.sh script as this will fail. Instead, perform the cleanup activities and then run the following command:
$OSM_CNTK/scripts/create-instance.sh -p sr -i quick -s $SPEC_PATH

Next Steps

A basic OSM cloud native instance should now be running in your environment. This process exposed you to some of the base functionality and concepts that are new to OSM cloud native. You can continue in your sandbox environment learning about more OSM cloud native capabilities by following the learning path.

If, however, your first priority is to understand details on infrastructure setup and structuring of OSM instances for your organization, then you should follow the infrastructure path.

To follow the infrastructure path, proceed to "Planning Infrastructure".

To follow the learning path, proceed to "Creating Your Own OSM Cloud Native Instance".