6 Creating Your Own OSM Cloud Native Instance

This chapter provides information on creating your own OSM instance. While the "Creating a Basic OSM Cloud Native Instance" chapter describes how to create a basic OSM instance that can process orders for the SimpleRabbits sample cartridge provided with the OSM cloud native toolkit, this chapter describes how to create an OSM instance that is tailored to the business requirements of your organization. If you first want to understand the infrastructure setup and the structuring of OSM instances for your organization, see "Planning Infrastructure".

Before creating your own OSM instance, review the alternate and optional configuration options described in "Exploring Alternate Configuration Options".

When you created a basic instance, you used the operational scripts and the base configuration provided with the toolkit.

Creating your own instance involves activities spanning both instance management and instance configuration, including the following tasks:

  • Configuring OSM Runtime Parameters
  • Preparing Cartridges for OSM Cloud Native
  • Extending the WDT Model. See "Extending the WebLogic Server Deploy Tooling (WDT) Model".
  • Working with Kubernetes Secrets
  • Adding JMS Queues and Topics
  • Creating a JMS Template
  • Working with Cartridges
  • Provisioning Cartridge User Accounts

Configuring OSM Runtime Parameters

You can control various OSM runtime parameters using the oms-config.xml file. See "Configuring OSM with oms-config.xml" in OSM Cloud Native System Administrator's Guide for details.

This configuration is managed differently in OSM cloud native. While all the parameters described in the OSM Cloud Native System Administrator's Guide are still valid, the method of specifying them is different for a cloud native deployment.

Each of the three specification file tiers - shape, project, and instance - has a section called omsConfig. For example, the project specification has the following section:

omsConfig:
  project:
    com.mslv.oms.handler.cluster.ClusteredHandlerFactory.HighActivityOrder.CollectionCycle.Enabled: true
    oracle.communications.ordermanagement.cache.UserPerferenceCache: near

Some parameters are already laid out for you in the pre-configured shape specification files and in the sample project and instance specification files. To change a parameter from its documented default value, add the parameter and its custom value to the appropriate specification file.

For values that depend on (or contribute to) the footprint of the OSM Managed Server, the shape specification would be best. For values that are common across instances for a given project, the project specification would be best. Values that vary for each instance would be appropriate in the instance specification.

Any parameter specified in the instance specification overrides the same parameter specified in the project or shape specification. Any parameter specified in the project specification overrides the same parameter in the shape specification.

Any parameter that is not present in all three specification files (shape, project, instance) automatically has its default value as documented in OSM Cloud Native System Administrator's Guide.
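To make the layering concrete, the following sketch shows the same parameter set at two tiers. The omsConfig section key for the instance tier ("instance") is assumed here by analogy with the project example above; verify it against your sample specification files.

```yaml
# Project specification: value shared by all instances in the project
omsConfig:
  project:
    com.mslv.oms.handler.cluster.ClusteredHandlerFactory.HighActivityOrder.CollectionCycle.Enabled: true

# Instance specification (assumed key name "instance"): overrides the
# project value for this one instance only
omsConfig:
  instance:
    com.mslv.oms.handler.cluster.ClusteredHandlerFactory.HighActivityOrder.CollectionCycle.Enabled: false
```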

Note:

All pre-defined shape specifications flag their omsConfig parameters as "do not delete". These parameters must not be edited and must be copied as-is to custom shapes. See "Working with Shapes" for details about custom shapes.

Preparing Cartridges

Existing OSM cartridges that run on a traditional OSM deployment can still be used with OSM cloud native, but you prepare and deploy them differently. Instead of using multiple interfaces (the WebLogic Admin Console and WLST) to persist the WebLogic domain configuration, you add the configuration to the files that feed into the instance creation mechanism. With OSM cloud native, you use the WebLogic Admin Console only for validation purposes.

Before proceeding, you must determine which OSM solution cartridge you want to use to validate your OSM cloud native environment. For simplicity, use a setup where any communication with OSM is restricted to an application running in the same instance of the WebLogic domain.

Identify the following requirements for your cartridge:
  • The list of JMS queues and topics that the cartridge needs.
  • The list of credentials stored in the OSM Credential Store.
  • Users that the cartridge requires.
  • Applications that need to be deployed to the WebLogic server. This can include system emulators for stubbing out communication to external peer systems.

About OSM Cloud Native Cartridges and Design Studio

Existing cartridges do not always need to be rebuilt for use with OSM cloud native. As long as they were built with an OSM 7.4.0.x SDK, using the Design Studio target OSM version of 7.4.0, their existing par files can be deployed.

If cartridges have to be built afresh or re-built, use the OSM SDK packaged with OSM 7.4.1 release, and set the Design Studio target OSM version as 7.4.0. In general, use the Design Studio target OSM version that is closest to the actual OSM version but not newer than it.

About Domain Configuration Restrictions

Some restrictions on the domain configuration are necessary to keep the process simple for creating and validating your basic OSM cloud native instance. As you build confidence in the tooling and the extension mechanisms, you can remove the restrictions and include additional configuration in your specifications to support advanced features.

Ensure that you restrict the domain configuration to the following:
  • An instance with no SAF setup.
  • No redirection of logs (to live outside the pods) at this time.

Changing the Default Values

The project and instance specification templates in the toolkit contain the values used in the out-of-the-box domain configuration. These files are intended for editing: required information such as the PDB host needs updating, flags controlling optional features such as NFS need to be turned on or off, and default values such as Java options and cluster size can be changed. If the existing values need updating for your OSM instance, change them in your specification files.

Change the default values as per the following guidelines:
  • NFS: As per the restrictions, leave nfs disabled in the instance specification.
  • Shape: The provided configuration uses tuning parameters that are appropriate for a development environment. This is done through a shape specification that is named in the instance specification. Creating an instance with the default shape is recommended. For details on how you can provide a custom shape if necessary, see "Working with Shapes".

Adding New WDT Metadata

The OSM cloud native toolkit provides the base WDT metadata in $OSM_CNTK/charts/osm/templates. Because the OSM application requires this WDT metadata to function properly, you must not edit it. Instead, the toolkit provides a mechanism for including new pieces of WDT metadata in the final description of the domain.

See "Extending the WebLogic Server Deploy Tooling (WDT) Model" for complete details on the general process for providing custom WDT. The steps described must be repeated for a variety of WDT use cases.

You must specify the JMS queues required for your cartridge using the WDT metadata.

There are two options for providing the required configuration for JMS queues: adding entries to the project specification (see "Adding JMS Queues and Topics") or supplying custom WDT metadata. Handling of sensitive data from within a WDT metadata fragment is supported, as described in "Accessing Kubernetes Secrets from WDT Metadata".
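As a hedged sketch, a WDT metadata fragment declaring a custom queue might look like the following. The queue name and JNDI name are hypothetical, and the attribute names mirror those shown for the generated error queues later in this chapter; verify them against the toolkit's sample WDT fragments before use.

```yaml
# Hypothetical custom queue fragment; names are placeholders
'my_custom_queue':
  JNDIName: oracle/communications/custom/myCustomQueue
  SubDeploymentName: osm_jms_server
  ResetDeliveryCountOnForward: false
```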

Other Customizations

To support a custom OSM solution cartridge, not all changes are done using the WDT metadata. Depending on the processing needs of your OSM solution cartridge, there are other changes that are likely required:

This topic describes how to use the following methods for supporting a custom solution cartridge:
  • Credential Store
  • Custom Application EAR
  • Cartridge Users

Credential Store

For traditional installations, a solution cartridge with automation plug-ins that need to retrieve external system credentials does so by storing those credentials in the WebLogic Credential Store.

In OSM cloud native, if your cartridge uses the credential store framework of OSM, then you must provision cartridge user accounts. See "Provisioning Cartridge User Accounts" for details.

Custom Application EAR

If there are additional applications that need to be deployed to WebLogic to support the processing of your OSM solution cartridge, see "Deploying Entities to an OSM WebLogic Domain".

This method requires both WDT metadata and custom OSM images. Supplemental scripts and WDT fragments are provided as samples in $OSM_CNTK/samples/customExtensions.

Cartridge Users

Cartridges may also define users who need access to OSM APIs. These user credentials need to be available in the right locations as described in "Provisioning Cartridge User Accounts". These credentials must then be made available through the configuration to OSM.

Working with Kubernetes Secrets

Secrets are Kubernetes objects that you must create in the cluster through a separate process that adheres to your corporate policies around managing secure data. Secrets are then made available to OSM cloud native by declaring them in your configuration.

When you do not use the OSM cloud native sample scripts to create secrets, the secrets you create must align with what OSM expects. The sample scripts contain guidelines for creating secrets.
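For example, you can inspect a secret you created to confirm that its name and keys align with what the sample scripts expect; secretName and project are placeholders here:

```shell
# Show the secret's keys and metadata (values are base64-encoded)
kubectl get secret secretName -n project -o yaml
```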

The following diagram illustrates the role of Kubernetes Secrets in an OSM cloud environment:

Figure 6-1 Kubernetes Secrets in OSM Cloud Environment


There are three classifications of secrets, as shown in the above illustration:

  • Mandatory (Pre-requisite) Secrets
  • Optional Secrets
  • Custom Secrets

About Mandatory Secrets

Mandatory secrets must be created prior to running the cartridge management scripts or the instance creation script.

The toolkit provides the sample script $OSM_CNTK/scripts/manage-instance-credentials.sh to create these secrets for you. Refer to the script code for the naming and internal structure required for each secret.

The following topics provide more details about optional and custom Kubernetes Secrets.

About Optional Secrets

Optional secrets are dictated by enabling the out-of-the-box configuration. There is some functionality that is pre-configured in OSM cloud native and can be enabled or disabled in the specification files. When the functionality is enabled, these secrets must be created in the cluster before an OSM instance is created.

  • If you use OpenLDAP for authentication, OSM cloud native relies on the following secret to have been created:
    project-instance-openldap-credentials
    The toolkit provides a sample script, $OSM_CNTK/samples/credentials/manage-osm-ldap-credentials.sh, that creates this secret for you when you pass in "-o secret".
  • With Credential Store, the secrets hold credentials for external systems that the automation plug-ins access. These secrets are a pre-requisite to running cartridges that rely on this mechanism and must adhere to a naming convention. See "Provisioning Cartridge User Accounts" for more details.
  • When SAF is configured, SAF secrets are used. SAF secrets are similar to custom secrets and are declared in a specialized area within the instance specification that feeds into the SAF-specific WDT.
    safConnectionConfig:
      - name: external_system_identifier
        t3Url: t3_url
        secretName: secret_t3_user_pass
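Creating such a SAF secret follows the same CLI pattern shown later in "Using the Command-line Interface". The key names username and password are assumptions here; check the toolkit samples for the exact structure that the SAF configuration expects.

```shell
kubectl create secret generic secret_t3_user_pass \
  -n project \
  --from-literal=username=<saf_user> \
  --from-literal=password=<saf_password>
```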

About Custom Secrets

OSM cloud native provides a mechanism where WDT metadata can access sensitive data through a custom secret that is created in the cluster and then declared in the configuration. See "Accessing Kubernetes Secrets from WDT Metadata" to familiarize yourself with this process.

This class of secrets is required only if you need secrets for this mechanism.

To use custom secrets with WDT metadata:

Note:

As an example, this procedure uses a WDT snippet for authentication.
  1. Add secret usage in the WDT metadata fragment:
    Host: '@@SECRET:authentication-credentials:host@@'
    Port: '@@SECRET:authentication-credentials:port@@'
    ControlFlag: SUFFICIENT
    Principal: '@@SECRET:authentication-credentials:principal@@'
    CredentialEncrypted: '@@SECRET:authentication-credentials:credential@@'
  2. Add the secret to the project specification.
    #Custom secrets
    # Multiple secret names can be provided
    customSecrets:
      enabled: true
      secretNames:
        - authentication-credentials
  3. Create the secret in the cluster, by using any one of the following methods:
    • Using OSM cloud native toolkit scripts
    • Using a Template
    • Using the Command-line Interface
    In the example metadata shown in step 1, the secret must capture host, port, principal, and credential.

See "Mechanism for Creating Custom Secrets" for details about the methods.
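For the authentication example above, the command-line method might look like the following sketch; all values are placeholders:

```shell
kubectl create secret generic authentication-credentials \
  -n project \
  --from-literal=host=<auth_host> \
  --from-literal=port=<auth_port> \
  --from-literal=principal=<auth_principal> \
  --from-literal=credential=<auth_credential>
```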

Accommodating the Scope of Secrets

The WDT metadata fragments are defined at the project level as the project typically owns the solution definition. Accommodating this is a simple task. However, the scenario becomes complicated when you consider that there may be project level configuration that needs to allow for instance level control over the secret contents.

To walk through this, we will use authentication as an example and introduce a COM project that includes three instances: development, test, and production. The production environment has a dedicated authentication system, but the development and test instances use a shared authentication server.

To accommodate this scenario, the following changes must be made to each of the basic steps:

  1. Define a naming strategy for the secrets that introduces scoping. For instance, secrets that need instance level control could have the instance name prepended. In the example, this results in the following secret names:
    • COM-dev-authentication-credentials
    • COM-test-authentication-credentials
    • COM-prod-authentication-credentials
  2. Include the secret in the WDT fragment. In order for this scenario to work, a generic way is required to declare the "scope" or instance portion of the secret name. To do this, use the built-in Helm values:
    .Values.name - references the full instance name (project-instance)
    .Values.namespace - references the project name (project)
    
    If the fragment needs to support instance-level control, derive the instance name portion of the secret name.
    Host: '@@SECRET:{{ .Values.name }}-authentication-credentials:host@@'
    Port: '@@SECRET:{{ .Values.name }}-authentication-credentials:port@@'
    ControlFlag: SUFFICIENT
    Principal: '@@SECRET:{{ .Values.name }}-authentication-credentials:principal@@'
    CredentialEncrypted: '@@SECRET:{{ .Values.name }}-authentication-credentials:credential@@'
  3. Add the secret to the instance specification. The secret name must be provided in the instance specification as opposed to the project specification.
    ## Dev Instance Spec
    #Custom secrets
    # Multiple secret names can be provided
    customSecrets:
      enabled: true
      secretNames:
        - COM-dev-authentication-credentials

    ## Test Instance Spec
    #Custom secrets
    # Multiple secret names can be provided
    customSecrets:
      enabled: true
      secretNames:
        - COM-test-authentication-credentials

    ## Prod Instance Spec
    #Custom secrets
    # Multiple secret names can be provided
    customSecrets:
      enabled: true
      secretNames:
        - COM-prod-authentication-credentials
  4. Create the secret in the cluster by following any one of the methods described in the Mechanism for Creating Custom Secrets topic. In our example, the secret would need to capture host, port, principal and credential. Each instance would need a secret created, but the values provided depend on which authentication system is being used.
    # Dev secret creation
    kubectl create secret generic COM-dev-authentication-credentials \
      -n COM \
      --from-literal=principal=<value1> \
      --from-literal=credential=<value2> \
      --from-literal=host=<value3> \
      --from-literal=port=<value4>

    # Test secret creation
    kubectl create secret generic COM-test-authentication-credentials \
      -n COM \
      --from-literal=principal=<value1> \
      --from-literal=credential=<value2> \
      --from-literal=host=<value3> \
      --from-literal=port=<value4>

    # Production secret creation
    kubectl create secret generic COM-prod-authentication-credentials \
      -n COM \
      --from-literal=principal=<prodvalue1> \
      --from-literal=credential=<prodvalue2> \
      --from-literal=host=<prodvalue3> \
      --from-literal=port=<prodvalue4>

The following diagram illustrates the secret landscape in this example:

Figure 6-2 Landscape of Secrets



Mechanism for Creating Custom Secrets

You can create custom secrets in any of the following ways:
  • Using Scripts
  • Using a Template
  • Using the Command-line Interface

Using Scripts to Create Secrets

Functionality that can be enabled or disabled in OSM cloud native, such as OpenLDAP, NFS, and the Credential Store, relies on pre-requisite secrets. In such cases, the toolkit provides sample scripts that can create the secrets for you. While these scripts are useful for configuring instances quickly in development situations, remember that they are sample scripts and are not pipeline friendly. The scripts are also valuable as references: when a secret is mandated by OSM cloud native, both the secret name and the required secret data can be found in the sample script that populates it.

As an example, the secrets used by the Credential Store mechanism must follow a specific naming convention:
projectName-instanceName-osmcn-cred-mapName
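The convention can be sketched as simple string composition; the project, instance, and map names below are hypothetical:

```shell
# Compose a Credential Store secret name from its parts (example values)
projectName=sr
instanceName=quick
mapName=osm
secretName="${projectName}-${instanceName}-osmcn-cred-${mapName}"
echo "${secretName}"   # prints sr-quick-osmcn-cred-osm
```

The osmcn-cred infix is fixed by the convention, while the other parts vary per project, instance, and credential map.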

Using a Template

To create custom secrets using a template:

  1. Save the secret details into a template file.
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        weblogic.resourceVersion: domain-v2
        weblogic.domainUID: project-instance
        weblogic.domainName: project-instance
      namespace: project
      name: secretName
    type: Opaque
    stringData:
      password_key: value1
      user_key: value2
  2. Run the following command to create the secret:
    kubectl apply -f templateFile

Using the Command-line Interface

You can also specify the secret name and the details directly on the command-line interface:
kubectl create secret generic secretName \
-n project \
--from-literal=password_key=value1 \
--from-literal=user_key=value2

Adding JMS Queues and Topics

JMS queues and topics are unique because the base JMS resources (JMS server and JMS subdeployments) already exist in the domain as the OSM core application requires them. You can add custom queues and topics to the OSM JMS resources by specifying the appropriate content in the project specification file.

To add queues or topics, uncomment the sample in your specification file, providing the values necessary to align with your requirements.

Consider the following points:
  • The only mandatory values are 'name' and 'jndiName'.
  • Text in angular brackets does not have a default value. You must supply an actual value per your requirements.
  • The remaining parameters are set to their default values if omitted. When a different value is supplied in the specification file, it is used as an override to the default value.

Note:

There should only be one list of uniformDistributedQueues and one list of uniformDistributedTopics in the specification. When copying the content from the samples, ensure that you do not replicate these sections more than once.
To add JMS distributed queues:
# jms distributed queues
uniformDistributedQueues:
  - name: custom-queue-name
    jndiName: custom-queue-jndi
    resetDeliveryCountOnForward: false
    deliveryFailureParams:
      redeliveryLimit: 10
    deliveryParamsOverrides:
      timeToLive: -1
      priority: -1
      redeliveryDelay: 1000
      deliveryMode: 'No-Delivery'
      timeToDeliver: '-1'
 
To add JMS distributed topics:

# jms distributed topics
uniformDistributedTopics:
  - name: custom-topic-name
    jndiName: custom-topic-jndi
    resetDeliveryCountOnForward: false
    deliveryFailureParams:
      redeliveryLimit: 10
    deliveryParamsOverrides:
      timeToLive: -1
      priority: -1
      redeliveryDelay: 1000
      deliveryMode: 'No-Delivery'
      timeToDeliver: '-1'

Generating Error Queues for Custom Queues and Topics

You can generate error queues for all custom queues and topics automatically.

To generate error queues automatically, configure the following parameters in the project.yaml file:

errorQueue: 
    autoGenerate: false
    expirationPolicy: "Redirect"
    redeliveryLimit: 15

By default, the autoGenerate parameter is set to false. To generate error queues for all JMS queues automatically, set this parameter to true.

When autoGenerate is set to true, all custom queues and topics will have their own error queues.

The following sample shows the error queue generated for a custom queue:

'jms_queue_name_ERROR':
    ResetDeliveryCountOnForward: false
    SubDeploymentName: osm_jms_server
    JNDIName: error/jms_queue_jndiName
    IncompleteWorkExpirationTime: -1
    LoadBalancingPolicy: 'Round-Robin'
    ForwardDelay: -1
    Template: osmErrorJmsTemplate

Note:

  • All error queues have _ERROR as the suffix.
  • For internal queues and topics in OSM, generation of error queues is always enabled. Each queue and topic has its own _ERROR queue. Messages that cannot be delivered are redirected accordingly.
  • Disable this feature for O2A 2.1.2.1.0 cartridges used in an OSM cloud native environment. The O2A build generates its own project specification fragment, which must be used instead.

Creating a JMS Template

A JMS template provides an efficient means of defining multiple destinations with similar attribute settings.

You can add one or more JMS templates if required in addition to the one provided. To create additional JMS templates, copy the customJmsTemplate definition and rename it:
# JMS Template (optional). Uncomment to define "customJmsTemplate"
# Alternatively use the built-in template "customJmsTemplate"
#jmsTemplate:
#  customJmsTemplate:
#    DeliveryFailureParams:
#      RedeliveryLimit: 10
#      ExpirationPolicy: Discard
#    DeliveryParamsOverrides:
#      RedeliveryDelay: 1000
#      TimeToLive: -1
#      Priority: -1
#      TimeToDeliver: -1
To use a JMS template for a queue or topic definition, you can specify the template name, as well as the unique JNDI name:
# jms distributed queues. Uncomment to define one or more JMS queues under a
# single element uniformDistributedQueues.
uniformDistributedQueues: {} # This empty declaration should be removed if adding items here.
#uniformDistributedQueues:
#  - name: jms_queue_name
#    jndiName: jms_queue_jndiName
#    jmsTemplate: customJmsTemplate

# jms distributed topic. Uncomment to define one or more JMS Topics under a
# single element uniformDistributedTopics.
uniformDistributedTopics: {} # This empty declaration should be removed if adding items here.
#uniformDistributedTopics:
#  - name: jms_topic_name
#    jndiName: jms_topic_jndiName
#    jmsTemplate: customJmsTemplate

If the queues and topics need to be created under custom JMS resources, then the OSM cloud native WDT extension mechanism should be employed as described in "Adding a JMS System Resource".

Working with Cartridges

This section describes how you build, deploy, and undeploy OSM cartridges in a cloud native environment.

OSM cartridges are built using either Design Studio or build scripts, the same methods used for building cartridges in traditional environments.

This section covers deploying cartridges using the OSM cloud native toolkit, including offline and online deployment modes and the options for open and controlled environments.

Deploying Cartridges Using the OSM Cloud Native Toolkit

To deploy cartridge par files, OSM cloud native employs a mechanism using the OSM cloud native toolkit's manage-cartridges.sh script. You can deploy cartridge par files in offline or online modes.

Use the following commands with the manage-cartridges.sh script:
  • -p projectName: Mandatory. Name of the project.
  • -i instanceName: Mandatory. Name of the instance.
  • -s specPath: Mandatory. The location of the specification files. A colon (:) delimited list of directories.
  • -m customExtPath: Use this to specify the path of custom extension files. Takes a colon (:) delimited list of directories. If the path provided is empty while the custom flag is enabled (set to true) in the specifications, the script stops.
  • -o: Enables online cartridge deployment.
  • -c commandName: Mandatory. Use the following command names:
    • parDeploy: Use this to deploy a cartridge par file from your local file system. Use this for development environments only.
    • sync: Use this to synchronize cartridges using the project specification and remote repository. Use this for all controlled environments.
  • -f parPath: Mandatory if parDeploy is used. This specifies the path of the cartridge par file that you want to deploy.
  • -q: Optional. Disables verbose progress indicators.
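Putting these flags together, a typical parDeploy invocation for a development environment might look like the following sketch; the project, instance, and file paths are placeholders:

```shell
$OSM_CNTK/scripts/manage-cartridges.sh \
  -p project -i instance -s spec_Path \
  -c parDeploy -f /path/to/cartridge.par
```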
The manage-cartridges.sh script spins up a pod to perform the requested deployment activities:
  • If parDeploy is chosen, the script must be run such that it has access to the specified cartridge par file as well as the "kubectl cp" privileges on the pod that is spun up.
  • If sync is chosen, the script compares the list of cartridges and versions in the project specification file against those that are present in the OSM cloud native database and performs the necessary synchronization actions. The list in the project specification file must depict the desired target state:

    Note:

    In the actions listed below, "cartridge" refers to "cartridge+version".
    • If a cartridge is listed as deployed, but is not deployed in the database: it is deployed.
    • If a cartridge is listed as deployed and the same version exists in the database, the two are compared; if there is a difference, the new par file is redeployed.
    • If a cartridge is listed with a default setting that does not match with what is in the database, the default setting in the database is updated to match; no change is done to this setting if they already match.
    • If a cartridge is listed as fastundeployed and it exists as active in the database, it is fast-undeployed in the database. If the cartridge is already fast-undeployed in the database, nothing is done. If the cartridge does not exist in the database, nothing is done.
    The OSM cloud native toolkit ignores the "default" flag in the par file when the sync command is used; it enforces the list as specified in the project specification. For each cartridge, the sync validation ensures that exactly one version is tagged as default.

Using a remote repository is the recommended approach as all aspects of an OSM instance, including the cartridge set deployed, remain in source controlled configuration.

Each entry in the list of cartridges describes a specific cartridge using the name of the cartridge, its version, the intended deployment state, and the intended default state. In addition, it specifies a URL that can be used to download the cartridge par file into the cartridge management pod. This URL typically points to a remote repository that may require authentication or other parameters. The cartridge entry has fields that can be used to provide such parameters (in the form of "curl" command line parameters) as well as a secret that carries the username and password information.
cartridges:
  - name: name_of_the_cartridge                # Mandatory; must match the cartridge name in the par file
    url: URL_to_download_the_cartridge_par_file # Mandatory
    secret: Kubernetes_secret_in_the_project_namespace # Optional; required only if the remote URL server requires authentication
    params: command_line_parameters_passed_to_curl # Optional; additional parameters such as proxy settings for curl
    version: cartridge_version                 # Mandatory; must match the cartridge version in the par file (for example, 1.0.0.0.0)
    default: true|false                        # Mandatory; whether this cartridge version is the default
    deploymentState: deployed|fastundeployed   # Mandatory; the desired target state of the cartridge

Offline Cartridge Deployment

This deployment mode supports deployment of new cartridges, deployment of new versions of existing cartridges, and redeployment of existing cartridge versions with changes.

For offline cartridge deployment, all managed servers in your environment must be shut down. The script stops running if there are managed servers up and running. Rolling restart of managed servers is not performed for offline deployment.

When using the toolkit for deploying cartridges in offline mode, the running instance of OSM must be shut down first by scaling down the cluster size to 0:
vi spec_Path/project-instance.yaml

# Change the cluster size to 0

#cluster size
clusterSize: 0

$OSM_CNTK/scripts/upgrade-instance.sh -p project -i instance -s spec_Path
Run the following command to deploy cartridges in offline mode:
$OSM_CNTK/scripts/manage-cartridges.sh -p project -i instance -s spec_Path -c sync [-o]

Online Cartridge Deployment

This deployment mode supports deployment of new cartridges and deployment of new versions of existing cartridges.

Deploying cartridges in an OSM cloud native environment provides the following key benefits:

  • You can deploy the cartridges without needing to isolate OSM from order processing at the JMS/HTTP level.
  • You can describe the cartridges for an environment in a declarative fashion.

In online mode, you can deploy cartridges to your OSM cloud native running instance while orders from a cartridge that you deployed earlier are still being processed. To achieve this, you should have a minimum of two managed servers on which your OSM cloud native instance is running. In such an environment, when you deploy cartridges, OSM availability is uninterrupted and ongoing order processing continues.

You use the manage-cartridges.sh script with the -o option to enable online deployment of cartridges. After deploying the cartridges, the script performs a rolling restart of all the managed servers in your environment.

When deploying cartridges in online mode, the running instance of OSM must continue to run, and the cluster size must be at least 2.
vi spec_Path/project-instance.yaml

# Change the cluster size to a minimum of 2

#cluster size
clusterSize: 2

$OSM_CNTK/scripts/upgrade-instance.sh -p project -i instance -s spec_Path
Run the following command to deploy cartridges in online mode:
$OSM_CNTK/scripts/manage-cartridges.sh -p project -i instance -s spec_Path -c sync [-o]

Consider the following when deploying cartridges in online mode:

  • If no managed servers are running, the script warns that no managed server is up and running and that it is switching to offline deployment. The script then continues with offline deployment.
  • If only one managed server is running, the script fails to perform the deployment.

The OSM cloud native deployment provides two methods of supplying cartridge par files, based on the type of environment they are being deployed to:

  • Open Environments
  • Controlled Environments

Deploying Cartridges in Open Environments

Open environments are mostly development and some test environments. To deploy cartridges to a running instance of OSM cloud native in an open environment, you can use any of the following options:

  • Local par file
    Run the script as follows:
    $OSM_CNTK/scripts/manage-cartridges.sh -p projectName -i instanceName -s spec_Path -f cartridge_par_file -c parDeploy
  • Remote Repository (Unsecured)

    This approach could be suitable for test environments.

    1. Edit the project specification in your file repository to add entries for each cartridge to be deployed:
      ## Unsecured repository
      cartridges:
       - name: OracleComms_OSM_O2A_COMSOM_CSO_Solution
         version: 2.1.2.0.0
         url: http://example.com/Repo/OracleComms_OSM_O2A_COMSOM_CSO_Solution/OracleComms_OSM_O2A_COMSOM_CSO_Solution.par
         default: true
         deploymentState: deployed
       - name: SimpleRabbits
         version: 1.0.0.0.0
         url: http://example.com/Repo/SimpleRabbits/1.0/SimpleRabbits.par
         default: false
         deploymentState: fastundeployed
       - name: SimpleRabbits
         version: 2.0.0.0.0
         url: http://example.com/Repo/SimpleRabbits/2.0/SimpleRabbits.par
         default: true
         deploymentState: deployed
    2. Run the script as follows:
      $OSM_CNTK/scripts/manage-cartridges.sh -p project_name -i instance_name -s spec_path -c sync [-o]
  • Remote Repository - Disabling Verification

    To disable host verification:

    1. Pass the curl -k option in the params field of each cartridge entry, as follows.

      Note:

      Disabling verification on a secured repository is a security risk.
      ## secured repository, disabling host verification
      cartridges:
       - name: OracleComms_OSM_O2A_COMSOM_CSO_Solution
         version: 2.1.2.0.0
         url: http://example.com/Repo/OracleComms_OSM_O2A_COMSOM_CSO_Solution/OracleComms_OSM_O2A_COMSOM_CSO_Solution.par
         default: true
         deploymentState: deployed
         params: -k
       - name: SimpleRabbits
         version: 1.0.0.0.0
         url: http://example.com/Repo/SimpleRabbits/1.0/SimpleRabbits.par
         default: false
         deploymentState: fastundeployed
         params: -k
       - name: SimpleRabbits
         version: 2.0.0.0.0
         url: http://example.com/Repo/SimpleRabbits/2.0/SimpleRabbits.par
         default: true
         deploymentState: deployed
         params: -k
    2. Run the script as follows:
      $OSM_CNTK/scripts/manage-cartridges.sh -p project_name -i instance_name -s specification_path -c sync [-o]
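Because the cartridges list drives the sync operation, it can be useful to validate the entries before running the script. The following sketch checks the fields shown in the examples above; it is illustrative only, and the constraint that a cartridge name has at most one default version is an assumption for the example, not a documented toolkit rule.

```python
# Fields that appear in every cartridge entry in the examples above.
REQUIRED_KEYS = {"name", "version", "url", "default", "deploymentState"}

def validate_cartridges(cartridges):
    """Check each entry for required keys and (assumed) at most one
    default version per cartridge name."""
    defaults = {}
    for entry in cartridges:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"{entry.get('name', '?')}: missing {sorted(missing)}")
        if entry["default"]:
            defaults[entry["name"]] = defaults.get(entry["name"], 0) + 1
    dupes = [name for name, count in defaults.items() if count > 1]
    if dupes:
        raise ValueError(f"multiple default versions for: {dupes}")
    return True
```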

Deploying Cartridges in Controlled Environments

To install cartridges in controlled environments such as UAT, pre-production, and production, use only the declarative approach. Rather than copying the par files into the cartridge management pod, they are "pulled" from a URL.

The cartridge list is defined in the project specification, ensuring that the cartridge load is also under version control.

  • Using a Remote Repository

    To use a remote repository to deploy cartridges in a controlled environment:

    1. Edit the project specification in your file repository as follows:
      ## Credentials required
      cartridges:
       - name: OracleComms_OSM_O2A_COMSOM_CSO_Solution
         version: 2.1.2.0.0
         url: http://example.com/Repo/OracleComms_OSM_O2A_COMSOM_CSO_Solution/OracleComms_OSM_O2A_COMSOM_CSO_Solution.par
         default: true
         deploymentState: deployed
         secret: solution_cartridge_secret_name_in_lowercase
       - name: SimpleRabbits
         version: 1.0.0.0.0
         url: http://example.com/Repo/SimpleRabbits/1.0/SimpleRabbits.par
         default: false
         deploymentState: fastundeployed
         secret: solution_cartridge_secret_name_in_lowercase
       - name: SimpleRabbits
         version: 2.0.0.0.0
         url: http://example.com/Repo/SimpleRabbits/2.0/SimpleRabbits.par
         default: true
         deploymentState: deployed
         secret: solution_cartridge_secret_name_in_lowercase

      The secret contains any authentication credentials required to download the par file from the remote repository. The toolkit expects the secret to have username and password entries set to the appropriate values; curl uses these when downloading the par file.

      An example of creating the secret using kubectl on the command line is as follows:
      kubectl create secret generic solution_cartridge_secret_name_in_lowercase \
       -n project \
       --from-literal=username='remoteRepoUsername' \
       --from-literal=password='remoteRepoPassword'
    2. Run the script as follows:
      $OSM_CNTK/scripts/manage-cartridges.sh -p project_name -i instance_name -s spec_path -c sync [-o]
  • Using a Remote Repository - TLS/SSL

    For HTTPS, the SSL certificate of the repository server must be exposed to the cartridge management pod and then passed to curl as the command-line parameter --cacert path_to_repo_server_ssl_certificate. The path_to_repo_server_ssl_certificate is the path within the pod.

    To allow curl access to the SSL certificate within the cartridge management pod:
    1. Obtain the server certificate by running the following command:
      echo quit | openssl s_client -showcerts -servername repo_server_hostname -connect repo_server_url > path_to_repo_server_ssl_name.pem
    2. Run the register-certificate.sh script to create a Kubernetes secret that contains the SSL certificate:
      $OSM_CNTK/scripts/register-certificate.sh -p project_name -n secret_name -f path_to_repo_server_ssl_name.pem
    3. Add the following fragment to the project specification to enable the secret to be mounted at the path /etc/ssl/certs/ within the cartridge management pod. The name is the secret_name created in step 2 and type is the file extension of the certificate file:
      certificates:
        - name: secret_name
          type: file_type
       
      #example
      certificates:
        - name: mySecret
          type: pem
    4. Add the parameter --cacert /etc/ssl/certs/secret_name.file_type to the cartridges: params parameter in the project specification:
      cartridges:
       - name: OracleComms_OSM_O2A_COMSOM_CSO_Solution
         version: 2.1.2.0.0
         url: http://example.com/Repo/OracleComms_OSM_O2A_COMSOM_CSO_Solution/OracleComms_OSM_O2A_COMSOM_CSO_Solution.par
         default: true
         deploymentState: deployed
         params: --cacert /etc/ssl/certs/secret_name.file_type
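Conceptually, the secret credentials and the mounted certificate both feed into the curl invocation that downloads each par file. The following is an illustrative sketch of that assembly, not the toolkit's actual code; the function names are invented, and Kubernetes secret values are assumed to be base64-encoded as usual.

```python
import base64

def curl_args_for(url, secret_data):
    """Assemble curl arguments from a cartridge secret's username/password
    entries. Kubernetes stores secret values base64-encoded, so decode first."""
    user = base64.b64decode(secret_data["username"]).decode()
    pwd = base64.b64decode(secret_data["password"]).decode()
    # -u supplies the repository credentials; -o names the downloaded par file.
    return ["curl", "-u", f"{user}:{pwd}", "-o", url.rsplit("/", 1)[-1], url]

def cacert_param(cert_entry):
    """Derive the --cacert argument from a certificates: entry, using the
    /etc/ssl/certs/secret_name.file_type mount convention described above."""
    return f"--cacert /etc/ssl/certs/{cert_entry['name']}.{cert_entry['type']}"
```

For example, the certificates entry with name mySecret and type pem yields --cacert /etc/ssl/certs/mySecret.pem, matching the params value shown above.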

You use Design Studio or build scripts to undeploy (fast undeploy and full undeploy) OSM cartridges.

Deploying Cartridges Using Design Studio

You can deploy cartridges directly from Design Studio, using either the Eclipse user interface or headless Design Studio. Use Design Studio for deploying cartridges in scenarios with frequent build, deploy, and test cycles, but not for production environments. If it is used in conjunction with the OSM cloud native cartridge management mechanism, the deployed cartridges become out of sync with what is listed in the source-controlled specification file. For this reason, deploying cartridges using Design Studio is not recommended for environments where the specification file is considered the single source of truth for the set of deployed cartridges.

To incorporate Design Studio into the larger OSM cloud native ecosystem, you must already have mapped the hostname to the Kubernetes cluster or the load balancer, as described in "Planning and Validating Your Cloud Environment".

After confirming that this has been done, do the following in Design Studio:
  • Ensure that the connection URL of the Design Studio environment project matches your OSM cloud native environment. This is likely: http://instance.project.osm.org:30305/cartridge/wsapi. The suffix osm.org is configurable.
  • In the Design Studio workspace, depending on your network setup, you may need to set the Proxy bypass field in the Network Connection Preferences to: instance.project.osm.org

Provisioning Cartridge User Accounts

This section describes how to use the sample scripts to create credential store secrets and provide the instance configuration so that OSM cloud native can access the credentials.

The sample scripts also provide the ability to populate the OpenLDAP server so that OSM can authenticate any cartridge users. In this way, provisioning a cartridge user account uses the same mechanism regardless of the end location for the credentials.

This section covers the following topics:
  • Creating Credential Store Secret
  • Declaring the Secret
  • Configuring Other LDAP Systems

OSM solution cartridges have complex requirements around user credentials:

  • Automation plugins that handle communication with external systems need a programmatic way to access credentials so that outgoing requests can supply the appropriate credentials for the requested operation. To meet this requirement, a credential store mechanism is required. Credentials must be populated into a central repository for storing usernames and passwords, and OSM must be able to access this repository to pass credentials to the plugin code when requested.
  • Additionally, if a cartridge-defined (non-human) user account accesses an OSM API, the credentials for that account must also exist in the embedded LDAP so that OSM can authenticate the user. Similarly, cartridge human user accounts must exist in the external authentication system (OpenLDAP).

In summary, some cartridge defined users need to be provisioned in a credential store, some in OpenLDAP or other LDAP provider, and some users need to be defined in both.

The following table summarizes the system that cartridge user accounts need to be provisioned to:

Note:

When the same credentials need to exist in both the LDAP server and as a Kubernetes secret, care must be taken to ensure the credentials remain in sync.

Table 6-1 Cartridge User Accounts

User Credential Usage | LDAP | Kubernetes Secret | Description
OSM UI | Required | Not Required | Normal manual OSM user
OSM Web Service API | Required | Required | The cartridge code generates an OSM create order request or other OSM Web Service payload.
OSM XML API | Required | Required if API access is to another instance of OSM | Normal manual OSM user
OSM Automation | Required | Not Required | OSM automation plugin "run as" user
OSM REST API | Required | Required if API access is to another instance of OSM | REST API user
External Systems (Web Services, APIs, and so on) | Not Required | Required | The cartridge code generates a request for external systems that require authentication.

Creating Credential Store Secret

In a traditional deployment, OSM uses the Fusion Middleware Credential Store framework and provides tooling for creating and populating the credential store through the XMLIE's "credStoreAdmin" operation. OSM cloud native uses Kubernetes Secrets as the credential store and the OSM cloud native toolkit provides sample scripts that create credential store secrets and populate them with the required credentials.

Note:

If you use custom code that relies on the OPSS Keystore Service, you need to make changes for OSM cloud native as that mechanism is no longer supported. For details, see "Differences Between OSM Cloud Native and OSM Traditional Deployments".

A text file is used to describe the details required to provision the user accounts properly. Each user is captured in one line and has the following format:

map_name:key_name:username:credential-system[:osm-groups]

$OSM_CNTK/samples/credentials/osm_users.txt is used to define OSM human users for external LDAP but can be used as a template for other user credentials that need to be created.

Copy this file to your private specification repository under the instance specific directory and rename it to something meaningful. For example, rename the file as repo/cartridge_user_text_file.txt.

The map_name parameter is mandatory. If credential-system contains "secret", this value is used as the prefix of the name of the secret to be created.

Note:

If only LDAP is required, use "osm" for the secret prefix. This value is not used anywhere, but enables the sample to extract the remaining data properly.

The choice of map name and key name affects which OSM automation framework API can be used to retrieve the value within the automation plugin:

  • If you use "osm" as the map name and _sysgen_ as the key name, the credential record is accessed with the context:getOsmCredentialPassword API.
  • Any other map name and key name combination must be accessed with the context:getCredentialAsXML or context:getCredential APIs.

    Refer to the OSM SDK for more details.
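The relationship between the map/key choice and the retrieval API can be summarized in a small sketch. The function name credential_api is invented for illustration; only the API names come from the description above.

```python
def credential_api(map_name: str, key_name: str) -> str:
    """Return the automation-framework API used to retrieve a credential
    record, based on the map name and key name chosen for it."""
    if map_name == "osm" and key_name == "_sysgen_":
        return "context:getOsmCredentialPassword"
    # Any other map/key combination requires the general-purpose APIs.
    return "context:getCredential / context:getCredentialAsXML"
```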

The credential-system parameter is a mandatory parameter and must be at least one of the following values:

  • ldap: Creates the OSM human user against the external LDAP server.

    Note:

    The cartridge automation user account should be created in embedded LDAP by specifying the list of usernames in cartridgeUsers in projectName.yaml. Do not create them in external LDAP.
  • secret: Creates the human user or automation user against the Kubernetes Secret.

Note:

Use a comma to separate the values if creation in both the systems is required.

The osm-groups parameter is a list of OSM groups to associate the user with, in either the embedded or the external LDAP server.

The valid values for the osm-groups parameter are:
  • OMS_client
  • OMS_designer
  • OMS_user_assigner
  • OMS_workgroup_manager
  • OMS_xml_api
  • OMS_ws_api
  • OMS_ws_diag
  • OMS_log_manager
  • OMS_cache_manager
  • Cartridge_Management_WebService
  • OSM_automation
  • osmEntityClientGroup
  • osmRestApiGroup
Refer to OSM System Administrator's Guide for details on OSM user group mapping.
The following text shows a sample user information text file:
Line 1 osm:_sysgen_:osmfallout:ldap,secret:OMS_client,OMS_xml_api,OSM_automation,OMS_ws_api
Line 2 osm:_sysgen_:webuser:ldap,secret:OMS_client
Line 3 uim:_sysgen_:uimuser:secret
Line 4 uim:_sysgen_:uimadmin:secret
Line 5 osm:_sysgen_:osmlf:secret:OMS_client,OMS_xml_api,OSM_automation,OMS_ws_api
Line 6 # Guidelines
Line 7 mapName:keyName:userName:credentialSystem:OsmGroup

Note:

The secret contains username, password, and the groups.
In the above example:
  • Line 1 creates a user "osmfallout" in OpenLDAP and associates that user against the groups listed.
  • Line 2 creates a user "webuser" in OpenLDAP and associates this user to the OSM_client group.
  • Lines 3 and 4 create users "uimuser" and "uimadmin" in the "uim" credential secret.
  • Line 5 creates user "osmlf" in the "osm" credential secret.

The secrets that the manage-cartridge-credentials.sh script creates are named project-instance-osmcn-cred-mapName, as per the naming conventions required by OSM. For each unique mapName that you provide, the script creates one secret. This means that if five user entries exist for "uim", all five entries are stored in the single secret named project-instance-osmcn-cred-uim. The script prompts for passwords interactively.
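The record format and the secret-naming convention described above can be modeled as follows. This is an illustrative parser, not the script itself; the helper names are invented for the example.

```python
def parse_user_line(line: str) -> dict:
    """Parse one record of the map_name:key_name:username:credential-system[:osm-groups] format."""
    parts = line.strip().split(":")
    return {
        "map": parts[0],
        "key": parts[1],
        "user": parts[2],
        # credential-system may hold comma-separated values, e.g. "ldap,secret".
        "systems": parts[3].split(","),
        "groups": parts[4].split(",") if len(parts) > 4 else [],
    }

def secret_names(lines, project, instance):
    """One secret per unique map name that targets a secret, following the
    project-instance-osmcn-cred-mapName naming convention."""
    maps = {parse_user_line(l)["map"] for l in lines
            if "secret" in parse_user_line(l)["systems"]}
    return sorted(f"{project}-{instance}-osmcn-cred-{m}" for m in maps)
```

For instance, a file containing "osm" and "uim" map names that both target secrets would yield two secrets, one per map name, regardless of how many user entries share each map.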

To create the credential store secret:

  1. Run the following script:
    $OSM_CNTK/samples/credentials/manage-cartridge-credentials.sh \
    -p project \
    -i instance \
    -c create \
    -f fileRepo/customSolution_users.txt
     
     
    # You will see the following output
    secret/project-instance-osmcn-cred-uim created
  2. Validate that the secrets are created:
    kubectl get secret -n project
     
    NAME
    project-instance-osmcn-cred-uim 

Creating Cartridge User Accounts in Embedded LDAP

To create accounts for cartridge users in embedded LDAP, add all the cartridge users (only those with the prefix/map name osm) under the cartridgeUsers section in project.yaml. During the creation of the OSM server instance, an account is created in embedded LDAP for each listed cartridge user, with the same username, password, and groups as the Kubernetes secret.
cartridgeUsers:
  - osm
  - osmoe
  - osmde
  - osmfallout
  - osmoelf
  - osmlfaop
  - osmlf
  - tomadmin

Declaring the Secret

After the secret is created, declare the secret used by the credential store mechanism by editing your project specification. In the project specification, specify only mapName. The prefix project-instance-osmcn-cred is derived during the instance creation.

To declare the secrets, edit the project specification:
#External Credentials Store
externalCredStore:
  secrets:
    mapNames:
      - mapName

The OSM cloud native configuration provides a start-up parameter that tells the OSM core application whether the credentials are held in a WebLogic Credential Store (for traditional deployments) or in a Kubernetes Secret credential store (for cloud native); this parameter is set for you. Cartridges that rely on accessing these credentials are then enabled for execution.

Configuring Other LDAP Systems

The manage-cartridge-credentials.sh script supports the OpenLDAP system. To provide support for a different LDAP provider, you must modify the script. Also, the corresponding LDAP client or the API must be installed on the system where the script is executed.

You must modify the following functions within this script:
  • create_ldap_account. This function creates the user account in the LDAP system and associates the user to the specified groups.
  • update_ldap_account. This function updates the user password.
  • delete_ldap_account. This function deletes the user from the LDAP system and disassociates the user from the specified groups.
  • verify_ldap_account. This function verifies that the specified user exists in the LDAP server.
For details on developing the functions, see the developer's guide of the target LDAP server that you want to use.
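As a sketch of the contract these four functions fulfill, the following in-memory stand-in mirrors their expected behavior. It is illustrative only; a real replacement would call your LDAP provider's client tooling rather than a Python dictionary.

```python
class InMemoryLdap:
    """Stand-in provider showing the four operations that
    manage-cartridge-credentials.sh expects a different LDAP provider
    to implement (illustrative, not a usable provider)."""

    def __init__(self):
        # username -> {"password": ..., "groups": set of group names}
        self.users = {}

    def create_ldap_account(self, username, password, groups):
        """Create the user and associate it with the specified groups."""
        self.users[username] = {"password": password, "groups": set(groups)}

    def update_ldap_account(self, username, new_password):
        """Update the user's password."""
        self.users[username]["password"] = new_password

    def delete_ldap_account(self, username):
        """Delete the user and its group associations."""
        self.users.pop(username, None)

    def verify_ldap_account(self, username):
        """Return True if the user exists."""
        return username in self.users
```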