8 Exploring Alternate Configuration Options
The OSM cloud native toolkit provides samples and documentation for setting up your OSM cloud native environment using standard configuration options. However, you can choose to explore alternate configuration options for setting up your environment, based on your requirements. This chapter describes alternate configurations you can explore, allowing you to decide how best to configure your OSM cloud native environment to suit your needs.
You can choose alternate configuration options for the following:
- Setting Up Authentication
- Working with Shapes
- Injecting Custom Configuration Files
- Choosing Worker Nodes for Running OSM Cloud Native
- Working with Ingress, Ingress Controller, and External Load Balancer
- Using an Alternate Ingress Controller
- Reusing the Database State
- Setting Up Persistent Storage
- Setting Up Database Optimizer Statistics
- Leveraging Oracle WebLogic Server Active GridLink
- Managing Logs
- Managing OSM Cloud Native Metrics
The sections that follow provide instructions for working with these configuration options.
Setting Up Authentication
By default, OSM uses the WebLogic embedded LDAP as the authentication provider and all OSM system users are created in embedded LDAP during instance creation. For human users, you may set up an optional authentication for the users who access OSM through user interfaces. See "Planning and Validating Your Cloud Environment" for information on the components that are required for setting up your cloud environment. The OSM cloud native toolkit provides samples that you use to integrate components such as OpenLDAP, WebLogic Kubernetes Operator (WKO), and Traefik. This section describes the tasks you must do for configuring optional authentication for OSM cloud native human users.
Perform the following tasks using the samples provided with the OSM cloud native toolkit:
- Install and configure OpenLDAP. This is required to be done once for your organization.
- Install OpenLDAP clients. This is required to be performed on each host that installs and runs the toolkit scripts and when a Kubernetes cluster is shared by multiple hosts.
- In the OpenLDAP server, create the root node for each OSM instance
Installing and Configuring OpenLDAP
OpenLDAP enables your organization to handle authentication for all instances of OSM. You install and configure OpenLDAP once for your organization.
To install and configure OpenLDAP:
- Run the following command, which installs OpenLDAP:
$ sudo -s yum -y install "openldap" "migrationtools"
- Specify a password by running the following command:
$ sudo -s slappasswd
New password:
Re-enter new password:
- Configure OpenLDAP by running the following commands:
$ sudo -s
$ cd /etc/openldap/slapd.d/cn=config
$ vi olcDatabase\=\{2\}hdb.ldif
- Update the values for the following parameters:
olcSuffix: dc=osmcn-ldap,dc=com
olcRootDN: cn=Manager,dc=osmcn-ldap,dc=com
olcRootPW: ssha
where ssha is the SSHA hash generated by slappasswd.
Note:
Ignore the warning about editing the file manually.
- Update the dc values for the olcAccess parameter as follows:
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read by dn.base="cn=Manager,dc=osmcn-ldap,dc=com" read by * none
- Test the configuration by running the following command:
sudo -s slaptest -u
Ignore the checksum warnings in the output and ensure that you get a success message at the end.
- Run the following commands, which restart and enable LDAP:
sudo -s systemctl restart slapd
sudo -s systemctl enable slapd
sudo -s cp -rf /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/cosine.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/nis.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif
- Create a root node named domain, which will be the top parent for all OSM instances.
- Run the following command to create a new file named base.ldif:
sudo -s vi /root/base.ldif
- Add the following entries to the base.ldif file:
dn: ou=Domains,dc=osmcn-ldap,dc=com
objectClass: top
objectClass: organizationalUnit
ou: Domains
- Run the following commands to load the contents of the base.ldif file and verify them:
ldapadd -x -W -D "cn=Manager,dc=osmcn-ldap,dc=com" -f /root/base.ldif
ldapsearch -x cn=Manager -b dc=osmcn-ldap,dc=com
- Open the LDAP port 389 on all Kubernetes nodes in the cluster.
Installing OpenLDAP Clients
In environments where the Kubernetes cluster is shared by multiple hosts, you must install the OpenLDAP clients on each host. You use the scripts in the toolkit to populate the LDAP server with users and groups. To install the OpenLDAP clients, run the following command:
sudo -s yum -y install openldap-clients
Creating the Root Node
You must create the root node for each OSM instance before additional OSM non-automation users and OSM groups can be created.
The toolkit provides a sample script ($OSM_CNTK/samples/credentials/manage-osm-ldap-credentials.sh) that you can use to create the root node in the LDAP tree for the OSM instance.
Run the $OSM_CNTK/samples/credentials/manage-osm-ldap-credentials.sh script by passing in -o account.
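For orientation, the per-instance root node created under the ou=Domains parent resembles the following LDIF sketch. The exact attributes are determined by the script; the entry below is an illustrative assumption for a project named sr and an instance named quick:

```ldif
dn: ou=sr-quick,ou=Domains,dc=osmcn-ldap,dc=com
objectClass: top
objectClass: organizationalUnit
ou: sr-quick
```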
Working with Shapes
The OSM cloud native toolkit provides the following pre-configured shapes:
- charts/osm/shapes/dev.yaml. This can be used for development, QA and user acceptance testing (UAT) instances.
- charts/osm/shapes/devsmall.yaml. This can be used to reduce CPU requirements for small development instances.
- charts/osm/shapes/prod.yaml. This can be used for production, pre-production, and disaster recovery (DR) instances.
- charts/osm/shapes/prodlarge.yaml. This can be used for production, pre-production and disaster recovery (DR) instances that require more memory for OSM cartridges and order caches.
- charts/osm/shapes/prodsmall.yaml. This can be used to reduce CPU requirements for production, pre-production and disaster recovery (DR) instances. For example, it can be used to deploy a small production cluster with two managed servers when the order rate does not justify two managed servers configured with a prod or prodlarge shape. For production instances, Oracle recommends two or more managed servers. This provides increased resiliency to a single point of failure and can allow order processing to continue while failed managed servers are being recovered.
You can create custom shapes using the pre-configured shapes. See "Creating Custom Shapes" for details.
The pre-defined shapes come in standard sizes, which enable you to plan your Kubernetes cluster resource requirement.
Table 8-1 Sizing Requirements of Shapes for a Managed Server
Shape | Kube Request | Kube Limit | JVM Heap (GB) |
---|---|---|---|
prodlarge | 80 GB RAM, 15 CPU | 80 GB RAM, 15 CPU | 64 |
prod | 48 GB RAM, 15 CPU | 48 GB RAM, 15 CPU | 31 |
prodsmall | 48 GB RAM, 7.5 CPU | 48 GB RAM, 7.5 CPU | 31 |
dev | 8 GB RAM, 2 CPU | 8 GB RAM | 5 |
devsmall | 8 GB RAM, 0.5 CPU | 8 GB RAM | 5 |
The following table lists the sizing requirements of the shapes for an admin server:
Table 8-2 Sizing Requirements of Shapes for an Admin Server
Shape | Kube Request | Kube Limit | JVM Heap (GB) |
---|---|---|---|
prodlarge | 8 GB RAM, 2 CPU | 8 GB RAM | 4 |
prod | 8 GB RAM, 2 CPU | 8 GB RAM | 4 |
prodsmall | 8 GB RAM, 2 CPU | 8 GB RAM | 4 |
dev | 3 GB RAM, 1 CPU | 4 GB RAM | 1 |
devsmall | 3 GB RAM, 0.5 CPU | 4 GB RAM | 1 |
These values are encoded in the specifications and are automatically part of the individual pod configuration. The Kubernetes scheduler evaluates the Kube request settings to find space for each pod in the worker nodes of the Kubernetes cluster.
To estimate the Kubernetes cluster capacity required, consider the following parameters:
- Number of development instances required to be running in parallel: D
- Number of managed servers expected across all the development instances: Md (Md will be equal to D if all the development instances are 1 MS instances)
- Number of production (and production-like) instances required to be running in parallel: P
- Number of managed servers expected across all production instances: Mp
- Assume use of "dev" and "prod" shapes
- CPU requirement (CPUs) = D * 1 + Md * 2 + P * 2 + Mp * 15
- Memory requirement (GB) = D * 4 + Md * 8 + P * 8 + Mp * 48
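As a worked example, the formulas above can be evaluated with a small shell snippet. The instance counts below are hypothetical; substitute your own values:

```shell
# Illustrative capacity estimate using the formulas above, assuming:
# 2 dev instances with 1 managed server each, and
# 1 production instance with 2 managed servers.
D=2    # development instances
Md=2   # managed servers across all development instances
P=1    # production instances
Mp=2   # managed servers across all production instances

CPUS=$((D*1 + Md*2 + P*2 + Mp*15))
MEM_GB=$((D*4 + Md*8 + P*8 + Mp*48))

echo "CPU requirement (CPUs): $CPUS"     # prints 38 for these inputs
echo "Memory requirement (GB): $MEM_GB"  # prints 128 for these inputs
```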
Note:
The production managed servers take their memory and CPU in large chunks. The Kubernetes scheduler requires the capacity of each pod to be satisfied within a particular worker node and does not schedule the pod if that capacity is fragmented across the worker nodes.
The shapes are pre-tuned for generic development and production environments. You can create an OSM instance with either of these shapes, by specifying the preferred one in the instance specification.
# Name of the shape. The OSM cloud native shapes are devsmall, dev, prodsmall, prod, and prodlarge.
# Alternatively, a custom shape name can be specified (as the filename without the extension)
shape: dev
Creating Custom Shapes
You create custom shapes by copying the provided shapes and then specifying the desired tuning parameters. Do not edit the values in the shapes provided with the toolkit. Shape specifications include tuning parameters such as:
- The number of threads allocated to OSM work managers
- OSM connection pool parameters
- Order cache sizes and inactivity timeouts
To create a custom shape:
- Copy one of the pre-configured shapes and save it to your source repository.
- Rename the shape and update the tuning parameters as required.
- In the instance specification, specify the name of the shape you copied and renamed:
shape: custom
- Create the domain, ensuring that the location of your custom shape is included in the comma-separated list of directories passed with -s:
$OSM_CNTK/scripts/create-instance.sh -p project -i instance -s spec_Path
Note:
While copying a pre-configured shape or editing your custom shape, ensure that you preserve any configuration that has comments indicating that it must not be deleted.
Injecting Custom Configuration Files
Sometimes, a solution cartridge may require access to a file on disk. A common example is for reading of property files or mapping rules.
A solution may also need to provide configuration files for reference via parameters in the oms-config.xml file for OSM use (for example, for operational order jeopardies and OACC runtime configuration).
- Make a copy of the $OSM_CNTK/samples/customExtensions/custom-file-support.yaml file.
- Edit it so that it contains the contents of the files. See the comments in the file for specific instructions.
- Save it (retaining its name) into the directory where you save all extension files, referred to here as extension_directory. See "Extending the WebLogic Server Deploy Tooling (WDT) Model" for details.
- Edit your project specification to reference the desired files in the customFiles element:
#customFiles:
#  - mountPath: /some/path/1
#    configMapSuffix: "path1"
#  - mountPath: /some/other/path/2
#    configMapSuffix: "path2"
When you run create-instance.sh or upgrade-instance.sh, provide the extension_directory in the "-m" command-line argument. In your oms-config.xml file or in your cartridge code, you can refer to these custom files as mountPath/filename, where mountPath comes from your project specification and filename comes from your custom-file-support.yaml contents. For example, if your custom-file-support.yaml file contains a file called properties.txt and you have a mount path of /mycompany/mysolution/config, then you can refer to this file in your cartridge or in the oms-config.xml file as /mycompany/mysolution/config/properties.txt.
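For illustration, a project specification with the customFiles element uncommented might look like the following. The mount path and suffix are hypothetical:

```yaml
customFiles:
  - mountPath: /mycompany/mysolution/config
    configMapSuffix: "solution-config"
```

With a file named properties.txt declared in custom-file-support.yaml, the cartridge or oms-config.xml would then read it as /mycompany/mysolution/config/properties.txt.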
- The files created are read-only for OSM and for the cartridge code.
- The mountPath parameter provided in the project specification should point to a new directory location. If the location is an existing location, all of its existing content will be occluded by the files you are injecting.
- Do not provide the same mountPath more than once in a project specification.
- The custom-file-support.yaml file in your extension_directory is part of your configuration-as-code, and must be version controlled as with other extensions and specifications.
To modify the contents of a custom file, update your custom-file-support.yaml file in your extension_directory and invoke upgrade-instance.sh. Changes to the contents of the existing files are immediately visible to the OSM pods. However, you may need to perform additional actions in order for these changes to take effect. For example, if you changed a property value in your custom file, that will only be read the next time your cartridge runs the appropriate logic.
To remove a custom file that is no longer referenced:
- Update the instance specification to set the size to 0 and then run upgrade-instance.sh.
- Update the instance specification to set the size to the initial value and remove the file from your custom-file-support.yaml file.
- Update the customFiles parameter in your project specification and run upgrade-instance.sh.
Choosing Worker Nodes for Running OSM Cloud Native
By default, OSM cloud native has its pods scheduled on all worker nodes in the Kubernetes cluster in which it is installed. However, in some situations, you may want to choose a subset of nodes where pods are scheduled, for reasons such as the following:
- Licensing restrictions: For example, Coherence deployment could be limited to specific worker nodes, or there could be a limit on the number of CPUs where Coherence is deployed.
- Non-license restrictions: Limiting the deployment of OSM to specific worker nodes for each team, for reasons such as capacity management, chargeback, budgetary reasons, and so on.
# If OSM cloud native instances must be targeted to a subset of worker nodes in the
# Kubernetes cluster, tag those nodes with a label name and value, and choose
# that label+value here.
# key : any node label key
# values : list of values to choose the node.
# If any of the values is found for the above label key, then that
# node is included in the pod scheduling algorithm.
#
# This can be overridden in instance specification if required.
osmcnTargetNodes: {} # This empty declaration should be removed if adding items here.
#osmcnTargetNodes:
# nodeLabel:
## oracle.com/licensed-for-coherence is just an indicative example, any label and its values can be used for choosing nodes.
# key: oracle.com/licensed-for-coherence
# values:
# - true
- There is no restriction on node label key. Any valid node label can be used.
- There can be multiple valid values for a key.
- You can override this configuration in the instance specification yaml file, if required.
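Filling in the sample, an uncommented configuration that schedules OSM pods only on nodes labeled for Coherence might look like the following. The label key shown is the indicative example from the sample; any node label and value can be used:

```yaml
osmcnTargetNodes:
  nodeLabel:
    key: oracle.com/licensed-for-coherence
    values:
      - "true"
```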
Working with Ingress, Ingress Controller, and External Load Balancer
The ingress controller for an instance is chosen through the ingressController parameter in the instance specification:
# valid values are TRAEFIK, GENERIC, OTHER
ingressController: "TRAEFIK"
To create and delete the ingress, use the following toolkit scripts:
$OSM_CNTK/scripts/create-ingress.sh -p project -i instance -s $SPEC_PATH
$OSM_CNTK/scripts/delete-ingress.sh -p project -i instance
The Traefik ingress controller works by creating an operator in its own "traefik" namespace and exposing a NodePort service. However, not all ingress controllers behave the same way. To accommodate all types of ingress controllers, by default, the instance.yaml file provides the loadBalancerPort parameter.
If an external load balancer is used, it needs to be connected to the NodePort service of the ingress controller. Hence, externalLoadBalancerIP also needs to be present in instance.yaml.
- If an external load balancer is not configured, fetch loadBalancerPort by running the following command:
$ kubectl -n $TRAEFIK_NS get service traefik-operator --output=jsonpath="{..spec.ports[?(@.name=='http')].nodePort}"
- If an external load balancer is used, fetch loadBalancerPort by running the following command:
$ kubectl -n $TRAEFIK_NS get service traefik-operator --output=jsonpath="{..spec.ports[?(@.name=='http')].port}"
# If external hardware or software load balancer is used, set this value to that frontend host IP.
# If OCI load balancer is used, then set externalLoadBalancerIP from OCI LBaaS
#externalLoadBalancerIP: ""
# For Traefik Ingress Controller:
# If external load balancer is used, then this would be 80, else traefik pod's Nodeport (30305)
loadBalancerPort: 80
Note:
If you choose Traefik or any other ingress controller, you can move the loadBalancerPort and externalLoadBalancerIP parameters to project.yaml.
Using an Alternate Ingress Controller
By default, OSM cloud native supports Traefik and provides sample files for integration. However, you can use any Ingress controller that supports host-based routing and session stickiness with cookies. OSM cloud native uses the term "generic" ingress for scenarios where you want to leverage the Ingress capabilities that the Kubernetes platform may provide.
To use a generic ingress controller, you must create the ingress object and configure your OSM instance to use it. The toolkit uses an ingress Helm chart ($OSM_CNTK/samples/charts/ingress-per-domain/templates/traefik-ingress.yaml) and scripts for creating the ingress objects. If you want to use a generic ingress controller, these samples can be used as a reference and customized as necessary.
If your OSM cloud native instance needs to secure incoming communications, then look at the $OSM_CNTK/samples/charts/ingress-per-domain/templates/traefik-ingress.yaml file. This file demonstrates the configuration for a TLS-enabled Traefik ingress that can be used as a sample.
The following values are used in the ingress definitions:
- domainUID: Combination of project-instance. For example, sr-quick.
- clusterName: The name of the cluster in lowercase. Replace any hyphens "-" with underscore "_". The default name of the cluster in values.yaml is c1.
The following table lists the service name and service ports for Ingress rules:
Table 8-3 Service Name and Service Ports for Ingress Rules
Rule | Service Name | Service Port | Purpose |
---|---|---|---|
instance.project.loadBalancerDomainName | domainUID-cluster-clusterName | 8001 | For access to OSM through UI, XMLAPI, Web Services, and so on. |
t3.instance.project.loadBalancerDomainName | t3.instance.project.loadBalancerDomainName | 30303 | OSM T3 Channel access for WLST, JMS, and SAF clients. |
admin.instance.project.loadBalancerDomainName | domainUID-admin | 7001 | For access to OSM WebLogic Admin Console UI. |
Ingresses need to be created for each of the above rules per the following guidelines:
- Before running create-instance.sh, ingress must be created.
- After running delete-instance.sh, ingress must be deleted.
You can develop your own code to handle your ingress controller or copy the sample ingress-per-domain chart and add additional template files for your ingress controller with a new value for the type (NGINX, for example).
- The reference sample for creation is: $OSM_CNTK/scripts/config-ingress.sh
- The reference sample for deletion is: $OSM_CNTK/scripts/delete-ingress.sh
To use a generic ingress controller, set the ingressController parameter in the instance specification at $SPEC_PATH/project-instance.yaml:
# valid values are TRAEFIK, GENERIC, OTHER
ingressController: "GENERIC"
If any of the supported Ingress controllers or even a generic ingress does not meet your requirements, you can choose "OTHER".
By choosing this option, OSM cloud native does not create or manage any ingress required for accessing the OSM cloud native services. However, you may choose to create your own ingress objects based on the service and port details mentioned in the above table.
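As a sketch, a hand-crafted Kubernetes ingress for the main OSM rule from Table 8-3 might look like the following for a project sr and instance quick. The host name, namespace, and ingress class are assumptions based on the conventions described above; adapt them to your environment:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sr-quick-osm-ingress
  namespace: sr
  annotations:
    kubernetes.io/ingress.class: "nginx"  # illustrative; depends on your controller
spec:
  rules:
    - host: quick.sr.osm.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sr-quick-cluster-c1  # domainUID-cluster-clusterName
                port:
                  number: 8001
```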
Note:
Regardless of the choice of ingress controller, it is mandatory to provide the value of loadBalancerPort in one of the specification files. This is used for establishing the front-end cluster.
Reusing the Database State
When an OSM instance is deleted, the state of the database remains unaffected, which makes it available for re-use. This is common in the following scenarios:
- When an instance is deleted and the same instance is re-created using the same project and the instance names, the database state is unaffected. For example, consider a performance instance that does not need to be up and running all the time, consuming resources. When it is no longer actively being used, its specification files and PDB can be saved and the instance can be deleted. When it is needed again, the instance can be rebuilt using the saved specifications and the saved PDB. Another common scenario is when developers delete and re-create the same instance multiple times while configuration is being developed and tested.
- When a new instance is created to point to the data of another instance with a new project and instance names, the database state is unaffected. A developer, who might want to create a development instance with the data from a test instance in order to investigate a reported issue, is likely to use their own instance specification and the OSM data from PDB of the test instance.
The database state for an OSM instance consists of the following:
- The OSM DB (schema and data)
- The RCU DB (schema and data)
Recreating an Instance
You can re-create an OSM instance with the same project and instance names, pointing to the same database. In this case, both the OSM DB and the RCU DB are re-used, making the sequence of events for instance re-creation relatively straightforward.
To re-create an instance, you require the following artifacts from the original instance:
- PDB
- The project and instance specification files
Reusing the OSM Schema
To reuse the OSM DB, the secret for the PDB must still exist:
project-instance-database-credentials
This is the osmdb credential in the manage-instance-credentials.sh script.
Reusing the RCU
To reuse the RCU, the following secrets must still exist:
- project-instance-rcudb-credentials. This is the rcudb credential.
- project-instance-opss-wallet-password-secret. This is the opssWP credential.
- project-instance-opss-walletfile-secret. This is the opssWF credential.
If the opssWP and opssWF secrets no longer exist and cannot be re-created from offline data, then drop the RCU schema and re-create it using the OSM DB Installer.
You can then re-create the instance by running:
$OSM_CNTK/scripts/create-instance.sh -p project -i instance -s spec_Path
Creating a New Instance
If the original instance does not need to be retained, then the original PDB can be re-used directly by a new instance. If however, the instance needs to be retained, then you must create a clone of the PDB of the original instance. This section describes using a newly cloned PDB for the new instance.
If possible, ensure that the images specified in the project specification (project.yaml) match the images in the specification files of the original instance.
Reusing the OSM Schema
Using the manage-instance-credentials.sh script, ensure that the osmdb credential secret exists and points to your cloned PDB:
project-instance-database-credentials
If your new instance must reference a newer OSM DB installer image in its specification files than the original instance, it is recommended to invoke an in-place upgrade of OSM schema before creating the new instance.
# Upgrade the OSM schema to match new instance's specification files
# Do nothing if schema already matches
$OSM_CNTK/scripts/install-osmdb.sh -p project -i instance -s spec_path -c 1
For the RCU DB, you can choose one of the following options:
- Create a new RCU
- Reuse RCU
Creating a New RCU
If you only wish to retain the OSM schema data (cartridges and orders), then you can create a new RCU schema.
The following steps provide a consolidated view of RCU creation described in "Managing Configuration as Code".
Create the following secrets using your new project and instance names:
- project-instance-rcudb-credentials. This is the rcudb credential and describes the new RCU schema you want in the clone.
- project-instance-opss-wallet-password-secret. This is the opssWP credential unique to your new instance.
# Create a fresh RCU DB schema while preserving OSM schema data
$OSM_CNTK/scripts/install-osmdb.sh -p project -i instance -s spec_path -c 7
With this approach, the RCU schema from the original instance is still available in the cloned PDB, but is not used by the new instance.
Reusing the RCU
Using the manage-instance-credentials.sh script, create the following secret using your new project and instance names:
project-instance-rcudb-credentials
The secret should describe the old RCU schema, but with new PDB details.
Reusing RCU Schema Prefix
Over time, if PDBs are cloned multiple times, it may be desirable to avoid the proliferation of defunct RCU schemas by re-using the schema prefix and re-initializing the data. There is no OSM metadata or order data stored in the RCU DB, so the data can be safely re-initialized.
Create the following secret using your new project and instance names:
project-instance-opss-wallet-password-secret. This is the opssWP credential unique to your new instance.
To re-install the RCU, invoke the DB Installer:
$OSM_CNTK/scripts/install-osmdb.sh -p project -i instance -s spec_path -c 5
Reusing RCU Schema and Data
In order to reuse the full RCU DB from another instance, the original opssWF and opssWP secrets must be copied to the new environment and renamed following the convention: project-instance-opss-wallet-password-secret and project-instance-opss-walletfile-secret. This directs Fusion Middleware OPSS to access the data using the secrets.
Then create the instance:
$OSM_CNTK/scripts/create-instance.sh -p project -i instance -s spec_path
Setting Up Persistent Storage
OSM cloud native can be configured to use a Kubernetes Persistent Volume to store data that needs to be retained even after a pod is terminated. This data includes application logs, JFR recordings and DB Installer logs, but does not include any sort of OSM state data. When an instance is re-created, the same persistent volume need not be available. When persistent storage is enabled in the instance specification, these data files, which are written inside a pod are re-directed to the persistent volume.
Data from all instances in a project may be persisted, but each instance does not need a unique location for logging. Data is written to a project-instance folder, so multiple instances can share the same end location without destroying data from other instances.
The final location for this data should be one that is directly visible to the users of OSM cloud native. Development instances may simply direct data to a shared file system for analysis and debugging by cartridge developers, whereas formal test and production instances may need the data to be scraped by a logging toolchain such as EFK, which can then process the data and make it available in various forms. The recommendation therefore is to create a PV-PVC pair for each class of destination within a project: in this example, one for developers to access and one that feeds into a toolchain.
A PV-PVC pair would be created for each of these "destinations", that multiple instances can then share. A single PVC can be used by multiple OSM domains. The management of the PV and PVC lifecycles is beyond the scope of OSM cloud native.
The OSM cloud native infrastructure administrator is responsible for creating and deleting PVs or for setting up dynamic volume provisioning.
The OSM cloud native project administrator is responsible for creating and deleting PVCs as per the standard documentation in a manner such that they consume the pre-created PVs or trigger the dynamic volume provisioning. The specific technology supporting the PV is also beyond the scope of OSM cloud native. However, samples for PV supported by NFS are provided.
Creating a PV-PVC Pair
The technology supporting the Kubernetes PV-PVC is not dictated by OSM cloud native. Samples have been provided for NFS and can either be used as is, or as a reference for other implementations.
To create a PV-PVC pair supported by NFS:
- Edit the sample PV and PVC yaml files and update the entries enclosed in brackets:
vi $OSM_CNTK/samples/nfs/pv.yaml
vi $OSM_CNTK/samples/nfs/pvc.yaml
Note:
PVCs need to be ReadWriteMany.
- Create the Kubernetes PV and PVC:
kubectl create -f $OSM_CNTK/samples/nfs/pv.yaml
kubectl create -f $OSM_CNTK/samples/nfs/pvc.yaml
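The toolkit samples follow the standard Kubernetes NFS volume layout. A minimal sketch is shown below; the server, export path, capacity, and names are placeholders, not the sample's actual contents:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: storage-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany          # PVCs need to be ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com  # placeholder NFS server
    path: /export/osm        # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
```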
After the PVC is created, enable persistent storage in the instance specification:
# The storage volume must specify the PVC to be used for persistent storage.
storageVolume:
  enabled: true
  pvc: storage-pvc
After the instance is created, the persistent volume contains a directory per project-instance, with sub-directories for each type of data. For example:
[oracle@localhost project-instance]$ dir
db-installer logs performance
Setting Up Database Optimizer Statistics
As part of the setup of a highly performant database for OSM, it is necessary to set up database optimizer statistics. OSM DB Installer can be used to set up the database partition statistics, which ensures a consistent source of statistics for new partitions so that the database generates optimal execution plans for queries in those partitions.
About the Default Partition Statistics
The OSM DB Installer comes with a set of default partition statistics. These statistics come from an OSM system running a large number of orders (over 400,000) for a cartridge of reasonable complexity. These partition statistics are usable as-is for production.
Setting Up Database Partition Statistics
To use the provided default partition statistics, no additional input, in terms of specification files, secrets or other runtime aspects, is required for the OSM cloud native DB Installer.
The OSM cloud native DB Installer is invoked during the OSM instance creation, to either create or update the OSM schema. The installer is configured to automatically populate the default partition statistics (to all partitions) for a newly created OSM schema when the "prod", "prodsmall", or "prodlarge" (Production) shape is declared in the instance specification. The statistics.loadPartitionStatistics field within these shape files is set to true to enable the loading.
If you want to load partition statistics for a non-production shape, or if you want to reload statistics due to a DB or schema upgrade, run the install-osmdb.sh script with command code 11 to load the statistics to all existing partitions in the OSM schema:
$OSM_CNTK/scripts/install-osmdb.sh -p project -i instance -s $SPEC_PATH -c 11
Note:
To load statistics to specific partitions only, specify the partition names in the -b parameter as a comma-delimited list:
$OSM_CNTK/scripts/install-osmdb.sh -p project -i instance -s $SPEC_PATH -b the_newly_created_partition_1,the_newly_created_partition_2 -c 11
To copy statistics from an existing partition rather than use the defaults, specify the source partition in the -a parameter:
$OSM_CNTK/scripts/install-osmdb.sh -p project -i instance -s $SPEC_PATH -a existing_partition_name -b the_newly_created_partition_1,the_newly_created_partition_2 -c 11
Leveraging Oracle WebLogic Server Active GridLink
If you are using a RAC database for your OSM cloud native instance, by default, OSM uses WebLogic Multi-DataSource (MDS) configurations to connect to the database.
If you are licensed to use Oracle WebLogic Server Active GridLink (AGL) separately from your OSM license (consult any additional WebLogic licenses you possess that may apply), you can configure OSM cloud native to use AGL configurations where possible. This will better distribute load across RAC nodes. To enable AGL, set the following in the specification:
db:
  aglLicensed: true
Managing Logs
OSM cloud native generates traditional textual logs. By default, these log files are generated in the managed server pod, but can be re-directed to a Persistent Volume Claim (PVC) supported by the underlying technology that you choose. See "Setting Up Persistent Storage" for details.
# The storage volume must specify the PVC to be used for persistent storage. If enabled, the log, metric and JFR data will be directed here.
storageVolume:
  enabled: true
  pvc: storage-pvc
- The OSM application logs can be found at: pv_directory/project-instance/logs
- The OSM DB Installer logs can be found at: pv_directory/project-instance/db-installer
Managing OSM Cloud Native Metrics
All managed server pods running OSM cloud native carry annotations added by WebLogic Operator and an additional annotation by OSM cloud native.
osmcn.metricspath: /OrderManagement/metrics
osmcn.metricsport: 8001
prometheus.io/scrape: true
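In pod form, the three annotations listed above appear as follows (as shown by kubectl describe pod; other annotations added by WebLogic Operator are omitted here):

```yaml
metadata:
  annotations:
    osmcn.metricspath: /OrderManagement/metrics
    osmcn.metricsport: "8001"
    prometheus.io/scrape: "true"
```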
Configuring Prometheus for OSM Cloud Native Metrics
Configure the scrape job in Prometheus as follows:
- job_name: 'osmcn'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - source_labels: ['__meta_kubernetes_pod_annotationpresent_osmcn_metricspath']
      action: 'keep'
      regex: 'true'
    - source_labels: [__meta_kubernetes_pod_annotation_osmcn_metricspath]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: ['__meta_kubernetes_pod_annotation_prometheus_io_scrape']
      action: 'drop'
      regex: 'false'
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_osmcn_metricsport]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: pod_name
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: namespace
Note:
OSM cloud native has been tested with Prometheus and Grafana installed and configured using the Helm chart prometheus-community/kube-prometheus-stack, available at: https://prometheus-community.github.io/helm-charts.
Viewing OSM Cloud Native Metrics Without Using Prometheus
http://instance.project.domain_Name:LoadBalancer_Port/OrderManagement/metrics
By default, domain_Name is set to osm.org and can be modified in project.yaml. This URL provides metrics only for the managed server that serves the request; it does not provide consolidated metrics for the entire cluster. Only Prometheus Query and Grafana dashboards can provide consolidated metrics.
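As a sketch, the per-server metrics URL above can be assembled from its parts and fetched with curl. The instance, project, and port values below are hypothetical placeholders; only the domain default (osm.org) and the /OrderManagement/metrics path come from this document:

```shell
# Hypothetical values -- replace with your own instance, project, and
# external load balancer port.
INSTANCE=quick
PROJECT=sr
DOMAIN_NAME=osm.org          # default; can be modified in project.yaml
LB_PORT=30305                # hypothetical load balancer port

METRICS_URL="http://$INSTANCE.$PROJECT.$DOMAIN_NAME:$LB_PORT/OrderManagement/metrics"
echo "$METRICS_URL"
# curl -s "$METRICS_URL"     # returns metrics for one managed server only
```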
Viewing OSM Cloud Native Metrics in Grafana
OSM cloud native metrics scraped by Prometheus can be made available for further processing and visualization. The OSM cloud native toolkit comes with sample Grafana dashboards to get you started with visualizations.
Import the dashboard JSON files from $OSM_CNTK/samples/grafana into your Grafana environment.
- OSM by Instance: Provides a view of OSM cloud native metrics for one or more instances in the selected project namespace.
- OSM by Server: Provides a view of OSM cloud native metrics for one or more managed servers for a given instance in the selected project namespace.
- OSM by Order Type: Provides a view of OSM cloud native metrics for one or more order types for a given cartridge version in the selected instance and project namespace.
Exposed OSM Order Metrics
The following OSM metrics are exposed via Prometheus APIs.
Note:
- All metrics are per managed server. Prometheus Query Language can be used to combine or aggregate metrics across all managed servers.
- All metric values are short-lived and indicate the number of orders (or tasks) in a particular state since the managed server was last restarted.
- When a managed server restarts, all metrics are reset to 0. These metrics are not exact values; exact values can be queried through OSM APIs such as the Web Services and XML APIs.
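Because each managed server reports its own counters, Prometheus Query Language can aggregate them across the cluster. The following is a minimal sketch using the Prometheus HTTP API; the Prometheus endpoint and the namespace value are hypothetical:

```shell
# Hypothetical Prometheus endpoint and project namespace.
PROM_URL=http://prometheus.example.com:9090
# Sum orders created across all managed servers, grouped by order type.
QUERY='sum(osm_orders_created{namespace="quick"}) by (order_type)'
echo "$QUERY"
# Uncomment to run against a live Prometheus server:
# curl -s --get "$PROM_URL/api/v1/query" --data-urlencode "query=$QUERY"
```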
Order Metrics
The following table lists order metrics exposed via Prometheus APIs.
Table 8-4 Order Metrics Exposed via Prometheus APIs
Name | Type | Help Text | Notes |
---|---|---|---|
osm_orders_created | Counter | Counter for the number of Orders Created | N/A |
osm_orders_completed | Counter | Counter for the number of Orders Completed | N/A |
osm_orders_failed | Counter | Counter for the number of Orders Failed | N/A |
osm_orders_cancelled | Counter | Counter for the number of Orders Cancelled | N/A |
osm_orders_aborted | Counter | Counter for the number of Orders Aborted | N/A |
osm_orders_in_progress | Gauge | Gauge for the number of orders currently in the In Progress state | N/A |
osm_orders_amending | Gauge | Gauge for the number of orders currently in the Amending state | N/A |
osm_short_lived_orders | Histogram | Histogram that tracks the duration of all orders in seconds with buckets for 1 second, 3 seconds, 5 seconds, 10 seconds, 1 minute, 3 minutes, 5 minutes, and 15 minutes. Enables focus on short-lived orders. | Buckets for 1 second, 3 seconds, 5 seconds, 10 seconds, 1 minute, 3 minutes, 5 minutes, and 15 minutes. |
osm_medium_lived_orders | Histogram | Histogram that tracks the duration of all orders in minutes with buckets for 5 minutes, 15 minutes, 1 hour, 12 hours, 1 day, 3 days, 1 week, and 2 weeks. Enables focus on medium-lived orders. | Buckets for 5 minutes, 15 minutes, 1 hour, 12 hours, 1 day, 3 days, 7 days, and 14 days. |
osm_long_lived_orders | Histogram | Histogram that tracks the duration of all orders in days with buckets for 1 week, 2 weeks, 1 month, 2 months, 3 months, 6 months, 1 year, and 2 years. Enables focus on long-lived orders. | Buckets for 7 days, 14 days, 30 days, 60 days, 90 days, 180 days, 365 days, and 730 days. |
osm_order_cache_entries_total | Gauge | Gauge for the number of entries in the cache of type order, orchestration, historical order, closed order, and redo order | N/A |
osm_order_cache_max_entries_total | Gauge | Gauge for the maximum number of entries in the cache of type order, orchestration, historical order, closed order, and redo order | N/A |
Labels For All Order Metrics
The following table lists labels for all order metrics.
Table 8-5 Labels for All Order Metrics
Label Name | Sample Value | Notes | Source of the Label |
---|---|---|---|
cartridge_name_version | SimpleRabbits_1.7.0.1.0 | Combined Cartridge Name and Version | OSM Metric Label Name/Value |
order_type | SimpleRabbitsOrder | OSM Order Type | OSM Metric Label Name/Value |
server_name | ms1 | Name of the Managed Server | OSM Metric Label Name/Value |
instance | 10.244.0.198:8081 | Indicates the Pod IP and Pod port from which this metric is being scraped. | Prometheus Kubernetes SD |
job | osmcn | Job name in the Prometheus configuration that scraped this metric. | Prometheus Kubernetes SD |
namespace | quick | Project Namespace | Prometheus Kubernetes SD |
pod_name | quick-sr-ms1 | Name of the Managed Server Pod | Prometheus Kubernetes SD |
weblogic_clusterName | c1 | OSM Cloud Native WebLogic Cluster Name | WebLogic Operator Pod Label |
weblogic_clusterRestartVersion | v1 | OSM Cloud Native WebLogic Operator Cluster Restart Version | WebLogic Operator Pod Label |
weblogic_createdByOperator | true | WebLogic Operator Pod Label to identify operator created pods | WebLogic Operator Pod Label |
weblogic_domainName | domain | Name of the WebLogic Domain | WebLogic Operator Pod Label |
weblogic_domainRestartVersion | v1 | OSM Cloud Native WebLogic Operator Domain Restart Version | WebLogic Operator Pod Label |
weblogic_domainUID | quick-sr | OSM Cloud Native WebLogic Operator Domain UID | WebLogic Operator Pod Label |
weblogic_modelInImageDomainZipHash | md5.3d1b561138f3ae3238d67a023771cf45.md5 | Image md5 hash | WebLogic Operator Pod Label |
weblogic_serverName | ms1 | WebLogic Operator Pod Label for Name of the Managed Server | WebLogic Operator Pod Label |
Task Metrics
The following metrics are captured for manual and automated task types only. Metrics are not currently captured for other task types.
Table 8-6 Task Metrics Captured for Manual or Automated Task Types Only
Name | Type | Help Text |
---|---|---|
osm_tasks_created | Counter | Counter for the number of Tasks Created |
osm_tasks_completed | Counter | Counter for the number of Tasks Completed |
Labels for all Task Metrics
A task metric has all the labels that an order metric has. In addition, a task metric has two more labels.
Table 8-7 Labels for All Task Metrics
Label | Sample Value | Notes | Source of Label |
---|---|---|---|
task_name | RabbitRunTask | Task Name | OSM Metric Label Name/Value |
task_type | A | A for Automated; M for Manual | OSM Metric Label Name/Value |
Managing WebLogic Monitoring Exporter (WME) Metrics
OSM cloud native provides a sample Grafana dashboard that you can use to visualize WebLogic metrics available from a Prometheus data source.
You use the WebLogic Monitoring Exporter (WME) tool to expose WebLogic Server metrics. WebLogic Monitoring Exporter is part of the WebLogic Kubernetes Toolkit. It is an open source project, based at: https://github.com/oracle/weblogic-monitoring-exporter. You can include WME in your OSM cloud native images. Once an OSM cloud native image with WME is generated, creating an OSM cloud native instance with that image automatically deploys a WME WAR file to the WebLogic Server instances. While WME metrics are available through WME RESTful Management API endpoints, OSM cloud native relies on Prometheus to scrape and expose these metrics. This version of OSM supports WME 1.3.0. See the WME documentation for details on configuration and exposed metrics.
Generating the WME WAR File
```shell
mkdir -p ~/wme
cd ~/wme
curl -x $http_proxy -L https://github.com/oracle/weblogic-monitoring-exporter/releases/download/v1.3.0/wls-exporter.war -o wls-exporter.war
curl -x $http_proxy https://raw.githubusercontent.com/oracle/weblogic-monitoring-exporter/v1.3.0/samples/kubernetes/end2end/dashboard/exporter-config.yaml -o exporter-config.yaml
jar -uvf wls-exporter.war exporter-config.yaml
```
Deploying the WME WAR File
After the WME WAR file is generated and updated, you can deploy it as a custom application archive.
For details about deploying entities, see "Deploying Entities to an OSM WebLogic Domain".
```yaml
appDeployments:
  Application:
    'wls-exporter':
      SourcePath: 'wlsdeploy/applications/wls-exporter.war'
      ModuleType: war
      StagingMode: nostage
      PlanStagingMode: nostage
      Target: '@@PROP:ADMIN_NAME@@,@@PROP:CLUSTER_NAME@@'
```
Enabling Prometheus for WebLogic Monitoring Exporter (WME) Metrics
```yaml
# For the AdminServer pod:
prometheus.io/path: /wls-exporter/metrics
prometheus.io/port: 7001
prometheus.io/scrape: true

# For Managed Server pods:
prometheus.io/path: /wls-exporter/metrics
prometheus.io/port: 8001
prometheus.io/scrape: true
```
Configuring the Prometheus Scrape Job for WME Metrics
Note:
In the basic_auth section, specify the WebLogic username and password.
```yaml
- job_name: 'basewls'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: ['__meta_kubernetes_pod_annotation_prometheus_io_scrape']
    action: 'keep'
    regex: 'true'
  - source_labels: [__meta_kubernetes_pod_label_weblogic_createdByOperator]
    action: 'keep'
    regex: 'true'
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: pod_name
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
  basic_auth:
    username: weblogic_username
    password: weblogic_password
```
Viewing WebLogic Monitoring Exporter Metrics in Grafana
WebLogic Monitoring Exporter metrics scraped by Prometheus can be made available for further processing and visualization. The OSM cloud native toolkit comes with sample Grafana dashboards to get you started with visualizations. The OSM and WebLogic by Server sample dashboard provides a combined view of OSM cloud native and WebLogic Monitoring Exporter metrics for one or more managed servers for a given instance in the selected project namespace.
Import the dashboard JSON file from $OSM_CNTK/samples/grafana into your Grafana environment, selecting Prometheus as the data source.