16 Configuring Messaging

This chapter describes how messaging is supported in WebLogic Server Multitenant (MT) and includes:

  • Persistent stores (file and JDBC stores)

  • JMS servers

  • Store-and-Forward (SAF) agents

  • Path services

  • Messaging bridges

  • JMS system modules and JMS application modules

  • JMS connection pools

This chapter also describes approaches for accessing partitioned JMS resources from other partitions in the same WebLogic server instance or cluster, and from remote client or server JVMs.

Prior to configuring JMS in a multitenant environment, it is assumed that you are familiar with, and have already created, the underlying WebLogic Server MT artifacts that a JMS configuration depends on, such as partitions, resource groups, resource group templates, and virtual targets.

This chapter also assumes familiarity with existing, non-MT WLS messaging configuration.

About Messaging Configuration Scopes

When working with WebLogic Server in non-partitioned environments, you can configure and deploy JMS artifacts at the domain level. Examples of JMS artifacts include persistent stores (file or JDBC stores), JMS servers, store-and-forward agents, path services, and messaging bridges, which are configured directly in a WebLogic Server domain config.xml file using the PersistentStoreMBean, JMSServerMBean, SAFAgentMBean, PathServiceMBean, and MessagingBridgeMBean JMX MBeans.

In addition, JMS resources, such as connection factories and destinations, are configured in an external descriptor file called a JMS module. JMS modules are most commonly configured as a JMS system resource (using a JMSSystemResourceMBean). Less commonly, JMS modules can be embedded as a standalone or application scoped XML file that is part of a deployed application (called standalone and application scoped modules, respectively), or defined indirectly by Java EE 7 connection factory and destination annotations (which have the same basic semantics as external resources defined in an application scoped module).

When working in WebLogic Server MT, all of the above JMS artifacts can be defined and deployed in the following scopes:

  • Domain scoped—using the exact same configuration as in a non-partitioned WLS environment.

  • Resource group scoped—as part of a resource group that is created at the partition level or at the domain level.

  • Resource group template scoped—as part of a resource group template that is created at the domain level.

A resource group can optionally inherit a resource group template scoped JMS configuration. No more than one resource group per partition can reference a particular resource group template, and similarly, no more than one domain level resource group can reference a resource group template.

To summarize, the domain configuration structure for JMS messaging artifacts is as follows:

  • Domain-level JMS configuration

  • Domain-level resource group with JMS configuration

  • Domain-level resource group template with JMS configuration

  • Domain-level resource group based on a resource group template

  • Partition

    • Partition-level resource group with JMS configuration

    • Partition-level resource group based on a resource group template

About Configuration Validation and Targeting Rules

Validation and targeting rules ensure that WebLogic Server MT JMS configuration is isolated, self-contained, and easy to manage. These rules help achieve the following goals:

  • A resource group can shut down or migrate independently without causing failures in other resource groups or domain-level resources.

  • A resource group template is a fully encapsulated, independent configuration unit without direct dependencies on resource groups, domain configuration, or other resource group templates.

  • The same configuration is valid regardless of whether a resource group is single server targeted, cluster targeted, or not targeted.

  • There is no change in behavior for any domain-level configuration that was valid in previous releases. For example, domain-level behavior outside of resource groups and resource group templates remains unchanged for backward compatibility.

One basic, high-level rule that helps accomplish these goals is that a JMS configuration MBean may only reference another configuration MBean that is in the same scope. For example, a resource group template-defined JMS server can only reference a store that is also defined in the same resource group template. These rules are enforced by configuration validation checks and by errors and warnings that are logged at runtime.

Configuring Messaging Components

The following sections describe considerations when configuring JMS artifacts in a multitenant environment.

Configuring JDBC or File Persistent Stores

Creating a persistent store is a required step before configuring a JMS server, SAF agent, or path service. This is because resource group and resource group template scoped JMS servers, SAF agents, and path services must reference an existing persistent store.

Creating a custom file or JDBC persistent store inside a resource group that is either scoped to a domain or to a partition is similar to creating a persistent store at the domain level. However, an additional step is that you must specify the scope. In the WLS Administration Console and Fusion Middleware Control (FMWC), there is a Scope drop-down menu in the first step of the creation process that lists the available scopes in which to create a persistent store. In WLST, you must create the persistent store using createPersistentStore on the owner MBean (the MBean for the domain, resource group, or resource group template).

The following Distribution and Migration Policy rules apply to all resource group and resource group template scoped persistent stores:

  • A resource group or resource group template scoped store that will be used to host JMS server distributed destinations or SAF agent imported destinations must specify a Distributed Distribution Policy (the default). This setting instantiates a store instance per WebLogic Server instance in a cluster. Furthermore, a resource group or resource group template scoped store with a Distributed Distribution Policy may optionally be configured with an On-failure or Always Migration Policy.

  • A resource group or resource group template scoped store that will be used by a path service or that will be used to host JMS server standalone (non-distributed) destinations must specify a Singleton Distribution Policy. This setting instantiates a single store instance in a cluster. Furthermore, a resource group or resource group template scoped store with a Singleton Distribution Policy must have either On-failure or Always as its Migration Policy instead of Off. Off is the default.

  • A cluster targeted store with an On-failure or Always Migration Policy requires that the cluster be configured with either database leasing or cluster leasing where database leasing is recommended as a best practice.

These policies control the distribution and high availability behavior of stores and any JMS artifacts that target a cluster. For more information, see "Simplified JMS Cluster and High Availability Configuration" in Administering JMS Resources for Oracle WebLogic Server.
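For illustration, the following Java JMX sketch creates a file store with a Distributed Distribution Policy in a partition resource group. This is a minimal sketch under stated assumptions, not a definitive procedure: the host, port, credentials, and all configuration names are illustrative; the navigation and creation operation names, attribute names, and policy values are assumptions based on the owner MBean pattern described above (createFileStore is the file store variant of the createPersistentStore operation); and the edit-session plumbing follows the standard WebLogic Edit MBean server pattern. WLST scripts perform the same steps with less boilerplate.

   import java.util.Hashtable;
   import javax.management.Attribute;
   import javax.management.MBeanServerConnection;
   import javax.management.ObjectName;
   import javax.management.remote.JMXConnector;
   import javax.management.remote.JMXConnectorFactory;
   import javax.management.remote.JMXServiceURL;
   import javax.naming.Context;

   public class CreateScopedFileStore {
      public static void main(String[] args) throws Exception {
         // Connect to the Edit MBean server on the Administration Server.
         JMXServiceURL url = new JMXServiceURL("t3", "adminhost", 7001,
               "/jndi/weblogic.management.mbeanservers.edit");
         Hashtable<String, String> env = new Hashtable<>();
         env.put(Context.SECURITY_PRINCIPAL, "weblogic");    // illustrative credentials
         env.put(Context.SECURITY_CREDENTIALS, "password");
         env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
               "weblogic.management.remote");
         JMXConnector connector = JMXConnectorFactory.connect(url, env);
         MBeanServerConnection conn = connector.getMBeanServerConnection();

         // Start an edit session and obtain the root DomainMBean of the edit tree.
         ObjectName editService = new ObjectName(
               "com.bea:Name=EditService,Type=weblogic.management.mbeanservers.edit.EditServiceMBean");
         ObjectName cfgMgr = (ObjectName) conn.getAttribute(editService, "ConfigurationManager");
         ObjectName domain = (ObjectName) conn.invoke(cfgMgr, "startEdit",
               new Object[] { 60000, 120000 },
               new String[] { "java.lang.Integer", "java.lang.Integer" });

         // Navigate to the owner MBean: resource group "rgA" in partition "partitionA".
         ObjectName partition = (ObjectName) conn.invoke(domain, "lookupPartition",
               new Object[] { "partitionA" }, new String[] { "java.lang.String" });
         ObjectName rg = (ObjectName) conn.invoke(partition, "lookupResourceGroup",
               new Object[] { "rgA" }, new String[] { "java.lang.String" });

         // Create the store in the resource group scope, and set its policies
         // to host distributed destinations with high availability.
         ObjectName store = (ObjectName) conn.invoke(rg, "createFileStore",
               new Object[] { "MyFileStore" }, new String[] { "java.lang.String" });
         conn.setAttribute(store, new Attribute("DistributionPolicy", "Distributed"));
         conn.setAttribute(store, new Attribute("MigrationPolicy", "On-Failure"));

         // Save and activate the configuration changes.
         conn.invoke(cfgMgr, "save", null, null);
         conn.invoke(cfgMgr, "activate", new Object[] { 120000L },
               new String[] { "java.lang.Long" });
         connector.close();
      }
   }

The sketches in the following sections reuse conn, rg, and store from this example.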

The following are the enforced configuration validation and targeting rules for both file and JDBC stores:

  • A resource group or resource group template-level JMS server, SAF agent, or path service must reference a configured store; they cannot reference null.

  • A resource group template scoped JMS server, SAF agent, or path service may only reference a store that is defined in the same resource group template. It cannot reference a store defined at the child resource group level.

  • A resource group scoped JMS server, SAF agent, or path service may only reference a store that is defined in the same resource group, or in the resource group template optionally referenced by the resource group.

  • A domain-level JMS server, SAF agent, or path service may only reference a store in the domain scope.

The following are additional rules that are specific to JDBC stores:

  • A resource group template scoped JDBC store may only reference a data source that is in the same resource group template.

  • A resource group scoped JDBC store may only reference a data source that is in the same resource group, or in the resource group template, optionally referenced by the resource group.

  • A domain scoped JDBC store may only reference a data source in the domain scope.

Configuring JMS Servers

Creating a JMS server that is scoped to a domain-level resource group or in a partition is similar to creating a JMS server at the domain level. One additional step is to specify the scope. In the WLS Administration Console and Fusion Middleware Control (FMWC), there is a Scope drop-down menu in the first step of the creation process that lists the available scopes in which to create a JMS server. In WLST, you must create the JMS server using createJMSServer on the owner MBean (the MBean for the domain, resource group, or resource group template).

Another required step is to configure the JMS server so that it references a persistent store that is configured in the same scope as the JMS server.

Finally, if the JMS server is going to be used to host distributed destinations, its store must be configured with a Distributed Distribution Policy. If the JMS server is going to host standalone (non-distributed) destinations, the store must be configured with a Singleton Distribution Policy.
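Continuing the hedged JMX sketch from Configuring JDBC or File Persistent Stores (conn is the edit-session connection, rg the resource group ObjectName, and store a same-scope store with a Distributed Distribution Policy; the operation and attribute names are assumptions based on the owner MBean pattern):

   // Create a JMS server in the resource group scope and reference the
   // same-scope store; the store reference may not be left null.
   ObjectName jmsServer = (ObjectName) conn.invoke(rg, "createJMSServer",
         new Object[] { "MyJMSServer" }, new String[] { "java.lang.String" });
   conn.setAttribute(jmsServer, new Attribute("PersistentStore", store));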

Configuring Store-and-Forward (SAF) Agents

Creating a SAF agent that is scoped to a domain-level resource group or in a partition is similar to creating a SAF agent at the domain level. One additional step is to specify the scope. In the WLS Administration Console and Fusion Middleware Control (FMWC), there is a Scope drop-down menu in the first step of the creation process that lists the available scopes in which to create an SAF agent. In WLST, you must create the SAF agent using createSAFAgent on the owner MBean (the MBean for the domain, resource group, or resource group template).

Another required step is to configure the SAF agent so that it references a persistent store that is configured in the same scope as the SAF agent. This store must be configured with a Distributed Distribution Policy (the default).
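Continuing the hedged JMX sketch from the previous sections (same assumptions about operation and attribute names):

   // Create a SAF agent in the resource group scope; its store must be in
   // the same scope and use the (default) Distributed Distribution Policy.
   ObjectName safAgent = (ObjectName) conn.invoke(rg, "createSAFAgent",
         new Object[] { "MySAFAgent" }, new String[] { "java.lang.String" });
   conn.setAttribute(safAgent, new Attribute("Store", store));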

Note:

A resource group or resource group template-level SAF agent with service type Receiving Only is not allowed. An exception will be thrown or an error message will be logged on an attempt to set up such a configuration. This mode is specific to "old style" JAX-RPC web services reliable messaging. Use JAX-WS RM instead.

Configuring Path Services to Support Using Unit-of-Order with Distributed Destinations

A path service must be configured in a resource group or resource group template if the resource group or resource group template also configures any distributed destinations that will be used to host Unit-of-Order (UOO) messages. In addition, such distributed destinations must be configured with their Unit of Order routing policy set to PathService instead of Hash, because hash-based UOO routing is not supported in a resource group or resource group template scope. Resource group or resource group template scoped distributed destinations only use a path service that is configured in the same resource group or resource group template for routing UOO messages. Attempts to send UOO messages to a resource group or resource group template scoped distributed destination that does not have a PathService Unit of Order routing policy fail with an exception.

Creating a path service that is scoped to a domain-level resource group or in a partition is similar to creating a path service at the domain level. One additional step is to specify the scope. In the WLS Administration Console and Fusion Middleware Control (FMWC), there is a Scope drop-down menu in the first step of the creation process that lists the available scopes in which to create a path service. In WLST, you must create the path service using createPathService on the owner MBean (the MBean for the domain, resource group, or resource group template).

Another required step is to configure the path service so that it references a persistent store that is configured in the same scope as the path service. This store must be configured with a Singleton Distribution Policy and an Always or On-Failure Migration Policy.
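Continuing the hedged JMX sketch (same assumptions about operation, attribute, and policy names), a path service needs its own singleton store:

   // A path service requires a same-scope store with a Singleton Distribution
   // Policy and an On-Failure or Always Migration Policy.
   ObjectName singletonStore = (ObjectName) conn.invoke(rg, "createFileStore",
         new Object[] { "MySingletonStore" }, new String[] { "java.lang.String" });
   conn.setAttribute(singletonStore, new Attribute("DistributionPolicy", "Singleton"));
   conn.setAttribute(singletonStore, new Attribute("MigrationPolicy", "On-Failure"));

   ObjectName pathService = (ObjectName) conn.invoke(rg, "createPathService",
         new Object[] { "MyPathService" }, new String[] { "java.lang.String" });
   conn.setAttribute(pathService, new Attribute("PersistentStore", singletonStore));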

Configuring Messaging Bridges

Creating a messaging bridge that is scoped to a domain-level resource group or in a partition is similar to creating a messaging bridge at the domain level. One additional step is to specify the scope. In the WLS Administration Console and Fusion Middleware Control (FMWC), there is a Scope drop-down menu in the first step of the creation process that lists the available scopes in which to create a messaging bridge. In WLST, you must create the messaging bridge using createMessagingBridge on the owner MBean (the MBean for the domain, resource group, or resource group template).

The following Distribution and Migration Policy rules apply to all resource group or resource group template scoped messaging bridges:

  • Specify a Distributed Distribution Policy (the default) on a bridge to cause a cluster targeted bridge to deploy an instance per server in a cluster. A messaging bridge with a Distributed Distribution Policy may optionally also configure an On-failure Migration Policy to add support for high availability.

  • Specify a Singleton Distribution Policy on a bridge to cause a cluster targeted bridge to limit itself to deploying one instance per cluster. A messaging bridge with a Singleton Distribution Policy must have an On-failure Migration Policy instead of Off. Off is the default.

  • A cluster targeted bridge with an On-failure Migration Policy requires that the cluster be configured with either database leasing or cluster leasing, where database leasing is recommended as a best practice.

These policies control the high availability behavior and distribution behavior of messaging bridges that target a cluster. For more information about distribution and migration policies, see "Simplified JMS Cluster and High Availability Configuration" in Administering JMS Resources for Oracle WebLogic Server.
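Continuing the hedged JMX sketch (same assumptions; the bridge destination operation name is assumed to mirror the domain-level createJMSBridgeDestination):

   // Bridge destinations must be defined in the same scope as the bridge,
   // consistent with the validation rules below.
   ObjectName src = (ObjectName) conn.invoke(rg, "createJMSBridgeDestination",
         new Object[] { "SourceDest" }, new String[] { "java.lang.String" });
   ObjectName tgt = (ObjectName) conn.invoke(rg, "createJMSBridgeDestination",
         new Object[] { "TargetDest" }, new String[] { "java.lang.String" });

   ObjectName bridge = (ObjectName) conn.invoke(rg, "createMessagingBridge",
         new Object[] { "MyBridge" }, new String[] { "java.lang.String" });
   conn.setAttribute(bridge, new Attribute("SourceDestination", src));
   conn.setAttribute(bridge, new Attribute("TargetDestination", tgt));
   conn.setAttribute(bridge, new Attribute("DistributionPolicy", "Distributed"));
   conn.setAttribute(bridge, new Attribute("MigrationPolicy", "On-Failure"));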

The following are the configuration validation rules that are specific to a messaging bridge:

  • A resource group template scoped messaging bridge can only reference messaging bridge destinations in the same scope.

  • A resource group scoped messaging bridge can only reference messaging bridge destinations in the same resource group, or in the resource group template optionally referenced by the resource group.

  • A domain scoped messaging bridge may only reference messaging bridge destinations in the domain scope.

Configuring JMS System Resources and Application Scoped JMS Modules

Creating a JMS system resource that is scoped to a domain-level resource group or in a partition is similar to creating a JMS system resource at the domain level. One additional step is to specify the scope. In the WLS Administration Console and Fusion Middleware Control (FMWC), there is a Scope drop-down menu in the first step of the creation process that lists the available scopes in which to create a JMS system resource. In WLST, you must create the JMS system resource using createJMSSystemResource on the owner MBean (the MBean for the domain, resource group, or resource group template).
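Continuing the hedged JMX sketch (same assumptions about operation names):

   // Create an empty JMS system module in the resource group scope; queues,
   // topics, and connection factories can then be added to the module, for
   // example with the scope-aware JMSModuleHelper described in Admin Helpers.
   ObjectName jmsModule = (ObjectName) conn.invoke(rg, "createJMSSystemResource",
         new Object[] { "MyJMSSystemModule" }, new String[] { "java.lang.String" });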

Creating an application scoped JMS module that is scoped to a domain-level resource group or to a partition is similar to creating one at the domain level. An application deployment may be a standalone JMS module file, or an application EAR file that in turn contains JMS module files. One additional step is to specify the resource group or resource group template scope. For more information, see Deploying Applications.

Note:

If you create a JMS server and deploy an application that specifies a submodule target to this JMS server all within the same configuration edit session, the deployment may not succeed. Oracle recommends that you configure the JMS server in a separate edit session.

Note:

Oracle strongly recommends configuring JMS using system resource modules instead of embedding the configuration in application resource modules. Unlike application scoped configuration, system resource configuration can be dynamically tuned and easily monitored by an administrator or developer using the WLS Administration Console, WLST, or MBeans.

The following are the configuration validation and targeting rules associated with resources in a resource group or resource group template scoped JMS module.

Subdeployment Definitions

  • A resource group or resource group template scoped subdeployment can only target nothing (null), a single JMS server, or a single SAF agent.

  • A resource group template scoped subdeployment can only reference a JMS server or SAF agent that is defined in the same resource group template.

  • A resource group scoped subdeployment can only reference a JMS server or SAF agent that is defined in the same resource group or in the resource group template optionally referenced by the resource group.

JMS Module Resources

The following listing shows the targeting rules for each JMS module resource type, both when the resource uses a subdeployment and when it uses default targeting.

  • Standalone (Singleton) Destination

    • Using a subdeployment: May only target a subdeployment that targets a JMS server which in turn references a store with a Singleton Distribution Policy.

    • Using default targeting: Deploys only if there is a single configured JMS server in the same resource group or resource group template scope that references a Singleton Distribution Policy store, in which case the destination deploys on that JMS server. JMS servers that reference Distributed Distribution Policy stores are ignored, as are JMS servers defined outside the scope (for example, at the domain level or in another resource group or resource group template).

  • Uniform Distributed Destination*

    • Using a subdeployment: May only target a subdeployment that targets a JMS server which in turn references a store with a Distributed Distribution Policy.

    • Using default targeting: Deploys only if there is a single configured JMS server in the same resource group or resource group template scope that references a Distributed Distribution Policy store, in which case the destination deploys on that JMS server. JMS servers that reference Singleton Distribution Policy stores are ignored, as are JMS servers defined outside the scope (for example, at the domain level or in another resource group or resource group template).

  • SAF Imported Destination

    • Using a subdeployment: May only target a subdeployment that targets a SAF agent.

    • Using default targeting: Deploys only if there is a single configured SAF agent in the same resource group or resource group template scope. SAF agents defined outside the scope (for example, at the domain level or in another resource group or resource group template) are ignored.

  • Connection Factory

    • Using a subdeployment: May target any subdeployment.

    • Using default targeting: Deploys to all WLS server instances that are encompassed by the resource group's target.

  • Foreign Server

    • Using a subdeployment: May only target a subdeployment that targets a JMS server which in turn references a store with a Distributed Distribution Policy. Best practice is to use default targeting instead.

    • Using default targeting: Deploys to all WLS server instances that are encompassed by the resource group's target.

* Note: Resource group or resource group template scoped uniform distributed topics must specify a Partitioned Forwarding Policy; that is, they must be a Partitioned Distributed Topic (PDT). Be aware that the word "Partitioned" in a PDT does not have the same meaning as the word "partition" in a WebLogic Server MT partition. PDTs and WebLogic Server MT partitions are two independent concepts. For information about the trade-offs for using PDTs, see Limitations.

Configuring Partition Specific JMS Overrides

Resource group template scoped JMS configuration artifacts might not be complete: they can lack, or have incorrect values for, settings that are specific to the partitions that use the resource group template. Each partition may need to specify the appropriate override values to customize the template-derived values for correct deployment to the partition runtime. Partition-specific, resource group scoped JMS configuration can be customized on a per-partition basis using resource deployment plans or application deployment plans. In addition, JMS foreign server configuration within a JMS system module can be customized using JMSSystemResourceOverrideMBeans.

Resource overriding allows system administrators to customize JMS resources and other resources such as data sources at the partition level. If you create a partition with a resource group that extends a resource group template, you can override settings for certain resources defined in that resource group template. If you create a resource group within the partition that does not extend a resource group template and then create resources within this resource group, you don't need overrides; you can just set partition-specific values for these resources.

Overrides are used mainly when there is a common definition for a resource, such as in a resource group template, that needs each partition that uses the resource to isolate its remotely stored state. For example, the same JMS server, JDBC store, data source, and JMS module configuration can be deployed to multiple partitions in the same cluster by configuring them in a single resource group template and configuring a resource group in each partition to reference the resource group template. The partition resource groups can then be overridden on a per-partition basis to ensure that their respective data sources connect to different databases or to different schemas within the same database.

In detail, system administrators can override resource definitions in partitions using the following specific techniques:

  • Resource override configuration MBeans—a configuration MBean that exposes a subset of attributes of an existing resource configuration MBean. Any attribute set on an instance of an overriding configuration MBean replaces the value of that attribute in the corresponding resource configuration MBean instance. Foreign JMS server and related configuration artifacts in a JMS system module can use override MBeans to override the user, password, and provider URL settings. If you use override MBeans, you must define a separate override MBean for each corresponding foreign JMS server and its related deployment beans. Configuration changes to these attributes that are made at runtime, after a JMS module has already been deployed, require that the partition or JVM be restarted for the changes to take effect.

  • Resource deployment plans—an XML file which identifies arbitrary configured resources within a partition and overrides attribute settings on those resources. Persistent stores, JMS servers, SAF agents, messaging bridges, bridge destinations, and path services use the config-resource-override element in a resource deployment plan, while JMS resources in a JMS system module, such as queues, topics, and connection factories, use the external-resource-override element.

  • Partition-specific application deployment plans—similar to existing application deployment plans, this allows administrators to specify a partition-specific application deployment plan for each application deployment in a partition. For information about partition-specific application deployment plans, see Using Partition-Specific Deployment Plans.

Administrators can combine any of these resource overriding techniques. The system applies them in the following order of ascending priority:

  • config.xml and external descriptors, including partition-specific application deployment plans.

  • Resource deployment plans.

  • Overriding configuration MBeans.

    If an attribute is referenced by both a resource deployment plan and an overriding configuration MBean, the overriding configuration MBean takes precedence.

For more information about overrides, see Configuring Resource Overrides.

Accessing Partition Scoped Messaging Resources Using JNDI

In order to access JMS resources in a partition, an application first needs to establish a JNDI initial context to that partition. Once the context is created, it is bound to the partition's namespace, and all subsequent JNDI operations occur within the scope of the partition. When a context is created with the java.naming.provider.url property set, JNDI determines the partition from the provider URL value; four different URL forms can associate the context with a particular partition.

In addition, an existing context can be used to reference a resource in another partition by prepending special scoping strings to JNDI names.

Each of these methods is described in the following sections.

Specifying No URL

An application that is running in a partition on a WebLogic Server instance can access JMS resources in its own local partition simply by creating a local initial context without specifying any provider URL. This approach is the best practice for creating locally scoped contexts.
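For example, the following hedged sketch runs inside an application that is deployed to a partition (the JNDI names are illustrative):

   import javax.jms.ConnectionFactory;
   import javax.jms.Queue;
   import javax.naming.Context;
   import javax.naming.InitialContext;

   // No provider URL: the context binds to the local partition's JNDI namespace.
   Context ctx = new InitialContext();
   ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyCF");
   Queue queue = (Queue) ctx.lookup("jms/MyQueue");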

Specifying a Partition Virtual Host or Partition URI

If a context URL matches a virtual host URL or URI that is configured for a partition, then JNDI creates the context for that partition and all requests from the context are delegated to the partition's JNDI name space.

A JMS application can therefore access a WLS JMS resource that is running in a different JVM or WLS cluster using the t3 or HTTP protocol by supplying a URL of the form:

  • t3://virtualhost:port

  • t3://host:port/URI

Note:

A misspelled or non-existent URI may cause a context to scope to the domain level without warning.
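For example, a hedged sketch of creating a context against a partition URI (the host, port, and URI are illustrative and must match the partition's configured virtual target):

   import java.util.Hashtable;
   import javax.naming.Context;
   import javax.naming.InitialContext;

   Hashtable<String, String> env = new Hashtable<>();
   env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
   env.put(Context.PROVIDER_URL, "t3://host:7001/partition1"); // t3://host:port/URI form
   Context ctx = new InitialContext(env);
   // All subsequent lookups on ctx resolve in the partition's JNDI namespace.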

Specifying a Dedicated Port URL

It is possible to dedicate a specific port or address to a channel in a partition, in which case the URL format becomes t3://host:port.

This is the only supported method for pre-12.2.1 clients to interoperate with partition scoped resources.

For more information, see Configuring Virtual Targets.

Note:

This method doesn't currently support SSL when used for interoperability with previous releases.

Local Cross-Partition Use Cases Using local: URLs or Decorated JNDI Names

An application in one partition can access another partition on the same WebLogic Server instance or in the same cluster using one of the URLs described in the previous section.

However, to access another partition that resides on the same server instance or cluster more efficiently, without needing to specify a host, port, or URI, an application has the following options.

  • Create a context with a local: protocol URL:

    • local://—Creates the context in the current scope, which can be either a partition or the domain.

    • local://?partitionName=DOMAIN—Creates the context on the domain.

    • local://?partitionName=partition_name—Creates the context on the partition partition_name.

  • Create a context without specifying a URL, and then prefix an explicit scope when specifying a JNDI name:

    • domain:<JNDIName>—Looks up the JNDI entry in the domain level.

    • partition:<partition_name>/<JNDIName>—Looks up the JNDI entry in the specified partition.
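The following hedged sketch shows both options (the partition names and JNDI names are illustrative):

   import java.util.Hashtable;
   import javax.naming.Context;
   import javax.naming.InitialContext;

   // Option 1: a local: provider URL.
   Hashtable<String, String> env = new Hashtable<>();
   env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
   env.put(Context.PROVIDER_URL, "local://?partitionName=partitionB");
   Context partitionCtx = new InitialContext(env);

   // Option 2: decorated JNDI names on a plain local context.
   Context ctx = new InitialContext();
   Object domainCf = ctx.lookup("domain:jms/CommonCF");              // domain-level entry
   Object otherCf  = ctx.lookup("partition:partitionB/jms/OtherCF"); // entry in partitionB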

Partition Associations in JMS

The following sections describe various JMS partition associations.

Partition Association Between Connection Factories and Their Connections or JMS Contexts

JMS client connections and JMS contexts are permanently associated with the partition from which their connection factory was obtained, and will not change their partition based on the partition associated with the current thread.

Partition Association with Asynchronous Callbacks

When JMS pushes messages or exceptions to an asynchronous listener, or similarly pushes events to a destination availability listener or an asynchronous send completion listener, the listener's local partition ID (instead of the destination's partition) will be associated with the callback thread. The local partition ID is the partition associated with the thread that created the asynchronous listener.

Connection Factories and Destinations Need Matching Scopes

A connection factory can only interact with a destination defined in the same partition as the connection factory. For example, a QueueBrowser, MessageConsumer/JMSConsumer, TopicSubscriber, or MessageProducer/JMSProducer client object can only communicate with a destination if the connection factory that was used to create these client objects was defined in the same partition as the destination. Furthermore, a connection factory can only interact with a destination that is obtained from the same cluster or server JVM as the connection factory.

Temporary Destination Scoping

Prior to the 12.2.1 release, JMS servers could only be deployed at the domain level and a temporary destination could only be hosted by JMS servers that both:

  • Set Hosting Temporary Destinations to true (the default).

  • Are hosted on the same WLS server instance or in the same cluster as the connection factory used to create the temporary destination.

The behavior for creating a temporary destination in WebLogic Server MT is:

  • As in WLS (non-MT), a temporary destination can be hosted by any JMS server that has Hosting Temporary Destinations enabled and that is hosted on the same WLS server instance or in the same cluster as the connection factory used to create the temporary destination.

  • If a JMS connection was created using a connection factory that is configured in a resource group or resource group template scope (including domain resource groups), its temporary destinations will only be hosted by a JMS server that is configured in the same scope.

  • If a JMS connection was created using a non resource group or resource group template scoped partition-level connection factory, it is allowed to create temporary destinations on any JMS server from the same partition as the connection factory. The non resource group or resource group template scoped partition-level connection factories are simply the default connection factories, for example the connection factories with JNDI names weblogic.jms.ConnectionFactory or weblogic.jms.XAConnectionFactory.

  • If a JMS connection was created using a non resource group or resource group template scoped domain-level connection factory, it is allowed to create temporary destinations on any JMS server at the domain level including JMS servers that are scoped to domain-level resource groups.

If no qualified JMS server is found within the allowed scope, an attempt to create a temporary destination fails with an exception.

Managing Partition Scoped Messaging Components

The following sections describe managing certain aspects of partition scoped messaging.

Runtime Monitoring and Control

All existing messaging runtime MBeans are supported for monitoring and controlling partition scoped JMS configuration and deployments, and are accessible to JMX-based management clients. Partition scoped JMS runtime MBeans are located under their corresponding PartitionRuntimeMBean instances.

For example:

  • The JMS Server, Connection, and PooledConnection runtime MBeans are placed in the runtime MBean hierarchy under serverRuntime/PartitionRuntimes/partition/JMSRuntime

  • The SAF runtime MBeans are placed in the runtime MBean hierarchy under serverRuntime/PartitionRuntimes/partition/SAFRuntimeMBean

  • The messaging bridge and path service runtime MBeans are placed in the runtime MBean hierarchy directly under serverRuntime/PartitionRuntimes/partition
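For example, the following hedged JMX sketch walks this hierarchy to print message counts for the JMS servers of one partition. It assumes conn is an MBeanServerConnection to a server's Runtime MBean server (obtained as in the edit-session sketch earlier in this chapter, but with the /jndi/weblogic.management.mbeanservers.runtime JNDI name); the partition name is illustrative:

   ObjectName runtimeService = new ObjectName(
         "com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean");
   ObjectName serverRuntime = (ObjectName) conn.getAttribute(runtimeService, "ServerRuntime");
   ObjectName[] partitions = (ObjectName[]) conn.getAttribute(serverRuntime, "PartitionRuntimes");
   for (ObjectName partition : partitions) {
      if ("partitionA".equals(conn.getAttribute(partition, "Name"))) {
         ObjectName jmsRuntime = (ObjectName) conn.getAttribute(partition, "JMSRuntime");
         for (ObjectName jmsServer : (ObjectName[]) conn.getAttribute(jmsRuntime, "JMSServers")) {
            System.out.println(conn.getAttribute(jmsServer, "Name") + " messages="
                  + conn.getAttribute(jmsServer, "MessagesCurrentCount"));
         }
      }
   }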

For more information, see Monitoring and Debugging Partitions.

Managing Partition Scoped Security

Security roles and policy definitions related to partition messaging configuration are the responsibility of the WLS system administrator.

WebLogic Server MT expands upon the traditional WebLogic Server security support in two significant ways:

  • Multiple realms—WebLogic Server MT supports multiple active security realms and allows each partition to execute against a different realm.

  • Identity domains—an identity domain is a logical namespace for users and groups, typically representing a discrete set of users and groups in the physical datastore. Identity domains are used to identify the users associated with particular partitions.

Otherwise, configuring security for partition scoped messaging is similar to setting up security for domain-level messaging. For more information, see Configuring Security.

Managing Transactions

All JTA transactions in a JVM are serviced by a single JTA transaction manager regardless of scope. Partition scoped XA resource manager names are automatically qualified with their partition name so that the resource managers are uniquely identified to the transaction manager and are managed independently. One example of a resource manager is a persistent store.

For more information on transaction configuration and restrictions, see Configuring Transactions.

Managing Partition and Resource Group Lifecycle Operations

A JMS artifact that is associated with a partition or resource group can be started and shut down by starting and shutting down its partition or resource group. Permissions to perform these operations are automatically supplied to the WLS system administrator and operator.

Partition Scoped JMS Diagnostic Image Sources

The messaging component does not support the ability to scope a diagnostic image to a partition. For more information, see Configuring Partition Scope Diagnostic Image Capture.

Partition Scoped JMS Logging

Partition scoped JMS log messages are qualified with the partition ID and name when the domain log format is not configured. For more information on logging, see Monitoring and Debugging Partitions.

Message Lifecycle Logging

When the optional JMS server or SAF agent message lifecycle logging is enabled for services that are scoped to a partition, the log files are placed in a different location than for domain-level services: they are written under the partition's directory. Furthermore, the log file names of the different runtime instances of a cluster-targeted JMS server or SAF agent are guaranteed to be distinct.

The expected new log locations are summarized below when configuring an absolute path, a relative path, or the default.


Domain level:

  • Nothing configured (default): <domain-log>/<log-suffix>/<instance>-jms.messages.log

  • Absolute path /<absolute-path>/<file> configured: /<absolute-path>/<instance>-<file>

  • Relative path /<relative-path>/<file> configured: <domain-log>/<relative-path>/<instance>-<file>

Partition:

  • Nothing configured (default): <partition-log>/<log-suffix>/<instance>-jms.messages.log

  • Absolute path /<absolute-path>/<file> configured: same as a relative path*

  • Relative path /<relative-path>/<file> configured: <partition-log>/<relative-path>/<instance>-<file>

* Note that partition scoped configuration treats absolute paths as relative paths.

<domain-log> = <domain>/servers/<wl-server-name>

<partition-log> = <domain>/partitions/<partition-name>/system/servers/<wl-server-name>

<log-suffix> = logs/jmsservers/<configured-name> (for JMS servers)

<log-suffix> = logs/safagents/<configured-name> (for SAF agents)

<instance> =

  • <configured-name>, when JMS server or SAF agent is single server targeted.

  • <configured-name>_<wl-server-name>, when cluster targeted and store's Distribution Policy=Distributed.

    (Note that an instance keeps its old name even as it migrates from one WLS server instance to another.)

  • <configured-name>_01, when cluster targeted and store's Distribution Policy=Singleton.

Admin Helpers

There are two JMS-specific Java administration utilities that provide helper methods for configuring and monitoring JMS resources.

The JMSModuleHelper contains helper methods for locating JMS runtime MBeans (for monitoring) as well as methods to manage (locate/create/delete) JMS module configuration entities (descriptor beans) in a given module.

The JMSRuntimeHelper provides convenient methods for obtaining the corresponding JMX runtime MBean given a JMS object such as a connection, destination, session, message producer, or message consumer.

In 12.2.1, enhanced versions of the helpers are provided to handle both domain scoped and resource group or resource group template scoped JMS resources.

The existing JMSRuntimeHelper is enhanced to be partition aware. When calling a runtime helper method, the specified JNDI context and the specified JMS object must belong to the same partition; otherwise an exception is thrown.
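For example, a hedged sketch (assuming the long-standing JMSRuntimeHelper.getJMSDestinationRuntimeMBean(Context, Destination) method; the JNDI name is illustrative):

   import javax.jms.Queue;
   import javax.naming.Context;
   import javax.naming.InitialContext;
   import weblogic.jms.extensions.JMSRuntimeHelper;
   import weblogic.management.runtime.JMSDestinationRuntimeMBean;

   // The context and the destination must come from the same partition.
   Context ctx = new InitialContext();
   Queue queue = (Queue) ctx.lookup("jms/MyQueue");
   JMSDestinationRuntimeMBean destRuntime =
         JMSRuntimeHelper.getJMSDestinationRuntimeMBean(ctx, queue);
   System.out.println(destRuntime.getMessagesCurrentCount());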

The enhanced JMSModuleHelper is scope-aware and contains the following interface and classes.

  • weblogic.jms.extensions.IJMSModuleHelper—an interface that defines all helper methods.

  • weblogic.jms.extensions.JMSModuleHelper—the pre-12.2.1 version of the JMS module helper, which only handles domain-level JMS resources.

  • weblogic.jms.extensions.JMSModuleHelperFactory—a factory which creates an instance of a JMS module helper that works in a specific scope given an initial context to the Administration Server, a scope type (domain, resource group or resource group template), and the name of the scope.

The following code snippet demonstrates how to create a JMS module helper for each of the three different scopes.

   Context ctx = getContext(); // get an initial JNDI context
   JMSModuleHelperFactory factory = new JMSModuleHelperFactory();

   // create a JMS module helper for the domain level
   IJMSModuleHelper domainHelper =
         factory.getHelper(ctx, IJMSModuleHelper.ScopeType.DOMAIN, null);

   // create a JMS module helper for resource group "MyResourceGroup"
   IJMSModuleHelper rgHelper =
         factory.getHelper(ctx, IJMSModuleHelper.ScopeType.RG, "MyResourceGroup");

   // create a JMS module helper for resource group template "MyResourceGroupTemplate"
   IJMSModuleHelper rgtHelper =
         factory.getHelper(ctx, IJMSModuleHelper.ScopeType.RGT, "MyResourceGroupTemplate");

Once a JMS module helper instance is created, you can use it to create JMS resources that are scoped to the corresponding scope. For example, the following example code creates a JMS system resource with a JMSQueue on JMS server MyJMSServer in the resource group MyResourceGroup. (It assumes that the JMS server and resource group have already been created.)

   String jmsServer = "MyJMSServer";
   String jmsSystemModule = "MyJMSSystemModule";
   String queue = "MyQueue";
   String queueJNDI = "jms/myQueue";

   rgHelper.createJMSSystemResource(jmsSystemModule, null);
   rgHelper.createQueue(jmsSystemModule, jmsServer, queue, queueJNDI, null);

File Locations

Persistent stores create a number of files in the file system for different purposes. Among them are file store data files, file store cache files (for file stores with a DirectWriteWithCache Synchronous Write Policy), and JMS server and SAF agent paging files.

Pre-12.2.1 file location behavior remains the same for the domain scoped persistent stores. This ensures that persisted data is recovered after an upgrade and that it is stored in the expected location. For partition scoped configuration, these files are placed in isolated directories within the partition file system in order to prevent file collisions among same-named stores in different partitions.

Here is a summary of the location of various files used by the file store system in WebLogic Server MT, where <partitionStem> = partitions/<partitionName>/system:

custom file

  • Store path not configured: <domainRoot>/<partitionStem>/store/<storeName>

  • Relative store path: <domainRoot>/<partitionStem>/store/<relativePath>/<storeName>

  • Absolute store path: <absolutePath>/<partitionStem>/store/<storeName>

  • File name: <storeName>NNNNNN.DAT

cache

  • Store path not configured: ${java.io.tmpdir}/WLStoreCache/${domainName}/<partitionStem>/tmp

  • Relative store path: <domainRoot>/<partitionStem>/<tmp>/<relativePath>

  • Absolute store path: <absolutePath>/<partitionStem>/tmp

  • File name: <storeName>NNNNNN.CACHE

ejb timers

  • Store path not configured: <domainRoot>/<partitionStem>/store/_WLS_EJBTIMER_<serverName>

  • Relative store path: <domainRoot>/<partitionStem>/store/<relativePath>/_WLS_EJBTIMER_<serverName>

  • Absolute store path: <absolutePath>/<partitionStem>/store/_WLS_EJBTIMER_<serverName>

  • File name: _WLS_EJBTIMER_<serverName>NNNNNN.dat

paging

  • Store path not configured: <domainRoot>/<partitionStem>/paging

  • Relative store path: <domainRoot>/<partitionStem>/paging/<relativePath>

  • Absolute store path: <absolutePath>/<partitionStem>/paging

  • File names: <jmsServerName>NNNNNN.TMP, <safAgentName>NNNNNN.TMP


Here is a summary of how each of the above store types configures its directory location:

  • custom file: the directory configured on the file store.

  • cache: the cache directory configured on a file store that has a DirectWriteWithCache Synchronous Write Policy.

  • default ejb timer store: the directory configured on the WLS default store. (Partition EJB timer default stores copy their configuration from the default store.)

  • paging: the paging directory configured on a SAF agent or JMS server.

Best Practices

This section provides advice and best practices for beginning JMS users as well as advanced JMS users in an MT environment.

  • For MT-related known issues, Oracle recommends that all users review "Configuration Issues and Workarounds" in Release Notes for Oracle WebLogic Server.

  • If, for any reason, newly created or updated JMS resources are not accessible in a running partition, review the WebLogic Server log files for warning and error messages. If the log messages do not provide helpful information, a partition restart often resolves the issue. Note that a newly created partition has to be explicitly started before any of its resources are externally accessible.

  • The following rules always apply in a resource group and resource group template scope:

    • Use a Distribution Policy=Singleton store for path services, and for JMS servers that host standalone destinations.

    • Use a Distribution Policy=Distributed store for SAF agents, and for JMS servers that host distributed destinations.

    • Configure cluster leasing in clusters which have:

      • Distribution Policy=Singleton stores or bridges.

      • Migration Policy=On-Failure or Always stores or bridges.

  • For more general best practices related to using JMS, see "Best Practices for JMS Beginners and Advanced Users" in Administering JMS Resources for Oracle WebLogic Server.

Limitations

The following features in JMS or a related component are not currently supported in WebLogic Server MT:

  • Client SAF forwarding into a partition.

    • The behavior is undefined.

    • Note that there is support for server-side SAF agents to forward into a partition.

  • C client accessing resource group or resource group template scoped JMS resources; the behavior is undefined.

  • .NET client—an exception is thrown if a .NET client accesses JMS resources in a partition.

  • Replicated Distributed Topics (RDT)

    • The deployment of a JMS module to a resource group or resource group template that contains Replicated Distributed Topics (RDTs) will fail with an exception.

    • RDTs are the default type of uniform distributed topic and are configured with a Forwarding Policy of Replicated.

    • Workarounds include:

      • Configure a standalone (singleton) topic.

      • Configure a Partitioned Distributed Topic (PDT).

        • A PDT is configured by setting its Forwarding Policy to Partitioned.

        • For the advantages and limitations of a PDT, see "Configuring Partitioned Distributed Topics" in Administering JMS Resources for Oracle WebLogic Server.

        • Be aware that the word "Partitioned" in a PDT does not have the same meaning as the word "partition" in a WebLogic Server MT partition; PDTs and WebLogic Server MT partitions are two independent concepts.

  • Default store

    • Using WebLogic Server's default store in partitions is not allowed.

    • All JMS servers, SAF agents and path services in a resource group or resource group template are required to reference a custom store.

  • Weighted Distributed Destinations (WDD)

    • The deployment of a JMS module to a resource group or resource group template that contains WDDs will fail with an exception.

    • Note that WDDs are deprecated.

  • Connection Consumer and Server Session Pool

    • An attempt to create a partition scoped Connection Consumer or Server Session Pool will fail.

    • Note that a best practice is to use a Message Driven Bean (MDB), as MDBs serve a similar purpose.

  • Logging Last Resource Data Sources (LLR)

    • The transaction system does not support the LLR feature in the partition scope.

    • For more information, including a potential workaround, see Configuring Transactions.

  • Client interoperability using a dedicated partition channel using SSL

    • Old clients can only interoperate with a partition by configuring a dedicated channel for the partition.

    • This method does not currently support SSL.