15 Configuring Messaging

This chapter describes how messaging is supported in Oracle WebLogic Server Multitenant (MT) and includes:
  • Persistent stores (file and JDBC stores)

  • JMS servers

  • Store-and-Forward (SAF) agents

  • Path services

  • Messaging bridges

  • JMS system modules and JMS application modules

  • JMS connection pools

This chapter also describes approaches for accessing partitioned JMS resources from other partitions in the same WebLogic Server instance or cluster, and from remote client or server JVMs.

Configuring Messaging: Prerequisites

Prior to configuring JMS in a multitenant environment, it is assumed that you are familiar with, and have already created, the domain partitions, resource groups, resource group templates, and virtual targets that will host the messaging configuration.

This chapter also assumes familiarity with existing (non-MT) WebLogic Server messaging configuration. For more information, see Administering JMS Resources for Oracle WebLogic Server.

About Messaging Configuration Scopes

When working with WebLogic Server in nonpartitioned environments, you can configure and deploy JMS artifacts at the domain level. Examples of JMS artifacts include persistent stores (file or JDBC stores), JMS servers, Store-and-Forward agents, path services, and messaging bridges, which are configured directly in a WebLogic Server domain config.xml file using JMX MBeans such as PersistentStoreMBean, JMSServerMBean, SAFAgentMBean, PathServiceMBean, and MessagingBridgeMBean.

In addition, JMS resources, such as connection factories and destinations, are configured in an external descriptor file called a JMS module. JMS modules are most commonly configured as a JMS system resource (using a JMSSystemResourceMBean). Less commonly, JMS modules can be embedded as a standalone or application-scoped XML file that is part of a deployed application (called standalone and application-scoped modules, respectively), or defined indirectly by Java EE 7 connection factory and destination annotations (which have the same basic semantics as external resources defined in an application-scoped module).

When working in WebLogic Server MT, all of the prior JMS artifacts can be defined and deployed in the following scopes:

  • Domain-scoped: Using the exact same configuration as in a nonpartitioned WebLogic Server environment

  • Resource group-scoped: As part of a resource group that is created at the partition level or at the domain level

  • Resource group template-scoped: As part of a resource group template that is created at the domain level

A resource group can optionally inherit a resource group template-scoped JMS configuration. No more than one resource group per partition can reference a particular resource group template, and similarly, no more than one domain level resource group can reference a resource group template.

To summarize, the domain configuration structure for JMS messaging artifacts is as follows:

  • Domain-level JMS configuration

  • Domain-level resource group with JMS configuration

  • Domain-level resource group template with JMS configuration

  • Domain-level resource group based on a resource group template

  • Partition:

    • Partition-level resource group with JMS configuration

    • Partition-level resource group based on a resource group template

About Configuration Validation and Targeting Rules

Validation and targeting rules ensure that WebLogic Server MT JMS configuration is isolated, self-contained, and easy to manage. These rules help achieve the following goals:

  • A resource group can shut down or migrate independently without causing failures in other resource groups or domain-level resources.

  • A resource group template is a fully encapsulated, independent configuration unit without direct dependencies on resource groups, domain configuration, or other resource group templates.

  • The same configuration is valid regardless of whether a resource group is single-server targeted, cluster targeted, or not targeted.

  • There is no change in behavior for any domain-level configuration that was valid in previous releases. For example, domain-level behavior outside of resource groups and resource group templates remains unchanged for backward compatibility.

One basic, high-level rule that helps accomplish these goals is that a JMS configuration MBean may reference only another configuration MBean that is in the same scope. For example, a resource group template-defined JMS server can reference only a store that is also defined in the same resource group template. These rules are enforced by configuration validation checks and by errors and warnings that are logged at runtime.

Configuring Messaging Components

The following sections describe considerations when configuring JMS artifacts in a multitenant environment.

Configuring JDBC or File Persistent Stores

Creating a persistent store is a required step before configuring a JMS server, SAF agent, or path service. This is because resource group and resource group template-scoped JMS servers, SAF agents, and path services must reference an existing persistent store.

Creating a custom file or JDBC persistent store inside a resource group that is either scoped to a domain or to a partition is similar to creating a persistent store at the domain level. However, an additional step is that you must specify the scope. In the Oracle WebLogic Server Administration Console and Oracle Enterprise Manager Fusion Middleware Control, there is a Scope menu in the first step of the creation process that lists the available scopes in which to create a persistent store. Using WebLogic Scripting Tool (WLST), you must create the persistent store using the createPersistentStore command on the parent MBean (the MBean for the domain, resource group, or resource group template).

The following Distribution Policy and Migration Policy rules apply to all resource group and resource group template-scoped persistent stores:

  • A resource group or resource group template-scoped store that will be used to host JMS server distributed destinations or SAF agent imported destinations must specify a Distributed Distribution Policy (the default). This setting instantiates a store instance per WebLogic Server instance in a cluster. Furthermore, a resource group or resource group template-scoped store with a Distributed Distribution Policy may optionally be configured with an On-failure or Always Migration Policy.

  • A resource group or resource group template-scoped store that will be used by a path service or that will be used to host JMS server standalone (nondistributed) destinations must specify a Singleton Distribution Policy. This setting instantiates a single store instance in a cluster. Furthermore, a resource group or resource group template-scoped store with a Singleton Distribution Policy must have either On-failure or Always as its Migration Policy instead of Off. Off is the default.

  • A cluster-targeted store with an On-failure or Always Migration Policy requires that the cluster be configured with either database leasing or cluster leasing, where database leasing is recommended as a best practice.

These policies control the distribution and high availability behavior of stores and any JMS artifacts that target a cluster. For more information, see "Simplified JMS Cluster and High Availability Configuration" in Administering JMS Resources for Oracle WebLogic Server.

The following are the enforced configuration validation and targeting rules for both file and JDBC stores:

  • A resource group or resource group template-level JMS server, SAF agent, or path service must reference a configured store; they cannot reference null.

  • A resource group template-scoped JMS server, SAF agent, or path service may reference only a store that is defined in the same resource group template. It cannot reference a store defined at the child resource group level.

  • A resource group-scoped JMS server, SAF agent, or path service may reference only a store that is defined in the same resource group, or in the resource group template optionally referenced by the resource group.

  • A domain-level JMS server, SAF agent, or path service may reference only a store in the domain scope.

The following are additional rules that are specific to JDBC stores.

  • A resource group template-scoped JDBC store may reference only a data source that is in the same resource group template.

  • A resource group-scoped JDBC store may reference only a data source that is in the same resource group, or in the resource group template optionally referenced by the resource group.

  • A domain-scoped JDBC store may reference only a data source in the domain scope.

Configuring JMS Servers

Creating a JMS server that is scoped to a domain-level resource group or in a partition is similar to creating a JMS server at the domain level. One additional step is to specify the scope. In the WebLogic Server Administration Console and Fusion Middleware Control, there is a Scope menu in the first step of the creation process that lists the available scopes in which to create a JMS server. Using WLST, you must create the JMS server using the createJMSServer command on the parent MBean (the MBean for the domain, resource group, or resource group template).

Another required step is to configure the JMS server so that it references a persistent store that is configured in the same scope as the JMS server.

Finally, if the JMS server is going to be used to host distributed destinations, its store must be configured with a Distributed Distribution Policy. If the JMS server is going to host standalone (nondistributed) destinations, the store must be configured with a Singleton Distribution Policy.

Configuring Store-and-Forward Agents

Creating a Store-and-Forward (SAF) agent that is scoped to a domain-level resource group or in a partition is similar to creating a SAF agent at the domain level. One additional step is to specify the scope. In the WebLogic Server Administration Console and Fusion Middleware Control, there is a Scope menu in the first step of the creation process that lists the available scopes in which to create an SAF agent. Using WLST, you must create the SAF agent using the createSAFAgent command on the parent MBean (the MBean for the domain, resource group, or resource group template).

Another required step is to configure the SAF agent so that it references a persistent store that is configured in the same scope as the SAF agent. This store must be configured with a Distributed Distribution Policy (the default).

Note:

A resource group or resource group template-level SAF agent with a service type Receiving Only is not allowed. An exception will be thrown or an error message will be logged on an attempt to set up such a configuration. This mode is specific to outdated JAX-RPC web services reliable messaging. Use JAX-WS RM instead.

Configuring Path Services to Support Using Unit-of-Order with Distributed Destinations

A path service must be configured in a resource group or resource group template if the resource group or resource group template also configures any distributed destinations that will be used to host Unit-of-Order (UOO) messages. In addition, such distributed destinations need to be configured with a Unit-of-Order routing policy set to PathService instead of Hash because hash-based UOO routing is not supported in a resource group or resource group template scope. Resource group or resource group template-scoped distributed destinations will only use a path service that is configured in the same resource group or resource group template for routing UOO messages. Attempts to send messages to a resource group or resource group template-scoped distributed destination that does not configure a PathService Unit-of-Order routing policy will fail with an exception.

Creating a path service that is scoped to a domain-level resource group or in a partition is similar to creating a path service at the domain level. One additional step is to specify the scope. In the WebLogic Server Administration Console and Fusion Middleware Control, there is a Scope menu in the first step of the creation process that lists the available scopes in which to create a path service. Using WLST, you must create the path service using the createPathService command on the parent MBean (the MBean for the domain, resource group, or resource group template).

Another required step is to configure the path service so that it references a persistent store that is configured in the same scope as the path service. This store must be configured with a Singleton Distribution Policy and an Always or On-Failure Migration Policy.

Configuring Messaging Bridges

Creating a messaging bridge that is scoped to a domain-level resource group or in a partition is similar to creating a messaging bridge at the domain level. One additional step is to specify the scope. In the WebLogic Server Administration Console and Fusion Middleware Control, there is a Scope menu in the first step of the creation process that lists the available scopes in which to create a messaging bridge. Using WLST, you must create the messaging bridge using the createMessagingBridge command on the parent MBean (the MBean for the domain, resource group, or resource group template).

The following Distribution Policy and Migration Policy rules apply to all resource group or resource group template-scoped messaging bridges:

  • Specify a Distributed Distribution Policy (the default) on a bridge to cause a cluster-targeted bridge to deploy an instance per server in a cluster. A messaging bridge with a Distributed Distribution Policy may optionally also configure an On-failure Migration Policy to add support for high availability.

  • Specify a Singleton Distribution Policy on a bridge to cause a cluster-targeted bridge to limit itself to deploying one instance per cluster. A messaging bridge with a Singleton Distribution Policy must have an On-failure Migration Policy instead of Off. Off is the default.

  • A cluster-targeted bridge with an On-failure Migration Policy requires that the cluster be configured with either database leasing or cluster leasing, where database leasing is recommended as a best practice.

These policies control the high availability behavior and distribution behavior of messaging bridges that target a cluster. For more information about distribution and migration policies, see "Simplified JMS Cluster and High Availability Configuration" in Administering JMS Resources for Oracle WebLogic Server.

The following are the configuration validation rules that are specific to a messaging bridge:

  • A resource group template-scoped messaging bridge can reference only messaging bridge destinations in the same scope.

  • A resource group-scoped messaging bridge can reference only messaging bridge destinations in the same resource group, or in the resource group template optionally referenced by the resource group.

  • A domain-scoped messaging bridge may reference only messaging bridge destinations in the domain scope.

Configuring JMS System Resources and Application-Scoped JMS Modules

Creating a JMS system resource that is scoped to a domain-level resource group or in a partition is similar to creating a JMS system resource at the domain level. One additional step is to specify the scope. In the WebLogic Server Administration Console and Fusion Middleware Control, there is a Scope menu in the first step of the creation process that lists the available scopes in which to create a JMS system resource. Using WLST, you must create the JMS system resource using the createJMSSystemResource command on the parent MBean (the MBean for the domain, resource group, or resource group template).

Creating an application-scoped JMS module that is scoped to a domain-level resource group or in a partition is similar to creating one at the domain level. An application deployment may contain a JMS module file, or an application EAR file that in turn contains JMS module files. One additional step is to specify the resource group or resource group template scope. For more information, see Deploying Applications.

Note:

If you create a JMS server and deploy an application that specifies a submodule target to this JMS server all within the same configuration edit session, then the deployment may not succeed. Oracle recommends that you configure the JMS server in a separate edit session.

Note:

Oracle strongly recommends configuring JMS using system resource modules instead of embedding the configuration in application resource modules. Unlike application-scoped configuration, system resource configuration can be dynamically tuned and easily monitored by an administrator or developer using the WebLogic Server Administration Console, WLST, or MBeans.

The following are the configuration validation and targeting rules associated with resources in a resource group or resource group template-scoped JMS module.

Subdeployment Definitions

  • A resource group or resource group template-scoped subdeployment can target only a single JMS server, a single SAF agent, or nothing (null).

  • A resource group template-scoped subdeployment can reference only a JMS server or SAF agent that is defined in the same resource group template.

  • A resource group-scoped subdeployment can reference only a JMS server or SAF agent that is defined in the same resource group or in the resource group template optionally referenced by the resource group.

JMS Module Resources

The following summary shows the targeting rules for each JMS module resource type, both when using a subdeployment and when using default targeting.

  • Standalone (Singleton) Destination

    • Using a subdeployment: May target only a subdeployment that targets a JMS server that in turn references a store with a Singleton Distribution Policy.

    • Using default targeting: Deploys only if there is a single configured JMS server in the same resource group or resource group template scope that references a Singleton Distribution Policy store, in which case the destination is deployed on that JMS server. JMS servers that reference Distributed Distribution Policy stores are ignored, as are JMS servers defined outside the scope, for example, at the domain level or in another resource group or resource group template.

  • Uniform Distributed Destination*

    • Using a subdeployment: May target only a subdeployment that targets a JMS server that in turn references a store with a Distributed Distribution Policy.

    • Using default targeting: Deploys only if there is a single configured JMS server in the same resource group or resource group template scope that references a Distributed Distribution Policy store, in which case the destination is deployed on that JMS server. JMS servers that reference Singleton Distribution Policy stores are ignored, as are JMS servers defined outside the scope, for example, at the domain level or in another resource group or resource group template.

  • SAF Imported Destination

    • Using a subdeployment: May target only a subdeployment that targets a SAF agent.

    • Using default targeting: Deploys only when there is a single configured SAF agent in the same resource group or resource group template scope. SAF agents defined outside the scope, for example, at the domain level or in another resource group or resource group template, are ignored.

  • Connection Factory

    • Using a subdeployment: May target any subdeployment.

    • Using default targeting: Deploys to all WebLogic Server instances that are included in the resource group's target.

  • Foreign Server

    • Using a subdeployment: May target only a subdeployment that targets a JMS server that in turn references a store with a Distributed Distribution Policy. Best practice is to use default targeting instead.

    • Using default targeting: Deploys to all WebLogic Server instances that are included in the resource group's target.


* Note: Resource group or resource group template-scoped uniform distributed topics must specify a Partitioned Forwarding Policy; that is, they must be Partitioned Distributed Topics (PDTs). Be aware that the word Partitioned in a PDT does not have the same meaning as the word partition in a WebLogic Server MT partition. PDTs and WebLogic Server MT partitions are two independent concepts. For information about the trade-offs for using PDTs, see Configuring Messaging: Limitations.

Configuring Partition-Specific JMS Overrides

Resource group template-scoped JMS configuration artifacts might not be complete because they lack or have incorrect values that are specific to partitions that use the resource group template. Each partition may need to have the appropriate override values specified to customize the template-derived values for correct deployment to the partition runtime. Partition-specific, resource group-scoped JMS configuration can be customized on a per-partition basis using resource deployment plans or application deployment plans. In addition, JMS foreign server configuration within a JMS system module can be customized using the JMSSystemResourceOverrideMBean.

Resource overriding allows system administrators to customize JMS resources and other resources such as data sources at the partition level. If you create a partition with a resource group that extends a resource group template, then you can override settings for certain resources defined in that resource group template. If you create a resource group within the partition that does not extend a resource group template and then create resources within this resource group, then you don't need overrides; you can just set partition-specific values for these resources.

Overrides are used mainly when there is a common definition for a resource, such as in a resource group template, that needs each partition that uses the resource to isolate its remotely stored state. For example, the same JMS server, JDBC store, data source, and JMS module configuration can be deployed to multiple partitions in the same cluster by configuring them in a single resource group template and configuring a resource group in each partition to reference the resource group template. The partition resource groups can then be overridden on a per-partition basis to ensure that their respective data sources connect to different databases or to different schemas within the same database.

System administrators can override resource definitions in partitions using the following specific techniques:

  • Resource override configuration MBeans: A configuration MBean that exposes a subset of attributes of an existing resource configuration MBean. Any attribute set on an instance of an overriding configuration MBean will replace the value of that attribute in the corresponding resource configuration MBean instance. JMS foreign server and related configuration artifacts in a JMS system module can use override MBeans to override the user, password and provider URL settings. If you use override MBeans, you must define a separate override MBean for each corresponding foreign JMS server and related deployment MBeans. Configuration changes to these attributes that are made at runtime after a JMS module has already been deployed require that the partition or JVM be restarted for the changes to take effect.

  • Resource deployment plans: An XML file that identifies arbitrary configured resources within a partition and overrides attribute settings on those resources. Persistent stores, JMS servers, SAF agents, messaging bridges, bridge destinations, and path services use the config-resource-override element in a resource deployment plan, while JMS resources in a JMS system module, such as queues, topics, and connection factories, use the external-resource-override element.

  • Partition-specific application deployment plans: Similar to existing application deployment plans, a plan that allows administrators to specify a partition-specific application deployment plan for each application deployment in a partition. For information about partition-specific application deployment plans, see Using Partition-Specific Deployment Plans.

Administrators can combine any of these resource overriding techniques. The system applies them in the following, ascending order of priority:

  • The config.xml file and external descriptors, including partition-specific application deployment plans

  • Resource deployment plans

  • Overriding configuration MBeans

    If an attribute is referenced by both a resource deployment plan and an overriding configuration MBean, then the overriding configuration MBean takes precedence.

For more information about overrides, see Configuring Resource Overrides.

Accessing Partition-Scoped Messaging Resources Using JNDI

To access JMS resources in a partition, an application first needs to establish a JNDI initial context to that partition. After you create a context for a partition, the context object remains associated with the partition's namespace, so all subsequent JNDI operations occur within the scope of the partition. When the context is created with a java.naming.provider.url property set, JNDI determines the partition from the provider URL value. Four different URL types associate the context with a particular partition.

In addition, an existing context can be used to reference a resource in another partition by prefixing special scoping strings to JNDI names.

Each of these methods is described in the following sections.

Specifying No URL

An application that is running in a partition on a WebLogic Server instance can access JMS resources in its own local partition simply by creating a local initial context without specifying any provider URL. This approach is the best practice for creating locally scoped contexts.
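
For example, the following sketch shows a partition-hosted application looking up messaging resources from its own partition without a provider URL. The JNDI names jms/myCF and jms/myQueue are hypothetical and assume that a resource group-scoped JMS module in the local partition defines them.

   import javax.jms.Connection;
   import javax.jms.ConnectionFactory;
   import javax.jms.Queue;
   import javax.naming.Context;
   import javax.naming.InitialContext;

   public class LocalPartitionLookup {
     static void useLocalPartitionJms() throws Exception {
       // No provider URL: the context is scoped to the partition (or the domain)
       // in which this code is running.
       Context ctx = new InitialContext();

       // Hypothetical JNDI names defined by a resource group-scoped JMS module.
       ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/myCF");
       Queue queue = (Queue) ctx.lookup("jms/myQueue");

       Connection connection = cf.createConnection();
       try {
         // ... create sessions, producers, and consumers as usual ...
       } finally {
         connection.close();
         ctx.close();
       }
     }
   }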

Specifying a Partition Virtual Host or Partition URI

If a context URL matches a virtual host URL or URI that is configured for a partition, then JNDI creates the context for that partition and all requests from the context are delegated to the partition's JNDI name space.

A JMS application can therefore access a WebLogic Server JMS resource that is running in a different JVM or WebLogic Server cluster using the t3 or HTTP protocol by supplying a URL of the form:

  • t3://virtualhost:port

  • t3://host:port/URI
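
For example, a remote Java client might establish a partition-scoped context as in the following sketch. The host name, port, and URI are hypothetical and must match a virtual target that is configured for the partition.

   import java.util.Hashtable;
   import javax.naming.Context;
   import javax.naming.InitialContext;

   public class PartitionUrlContext {
     // Returns a JNDI context scoped to the partition that owns the virtual target.
     static Context createPartitionContext() throws Exception {
       Hashtable<String, String> env = new Hashtable<>();
       env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
       // Hypothetical partition URI form; a virtual host form such as
       // t3://partition1.example.com:7001 works the same way.
       env.put(Context.PROVIDER_URL, "t3://host1.example.com:7001/partition1");
       return new InitialContext(env);
     }
   }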

Note:

A misspelled or nonexistent URI may cause a context to scope to the domain level without warning.

Specifying a Dedicated Port URL

It is possible to dedicate a specific port or address to a channel in a partition, in which case the URL format becomes t3://host:port.

This is the only supported method for clients prior to release 12.2.1 to interoperate with partition-scoped resources.

For more information, see Configuring Virtual Targets.

Note:

This method does not currently support SSL when used for interoperability with previous releases.

Local Cross-Partition Use Cases Using local: URLs or Decorated JNDI Names

An application in one partition can access another partition on the same WebLogic Server instance or in the same cluster using one of the URLs described in the previous section.

However, to access partitions that reside on the same server more efficiently, without needing to specify a host, port, or URI, an application has the following options (see the sketch after this list):

  • Create a context with a local: protocol URL:

    • local:// Creates the context on the current scope, which can be either a partition or the domain.

    • local://?partitionName=DOMAIN Creates the context on the domain.

    • local://?partitionName=partition_name Creates the context on the partition partition_name.

  • Create a context without specifying a URL, and then prefix an explicit scope when specifying a JNDI name:

    • domain:<JNDIName> Looks up the JNDI entry at the domain level.

    • partition:<partition_name>/<JNDIName> Looks up the JNDI entry in the specified partition.
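
The following sketch illustrates both options. The partition name MyPartition and the JNDI name jms/myCF are hypothetical.

   import java.util.Hashtable;
   import javax.jms.ConnectionFactory;
   import javax.naming.Context;
   import javax.naming.InitialContext;

   public class CrossPartitionLookup {
     static void lookupExamples() throws Exception {
       // Option 1: a local: protocol URL scopes the whole context to a partition.
       Hashtable<String, String> env = new Hashtable<>();
       env.put(Context.PROVIDER_URL, "local://?partitionName=MyPartition");
       Context partitionCtx = new InitialContext(env);
       ConnectionFactory cf1 = (ConnectionFactory) partitionCtx.lookup("jms/myCF");

       // Option 2: a plain context plus an explicit scope prefix on each JNDI name.
       Context ctx = new InitialContext();
       ConnectionFactory cf2 = (ConnectionFactory) ctx.lookup("partition:MyPartition/jms/myCF");
       ConnectionFactory cf3 = (ConnectionFactory) ctx.lookup("domain:jms/myCF");
     }
   }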

About Partition Associations in JMS

The following sections describe various JMS partition associations.

Partition Association Between Connection Factories and Their Connections or JMS Contexts

JMS client connections and JMS contexts are permanently associated with the partition from which their connection factory was obtained, and will not change their partition based on the partition associated with the current thread.

Partition Association with Asynchronous Callbacks

When JMS pushes messages or exceptions to an asynchronous listener, or similarly pushes events to a destination availability listener or an asynchronous send completion listener, the listener's local partition ID (instead of the destination's partition) will be associated with the callback thread. The local partition ID is the partition associated with the thread that created the asynchronous listener.

Connection Factories and Destinations Need Matching Scopes

A connection factory can interact only with a destination defined in the same partition as the connection factory. For example, a QueueBrowser, MessageConsumer/JMSConsumer, TopicSubscriber, or MessageProducer/JMSProducer client object can communicate with a destination only if the connection factory that was used to create these client objects was defined in the same partition as the destination. Furthermore, a connection factory can interact with a destination only if the destination is obtained from the same cluster or server JVM as the connection factory.

Temporary Destination Scoping

Prior to the 12.2.1 release, JMS servers could be deployed only at the domain level and a temporary destination could be hosted only by JMS servers that both:

  • Set Hosting Temporary Destinations to true (the default).

  • Are hosted on the same WebLogic Server instance or in the same cluster as the connection factory used to create the temporary destination.

The behavior for creating a temporary destination in WebLogic Server MT is:

  • As in non-MT WebLogic Server, a temporary destination can be hosted by any JMS server that has Hosting Temporary Destinations enabled and that is hosted on the same WebLogic Server instance or in the same cluster as the connection factory used to create the temporary destination.

  • If a JMS connection was created using a connection factory that is configured in a resource group or resource group template scope (including domain resource groups), then its temporary destinations will be hosted only by a JMS server that is configured in the same scope.

  • If a JMS connection was created using a nonresource group or resource group template-scoped partition-level connection factory, then it is allowed to create temporary destinations on any JMS server from the same partition as the connection factory. The nonresource group or resource group template-scoped partition-level connection factories are simply the default connection factories, for example the connection factories with JNDI names weblogic.jms.ConnectionFactory or weblogic.jms.XAConnectionFactory.

  • If a JMS connection was created using a nonresource group or resource group template-scoped domain-level connection factory, then it is allowed to create temporary destinations on any JMS server at the domain level including JMS servers that are scoped to domain-level resource groups.

If no qualified JMS server is found within the allowed scope, then an attempt to create a temporary destination fails with an exception.
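
As a minimal illustration, the following sketch creates a temporary queue from a connection that was created with the default connection factory. The code itself is standard JMS; under the rules above, the server places the temporary queue on a qualifying JMS server in the caller's partition.

   import javax.jms.Connection;
   import javax.jms.ConnectionFactory;
   import javax.jms.Session;
   import javax.jms.TemporaryQueue;
   import javax.naming.InitialContext;

   public class TempDestinationSketch {
     static TemporaryQueue createTempQueue() throws Exception {
       InitialContext ctx = new InitialContext();
       // Default (non-resource group scoped) connection factory.
       ConnectionFactory cf = (ConnectionFactory) ctx.lookup("weblogic.jms.ConnectionFactory");
       Connection con = cf.createConnection();
       Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
       // Hosted by a JMS server, in the allowed scope, that permits temporary destinations.
       return session.createTemporaryQueue();
     }
   }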

Managing Partition-Scoped Messaging Components

The following sections describe managing certain aspects of partition-scoped messaging.

Runtime Monitoring and Control

All existing messaging runtime MBeans are supported for monitoring and controlling partition-scoped JMS configuration and deployments, and they are accessible to JMX-based management clients. Partition-scoped JMS runtime MBeans are located under their corresponding PartitionRuntimeMBean instances.

For example:

  • The JMSServer, Connection, and PooledConnection runtime MBeans are placed in the runtime MBean hierarchy under serverRuntime/PartitionRuntimes/partition/JMSRuntime.

  • The SAF runtime MBeans are placed in the runtime MBean hierarchy under serverRuntime/PartitionRuntimes/partition/SAFRuntimeMBean.

  • The messaging bridge and path service runtime MBeans are placed in the runtime MBean hierarchy directly under serverRuntime/PartitionRuntimes/partition.

For more information, see Monitoring and Debugging Partitions.

Managing Partition-Scoped Security

Defining security roles and policies for partition messaging configuration is the responsibility of the WebLogic Server system administrator.

WebLogic Server MT expands upon the traditional WebLogic Server security support in two significant ways:

  • Multiple realms: WebLogic Server MT supports multiple active security realms and allows each partition to execute against a different realm.

  • Identity domains: An identity domain is a logical namespace for users and groups, typically representing a discrete set of users and groups in the physical data store. Identity domains are used to identify the users associated with particular partitions.

Otherwise, configuring security for partition-scoped messaging is similar to setting up security for domain-level messaging. For more information, see Configuring Security.

Managing Transactions

All JTA transactions in a JVM are serviced by a single JTA transaction manager regardless of scope. Partition-scoped XA resource manager names are automatically qualified with their partition name so that the resource managers are uniquely identified to the transaction manager and are managed independently. One example of a resource manager is a persistent store.

For more information about transaction configuration and restrictions, see Configuring Transactions.

Managing Partition and Resource Group Lifecycle Operations

A JMS artifact that is associated with a partition or resource group can be started and shut down by starting and shutting down its partition or resource group. Permissions to perform these operations are automatically supplied to the WebLogic Server system administrator and operator.

Partition-Scoped JMS Diagnostic Image Sources

The messaging component does not support the ability to scope a diagnostic image to a partition. For more information, see Configuring Partition-Scoped Diagnostic Image Capture.

Partition-Scoped JMS Logging

Partition-scoped JMS log messages are qualified with the partition ID and name when the domain log format is not configured. For more information about logging, see Monitoring and Debugging Partitions.

Message Lifecycle Logging

When optionally enabled, JMS server and SAF agent message lifecycle logging is placed in different locations when these services are scoped to a partition: the logging files are placed in the partition's directory. Furthermore, the log file names of the runtime JMS server and SAF agent instances of a cluster-targeted JMS server or SAF agent are guaranteed to be unique.

The expected new log locations are summarized below when configuring an absolute path, a relative path, or the default.


  • Domain Level

    • Nothing configured (default): <domain-log>/<log-suffix>/<instance>-jms.messages.log

    • Absolute path /<absolute-path>/<file> configured: /<absolute-path>/<instance>-<file>

    • Relative path /<relative-path>/<file> configured: <domain-log>/<relative-path>/<instance>-<file>

  • Partition

    • Nothing configured (default): <partition-log>/<log-suffix>/<instance>-jms.messages.log

    • Absolute path /<absolute-path>/<file> configured: Same as <relative-path>*

    • Relative path /<relative-path>/<file> configured: <partition-log>/<relative-path>/<instance>-<file>


* Note that partition-scoped configuration treats absolute paths as relative paths.

<domain-log> = <domain>/servers/<wl-server-name>

<partition-log> = <domain>/partitions/<partition-name>/system/servers/<wl-server-name>

<log-suffix> = logs/jmsservers/<configured-name> (for JMS servers)

<log-suffix> = logs/safagents/<configured-name> (for SAF agents)

<instance> =

  • <configured-name>, when JMS server or SAF agent is single-server targeted.

  • <configured-name>_<wl-server-name>, when cluster-targeted and the data store's Distribution Policy=Distributed.

    (Note that an instance keeps its old name even as it migrates from one WebLogic Server instance to another.)

  • <configured-name>_01, when cluster-targeted and the data store's Distribution Policy=Singleton.

Admin Helpers

There are two JMS-specific Java administration programming utilities that provide helper methods for configuring and monitoring JMS resources.

The JMSModuleHelper utility contains helper methods for locating JMS runtime MBeans (for monitoring) as well as methods to manage (locate/create/delete) JMS module configuration entities (descriptor beans) in a given module.

The JMSRuntimeHelper utility provides convenient methods for obtaining the corresponding JMX runtime MBean given a JMS object such as a connection, destination, session, message producer, or message consumer.

In release 12.2.1, enhanced versions of these helpers are provided to handle both domain-scoped and resource group or resource group template-scoped JMS resources.

The existing JMSRuntimeHelper utility is enhanced to be partition-aware. When calling a runtime helper method, the specified JNDI context and the specified JMS object must belong to the same partition; otherwise, an exception is thrown.

The enhanced JMSModuleHelper utility is scope-aware and contains the following interface and classes:

  • weblogic.jms.extensions.IJMSModuleHelper—an interface that defines all helper methods.

  • weblogic.jms.extensions.JMSModuleHelper—the version prior to release 12.2.1 of the JMS module helper, which handles only domain-level JMS resources.

  • weblogic.jms.extensions.JMSModuleHelperFactory—a factory that creates an instance of a JMS module helper that works in a specific scope given an initial context to the Administration Server, a scope type (domain, resource group or resource group template), and the name of the scope.

The following code demonstrates how to create a JMS module helper for each of the three different scopes:

   Context ctx = getContext(); // get an initial JNDI context
 
   JMSModuleHelperFactory factory = new JMSModuleHelperFactory();
 
      // create a JMS module helper for domain level
 
   IJMSModuleHelper domainHelper = factory.getHelper(ctx, IJMSModuleHelper.ScopeType.DOMAIN, null); 
 
      // create a JMS module helper for Resource Group "MyResourceGroup"
 
   IJMSModuleHelper rgHelper = factory.getHelper(ctx, IJMSModuleHelper.ScopeType.RG, "MyResourceGroup");
 
      // create a JMS module helper for Resource Group Template "MyResourceGroupTemplate"
 
   IJMSModuleHelper rgtHelper = factory.getHelper(ctx, IJMSModuleHelper.ScopeType.RGT, "MyResourceGroupTemplate");

After a JMS module helper instance is created, you can use it to create JMS resources that are scoped to the corresponding scope. For example, the following example code creates a JMS system resource with a JMS queue on JMS server, MyJMSServer, in the resource group, MyResourceGroup. (It assumes that the JMS server and resource group have already been created.)

   String jmsServer = "MyJMSServer";
 
   String jmsSystemModule = "MyJMSSystemModule";
 
   String queue = "MyQueue";
 
   String queueJNDI = "jms/myQueue";
 
   rgHelper.createJMSSystemResource(jmsSystemModule, null);
 
   rgHelper.createQueue(jmsSystemModule, jmsServer, queue, queueJNDI, null);

File Locations

Persistent stores create a number of files in the file system for different purposes. Among them are file store data files, file store cache files (for file stores with a DirectWriteWithCache Synchronous Write Policy), and JMS server and SAF agent paging files.

The file location behavior prior to release 12.2.1 remains the same for the domain-scoped persistent stores. This ensures that persistent data is recovered after an upgrade and that it is stored in the expected location. For partition-scoped configuration, these files are placed in isolated directories within the partition file system to prevent file collisions among same-named stores in different partitions.

The following summary shows the location of the various files used by the file store system in WebLogic Server MT, where partitionStem = partitions/<partitionName>/system.


  • custom file

    • Store path not configured: <domainRoot>/<partitionStem>/store/<storeName>

    • Relative store path: <domainRoot>/<partitionStem>/store/<relativePath>/<storeName>

    • Absolute store path: <absolutePath>/<partitionStem>/store/<storeName>

    • File name: <storeName>NNNNNN.DAT

  • cache

    • Store path not configured: ${java.io.tmpdir}/WLStoreCache/${domainName}/<partitionStem>/tmp

    • Relative store path: <domainRoot>/<partitionStem>/<tmp>/<relativePath>

    • Absolute store path: <absolutePath>/<partitionStem>/tmp

    • File name: <storeName>NNNNNN.CACHE

  • ejb timers

    • Store path not configured: <domainRoot>/<partitionStem>/store/_WLS_EJBTIMER_<serverName>

    • Relative store path: <domainRoot>/<partitionStem>/store/<relativePath>/_WLS_EJBTIMER_<serverName>

    • Absolute store path: <absolutePath>/<partitionStem>/store/_WLS_EJBTIMER_<serverName>

    • File name: _WLS_EJBTIMER_<serverName>NNNNNN.dat

  • paging

    • Store path not configured: <domainRoot>/<partitionStem>/paging

    • Relative store path: <domainRoot>/<partitionStem>/paging/<relativePath>

    • Absolute store path: <absolutePath>/<partitionStem>/paging

    • File names: <jmsServerName>NNNNNN.TMP and <safAgentName>NNNNNN.TMP


The following summary shows how each of the preceding store types configures its directory location.

  • custom file: The directory configured on a file store.

  • cache: The cache directory configured on a file store that has a DirectWriteWithCache Synchronous Write Policy.

  • default ejb timer store: The directory configured on the WebLogic Server default store's configuration. (Partition EJB timer default stores copy their configuration from the default store.)

  • paging: The paging directory configured on a SAF agent or JMS server.


Configuring Messaging: Best Practices

This section provides advice and best practices for beginning JMS users as well as advanced JMS users in an MT environment.

  • For MT-related known issues, Oracle recommends that all users review "Configuration Issues and Workarounds" in Release Notes for Oracle WebLogic Server.

  • If, for any reason, newly created or updated JMS resources are not accessible in a running partition, then review the WebLogic Server log files for warning and error messages. If the server log messages do not provide helpful information, restarting the partition often resolves the issue. Note that a newly created partition must be explicitly started before any of its resources are externally accessible.

  • The following rules always apply in a resource group and resource group template scope:

    • Use a Distribution Policy=Singleton store for path services, and for JMS servers that host standalone destinations.

    • Use a Distribution Policy=Distributed store for SAF agents, and for JMS servers that host distributed destinations.

    • Configure cluster leasing in clusters that have:

      • Distribution Policy=Singleton stores or bridges.

      • Migration Policy=On-Failure or Always stores or bridges.

For more general best practices related to using JMS, see "Best Practices for JMS Beginners and Advanced Users" in Administering JMS Resources for Oracle WebLogic Server.

Configuring Messaging: Limitations

The following features in JMS or a related component are not currently supported in WebLogic Server MT:

  • Client SAF forwarding into a partition:

    • The behavior is undefined.

    • Note that there is support for server-side SAF agents to forward into a partition.

  • C client access to resource group or resource group template-scoped JMS resources. The behavior is undefined.

  • .NET client. An exception is thrown if a .NET client accesses JMS resources in a partition.

  • Replicated Distributed Topics (RDT):

    • The deployment of a JMS module to a resource group or resource group template that contains Replicated Distributed Topics (RDTs) fails with an exception.

    • The default type of uniform distributed topic is configured with a Forwarding Policy of Replicated.

    • Workarounds include:

      • Configure a standalone (singleton) topic.

      • Configure a Partitioned Distributed Topic (PDT).

        A PDT is configured by setting its Forwarding Policy to Partitioned.

        For the advantages and limitations of a PDT, see "Configuring Partitioned Distributed Topics" in Administering JMS Resources for Oracle WebLogic Server.

        Note that the word Partitioned in a PDT does not have the same meaning as the word partition in a WebLogic Server MT partition; PDTs and WebLogic Server MT partitions are two independent concepts.

  • Default store

    • Using the WebLogic Server's default store in partitions is not allowed.

    • All JMS servers, SAF agents, and path services in a resource group or resource group template are required to reference a custom store.

  • Weighted Distributed Destinations (WDD)

    • The deployment of a JMS module to a resource group or resource group template that contains WDDs fails with an exception.

    • Note that WDDs are deprecated.

  • Connection consumer and server session pool

    • An attempt to create a partition-scoped connection consumer or server session pool fails.

    • Note that a best practice is to use a Message Driven Bean (MDB), because MDBs serve a similar purpose to a connection consumer or server session pool.

  • Logging Last Resource (LLR) data sources

    • The transaction system does not support the LLR feature in the partition scope.

    • For more information, including a potential workaround, see Configuring Transactions.

  • Client interoperability using a dedicated partition channel using SSL

    • Old clients can interoperate with a partition only by configuring a dedicated channel for the partition.

    • This method does not currently support SSL.

Messaging Resource Group Migration

WebLogic Server provides the ability to migrate a resource group to a different server or cluster, as described in Migrating Resource Groups: Main Steps and WLST Example. All messaging configuration and deployments in a resource group can participate in a resource group's migration:

  • Messaging configuration includes file stores, JDBC stores, JMS servers, SAF agents, path services, JMS system resources, and messaging bridges. 

  • Messaging related deployments include application deployments that contain JMS modules, and EJB and MDB deployments that use JMS resources.

When performing messaging resource group migrations, be aware of the following special considerations:

  • WebLogic messaging resource group migration requires that the resource group be shut down at its source location prior to migration and restarted after the migration completes. If an administrator tries to initiate the migration of a running resource group that contains a messaging configuration, a validation exception will be thrown or displayed with a message directing the administrator to shut down the resource group before migration.

  • Non-persistent messaging application and runtime states will not survive a resource group migration.

    • Non-persistent messages will be lost.

    • Clients may get exceptions and need to reconnect during migration. If clients are designed to handle typical JMS failures, they may be able to automatically fail over to the new location. See Client Failover During Resource Group Migration.

  • Persistent messaging state data migration may require additional steps.

    • Requires shared storage (database or file) accessible by the source and target locations.

    • For most non-clustered use cases, no additional steps are required.

    • For most cluster use cases, messaging-specific, pre-migration steps are usually required to ensure correct behavior.

    For more information, see Resource Group Migration with Persistent Data.

  • Migration of Message Driven Beans that work with topics may require additional steps. For more information, see Migrating Message Driven Beans (MDBs).

  • Migration of applications that integrate third party JMS providers may require additional steps. For more information, see Global Transaction Considerations With Third Party JMS.

The remainder of this document references three types of messaging services:

  • Non-clustered services: services targeted to a non-clustered (standalone) Managed Server

  • Cluster singleton services: services targeted to a cluster that have a Singleton Distribution Policy.

  • Cluster distributed services: services targeted to a cluster that have a Distributed Distribution Policy.

A JMS service's distribution policy is configured via a StoreMBean or a MessagingBridgeMBean, and a messaging service, such as a JMS server, an SAF agent, or a path service, that references a store will inherit its policy from the store MBean configuration. For more information, see "Simplified JMS Cluster and High Availability Configuration" in Administering JMS Resources for Oracle WebLogic Server.

The following summarizes the supported messaging resource group migration scenarios for each messaging service type.

  • JMS Servers and Destinations

    • Non-clustered: Configuration and persistent data

    • Cluster singleton: Configuration and persistent data

    • Cluster distributed: Configuration only

  • SAF Agents and Imported Destinations

    • Non-clustered: Configuration only

    • Cluster singleton: N/A

    • Cluster distributed: Configuration only

  • Path Services

    • Non-clustered: Configuration and persistent data

    • Cluster singleton: Configuration only

    • Cluster distributed: Configuration only

  • Messaging Bridges

    • Non-clustered: Yes

    • Cluster singleton: Yes

    • Cluster distributed: Yes, with limitations in durable topic cases. For more information, see Migrating Persistent Data for a Cluster Distributed Service.


Resource Group Migration with Persistent Data

The following sections describe procedures for non-clustered, cluster singleton, and distributed service handling of persistent data.

Migrating Persistent Data for a Non-Clustered Service

Non-clustered messaging persistent state (messages, durable subscriptions, SAF data, and UOO data) can be safely migrated and processed on the target location with very few exceptions, as described below:

  • Additional steps are required to migrate a resource group that contains messaging services from a non-clustered Managed Server to a cluster or vice versa. Prior to such a migration, process all messages, complete all pending transactions, shut down the resource group, and then delete all associated store files and database tables.

  • Additional steps are required when migrating a store-and-forward agent. For more information, see Migrating Store-and-Forward Messages.

  • If a domain level resource group has a file store with an undefined store path, then the path will change after migration because the generated path embeds the current WebLogic Server instance name. For any such store, you must move the files to the new location after the resource group is shut down at the source location, and before it is restarted at the target location. Oracle recommends that you always specify a store path when configuring a file store.

Migrating Persistent Data for a Cluster Singleton Service

When migrating a cluster singleton service's persistent data, the considerations are the same as the non-clustered use cases, described in Migrating Persistent Data for a Non-Clustered Service, with one additional exception. If a path service is configured, the store file or table must be deleted after processing all distributed destination messages, completing all transactions, and shutting down the resource group at its original location but before starting the resource group in its new location.

Migrating Persistent Data for a Cluster Distributed Service

Resource group migration requires additional caution in use cases that involve persistent distributed services, for example, distributed destinations and imported destinations. Persistent data associated with such a service cannot be safely migrated when its hosting resource group migrates. If messages cannot be lost, they must all be processed prior to starting the migration so that no messages remain in any distributed or imported destinations.

In addition, it is important to delete and remove all persistent store files or tables that are associated with such a resource group before migration, even after all the persistent messages are consumed and acknowledged. For example, without a thorough cleanup, durable subscriptions might be abandoned but continue to accumulate messages on the target location, which may cause the server to run out of memory. As another example, UOO messages may be routed to a non-existent location.

The following are detailed considerations in the key cluster distributed messaging use cases:

  • Persistent messages in imported destinations and distributed destinations will not be available at the target location. If messages cannot be lost, they must all be processed prior to starting the migration so that no messages remain in any distributed or imported destinations.

  • Stores that are cluster-targeted with the Distribution Policy set to Distributed cannot be safely migrated unless you remove all the files or drop the JDBC tables.

  • Additional steps are required when migrating a store-and-forward agent. For more information, see Migrating Store-and-Forward Messages.

  • If you have a remote store-and-forward (SAF) agent that forwards to a distributed destination that you are migrating, additional steps are required on the SAF agent when you migrate the remote distributed destination. For more information, see Migrating Store-and-Forward Messages.

  • Distributed durable bridges that forward from any topic should be migrated with caution. The corresponding durable subscriptions generated at the original location may be abandoned because the subscription names will be different at the new location, and so the original subscriptions may still accumulate messages on the source topic. Administrators must ensure that the subscriptions that were generated by such a bridge running at the original location are deleted during the migration.

  • Distributed destinations that service persistent UOO messages usually use a path service. The path service's store tables or files must be deleted prior to migration to ensure that new UOO messages at the new location are correctly routed.

  • With distributed stores, pending global transactions that started before a resource group migration may not resolve after the migration, because the XA resource names of the store instances change after migration. Oracle recommends that administrators make sure that there are no ongoing transactions before performing a resource group migration in a cluster.

Migrating Store-and-Forward Messages

Store-and-forward messaging involves two components: a SAF agent that stores and forwards messages, and a final WebLogic JMS destination. These are often deployed to separate WebLogic Server instances, clusters, or domains, and therefore you can migrate the two components together or independently.

If you have a remote SAF agent that forwards with exactly-once QOS to a distributed destination, additional steps are required on the SAF agent when you migrate the remote distributed destination. This is because SAF messages may not be able to be forwarded after the migration, which may in turn block subsequent messages from being forwarded. To prevent this situation, Oracle recommends that you empty the SAF queue before migrating the remote distributed destination by doing the following:

  1. Pause incoming new messages on the SAF agent. For more information, see "Controlling Message Operations on Destinations" in Administering JMS Resources for Oracle WebLogic Server.

  2. Wait for all pending messages to be successfully forwarded.

If you are migrating an SAF agent itself that handles exactly-once forwarding, similar steps are recommended regardless of whether it is non-clustered or clustered. This is true even if the SAF agent handles only non-persistent messages. Oracle recommends that you empty the SAF agent's imported destinations (as described above), or delete all the store files or JDBC tables that are associated with the SAF agent before migrating it.

Migrating Message Driven Beans (MDBs)

When an MDB's source destination is a queue, the MDB can be safely migrated. Similarly, an MDB can be safely migrated when an MDB consumes from a topic and its subscription-durability is set to NonDurable.

Additional caution is required when you migrate an MDB that works with a topic and its subscription-durability is set to Durable, regardless of whether the MDB or topic is hosted on a cluster or a non-clustered server and regardless of whether the MDB's source destination is hosted in the same cluster or server location as the MDB itself.

In detail:

  • You may be able to safely migrate such a durable subscription topic MDB only if it is configured with TopicMessagesDistributionMode=Compatibility mode and generate-unique-client-id=false, or it is configured with TopicMessagesDistributionMode=one-copy-per-application mode. For more information, see "Deployment Elements and Annotations for MDBs" in Developing Message-Driven Beans for Oracle WebLogic Server.

  • For all other durable topic MDB use cases (TopicMessagesDistributionMode=one-copy-per-server, or TopicMessagesDistributionMode=Compatibility and generate-unique-client-id=true), the MDB's subscription name contains the local server name, and migrating such MDBs will cause their original durable subscriptions to be abandoned. Not only will the unprocessed messages that are associated with an abandoned durable subscription be lost, but the abandoned subscription will also continue to accumulate messages, which consumes system memory and eventually may take the system down. As a result, migrating such a durable MDB requires additional caution even if it connects to a non-clustered topic, a cluster singleton topic, a distributed topic, or a third party topic. Administrators must ensure that the subscriptions that were generated by such an MDB running at the original location are deleted during the migration.

When the destination that an MDB listens to is in a separate resource group or partition, you can perform a live migration of the resource group that contains the MDB, but the above considerations about abandoned durable subscriptions for topic MDBs still apply.

Note that when a durable MDB listens to a distributed topic, the associated durable subscriptions may be abandoned when the distributed topic migrates even when the MDB does not migrate. For details about deleting such distributed topic state prior to completing a migration, see Migrating Persistent Data for a Cluster Distributed Service.

Global Transaction Considerations With Third Party JMS

It is important to complete all pending, also known as "in doubt", transactions before migrating a Java EE application that integrates with a third party JMS provider. (In this case, a third party JMS provider is a provider that is neither WebLogic JMS nor AQ JMS.)

When a Java EE application deployed on a WebLogic Server instance or cluster integrates a third party JMS provider with global transactions, an internal XA resource name is formed under which a foreign XA resource is registered with a WebLogic transaction manager. Those names often contain the local WebLogic Server instance name, which may change after such an application migrates as part of a resource group migration. This may cause in-doubt transactions that cannot be resolved after a migration because the transaction managers will try to resolve them using the original name.

Client Failover During Resource Group Migration

In order to work seamlessly with remote JMS resources that can migrate with their containing resource group, JMS client code needs to follow the known best practice of closing all JMS and JNDI connections and reconnecting to JMS after a JMS failure. For more information, see "Best Practices for JMS Beginners and Advanced Users" in Administering JMS Resources for Oracle WebLogic Server. Note that certain containers and services can do this automatically on a client's behalf, namely MDBs, in-bound SOA JMS adapters, and messaging bridges. In addition, such a JMS client needs to ensure that the reconnection logic uses a URL that will resolve to the new location.

When a JMS resource is migrated in a resource group migration, a client may get an exception through an exception listener callback, or get an exception on a synchronous call. The client then needs to reestablish connectivity using a new URL that points to the new location of the resource, or using the original URL if an address remapping capability is used during the migration. Examples of address remapping include a direct change to a DNS mapping or the use of an Oracle Traffic Director TCP proxy. When reestablishing connectivity, the client needs to retrace the steps it used when establishing its initial connectivity: obtaining the initial context, destination lookup, connection factory lookup, connection creation, session creation, producer creation, and finally consumer creation.

Note that the JMS client failover behaviors described in the following sections apply not only to standalone clients, but also to remote or local Java EE applications, such as MDBs, SOA JMS adapters, and messaging bridges.

Manual Failover

When no address mapping is used, a client that connects to the original location before migration will need to perform the following steps after the resource group that it accesses migrates:

  • Close the existing initial context and JMS objects

  • Change the URL to point to the new location

  • Reestablish a new initial context using the new URL

  • Use the initial context to lookup JMS destinations and connection factories

  • Use the connection factories and destinations to connect to JMS

A client that connects for the first time after the migration simply has to use a URL that points to the new location.
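
A minimal reconnection sketch of these manual failover steps follows. The URL and JNDI names are hypothetical; production code would typically also retry with a backoff and recreate any registered listeners.

   import java.util.Hashtable;
   import javax.jms.Connection;
   import javax.jms.ConnectionFactory;
   import javax.jms.Queue;
   import javax.jms.Session;
   import javax.naming.Context;
   import javax.naming.InitialContext;

   public class ManualFailoverClient {
     private Context ctx;
     private Connection connection;

     // Re-establish JNDI and JMS objects against the resource group's new location.
     void reconnect(String newUrl) throws Exception {
       // 1. Close the existing initial context and JMS objects (ignore close failures).
       try { if (connection != null) connection.close(); } catch (Exception ignore) {}
       try { if (ctx != null) ctx.close(); } catch (Exception ignore) {}

       // 2-3. Reestablish a new initial context using the new URL.
       Hashtable<String, String> env = new Hashtable<>();
       env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
       env.put(Context.PROVIDER_URL, newUrl); // for example, t3://newhost:7001/partition1
       ctx = new InitialContext(env);

       // 4. Look up the destination and connection factory (hypothetical JNDI names).
       ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/myCF");
       Queue queue = (Queue) ctx.lookup("jms/myQueue");

       // 5. Reconnect to JMS: connection, session, and producers/consumers as needed.
       connection = cf.createConnection();
       Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
       session.createProducer(queue); // recreate producers and consumers as before
       connection.start();
     }
   }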

Using Oracle Traffic Director TCP Proxy

The Oracle Traffic Director TCP proxy mechanism provides the ability to map a WebLogic Server host and port to an Oracle Traffic Director proxy host and port.

In order to use this Oracle Traffic Director capability, you must configure the Oracle Traffic Director mappings for a partition that hosts any to-be-migrated resource groups, configure the partition's virtual target to use the Oracle Traffic Director proxy's host name, and configure the applications to use the Oracle Traffic Director proxy's URL to establish an initial context when they connect to any to-be-migrated resource group resources. For more information, see "Creating a TCP Proxy" in Oracle Traffic Director Administrator's Guide.

After you configure the Oracle Traffic Director proxy and the resource group virtual target to use Oracle Traffic Director, an application client is able to reconnect and fail over to the new location after a resource group migration with less manual intervention if the client does one of the following:

  • Reestablishes a new initial context

  • Creates a new JMS connection after the migration using a connection factory stub that it obtained before the migration

With the existing Oracle Traffic Director support, the only manual step that is required is to re-map the target URL to the proxy's URL and restart the proxy instances after migration. The restart is required to ensure that all the existing connections to the old server are dropped. Without the restart, new clients that connect after migration will be routed to the new location, but the clients that were connected before the migration may not fail over to the new location.