Java Message Service (JMS) is an application program interface (API) that supports the formal communication known as messaging between computers in a network.
Java Transaction API (JTA) specifies standard Java interfaces between a transaction manager and parties involved in a distributed transaction system: the resource manager, the application server, and the transactional applications.
In WebLogic JMS, a message is available only if its host JMS server for the destination is running. If a message is in a central persistent store, the only JMS server that can access the message is the server that originally stored the message. WebLogic has features to restart and/or migrate a JMS server automatically after failures. It also has features for clustering (distributing) a destination across multiple JMS servers within the same cluster.
You can automatically restart and/or migrate (fail over) JMS servers using either Whole Server Migration or Automatic Service Migration.
For more on working with JMS or JTA, see:
Configuring WebLogic JMS Clustering in Oracle Fusion Middleware Administering JMS Resources for Oracle WebLogic Server
Interoperating with Oracle AQ JMS in Oracle Fusion Middleware Administering JMS Resources for Oracle WebLogic Server
Configuring JTA in Developing JTA Applications for Oracle WebLogic Server.
For more on Whole Server Migration, see Whole Server Migration.
To configure JMS and JTA services for high availability, you deploy them to a migratable target, a special target that can migrate from one server in a cluster to another.
A migratable target groups migratable services that should move together. When a migratable target migrates, all services that it hosts also migrate.
A migratable target specifies a set of servers that can host the target. Only one server can host a migratable target at any one time. A migratable target can also specify:
A user-preferred host for services
An ordered list of backup servers if a preferred server fails
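As an illustration, a migratable target in config.xml might look like the following sketch. All names here (the target, servers, and cluster) are hypothetical examples, and the exact elements may vary by WebLogic version:

```xml
<!-- Hypothetical sketch of a migratable target in config.xml.
     JMSServices-MT, ms1, ms2, and cluster1 are example names only. -->
<migratable-target>
  <name>JMSServices-MT</name>
  <!-- User-preferred host for the services on this target -->
  <user-preferred-server>ms1</user-preferred-server>
  <!-- Candidate servers that can host the target; ms2 backs up ms1 -->
  <constrained-candidate-server>ms1</constrained-candidate-server>
  <constrained-candidate-server>ms2</constrained-candidate-server>
  <cluster>cluster1</cluster>
</migratable-target>
```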
After you configure a service to use a migratable target, it is independent from the server member that currently hosts it. For example, if you configure a JMS server with a deployed JMS queue to use a migratable target, the queue is independent of whether a specific server member is available. The queue is always available when any server in the cluster hosts the migratable target.
You can manually migrate pinned migratable services from one server to another in the cluster, either when a server fails or as part of scheduled maintenance. If you do not configure a migratable target in the cluster, migratable services can migrate to any server in the cluster.
See Configuring Migratable Targets for JMS and JTA High Availability to configure migratable targets.
To configure a migratable target, you specify the servers that can host the target; only one server can host a migratable target at any one time. You also set the preferred host for services and the backup servers to use if the preferred host fails.
To configure migratable targets, see the relevant topics in the Administration Console Online Help.
When you deploy a JMS service to a migratable target, you can select a user-preferred server target to host the service. You can also specify constrained candidate servers (CCS) that can host a service if the user-preferred server fails.
If a migratable target doesn't specify a CCS, you can migrate the JMS server to any available server in the cluster.
You can create separate migratable targets for JMS services so that you can always keep each service running on a different server in the cluster, if necessary. Conversely, you can configure the same selection of servers as the CCSs for both JTA and JMS, to ensure that services stay co-located on the same server in the cluster.
You can configure the file system to store JMS messages and JTA logs. For high availability, you must use a shared file system.
See Using Shared Storage for what to consider when you use a shared file system.
For more information, see WebLogic JMS Architecture and Environment in Administering JMS Resources for Oracle WebLogic Server.
If you store JMS messages and transaction logs on an NFS-mounted directory, Oracle strongly recommends that you verify server restart behavior after an abrupt machine failure. Depending on the NFS implementation, different issues can arise after a failover/restart.
To verify server restart behavior, abruptly shut down the node that hosts WebLogic servers while the servers are running.
If you configured the server for server migration, it should start automatically in failover mode after the failover period.
If you did not configure the server for server migration, you can manually restart the WebLogic Server on the same host after the node completely reboots.
If WebLogic Server doesn't restart after abrupt machine failure, review the server log files to determine whether the failure is due to an I/O exception similar to the following:
<MMM dd, yyyy hh:mm:ss a z> <Error> <Store> <BEA-280061> <The persistent store "_WLS_server_1" could not be deployed:
weblogic.store.PersistentStoreException: java.io.IOException:
[Store:280021]There was an error while opening the file store file "_WLS_SERVER_1000000.DAT"
weblogic.store.PersistentStoreException: java.io.IOException:
[Store:280021]There was an error while opening the file store file "_WLS_SERVER_1000000.DAT"
        at weblogic.store.io.file.Heap.open(Heap.java:168)
        at weblogic.store.io.file.FileStoreIO.open(FileStoreIO.java:88)
        ...
java.io.IOException: Error from fcntl() for file locking, Resource temporarily unavailable, errno=11
This error occurs when the NFSv3 system doesn't release locks on file stores. WebLogic Server maintains locks on files that store JMS data and transaction logs to prevent the data corruption that can occur if you accidentally start two instances of the same server. Because the NFSv3 storage device doesn't track lock owners, NFS holds the lock indefinitely if a lock owner fails. As a result, after an abrupt machine failure followed by a restart, subsequent attempts by WebLogic Server to acquire locks may fail.
How you resolve this error depends on your NFS environment. (See the Oracle Fusion Middleware Release Notes for updates on this topic.)
For NFSv4 environments, you can set a tuning parameter on the NAS server to release locks within the approximate time required to complete server migration; you don't need to follow procedures in this section. See your storage vendor's documentation for information on locking files stored in NFS-mounted directories on the storage device, and test the results.
For NFSv3 environments, the following sections describe how to disable WebLogic file locking mechanisms for:
The default file store
A custom file store
A JMS paging file store
A diagnostics file store
NFSv3 file locking prevents the severe file corruption that occurs if more than one server instance writes to the same file store at any point in time.
If you disable NFSv3 file locking, you must implement administrative procedures and policies to ensure that only one server instance writes to a specific file store. Corruption can occur with two server instances in the same cluster or different clusters, on the same node or different nodes, or in the same domain or different domains.
Your policies could include: never copy a domain; enforce a unique naming scheme for WLS-configured objects (servers, stores); give each domain its own storage directory; ensure that no two domains have a store with the same name that references the same directory.
When you use a file store, always configure the database-based leasing option for server migration. This option enforces additional locking mechanisms using database tables and prevents automated restart of more than one instance of a particular server.
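For example, database leasing for a cluster appears in config.xml roughly as in the sketch below. The cluster and data source names are hypothetical, and you should verify the element names against your WebLogic Server version:

```xml
<!-- Hypothetical sketch: a cluster configured for database leasing.
     "cluster1" and "LeasingDS" are example names only. -->
<cluster>
  <name>cluster1</name>
  <!-- Use database tables for leasing instead of consensus leasing -->
  <migration-basis>database</migration-basis>
  <data-source-for-automatic-migration>LeasingDS</data-source-for-automatic-migration>
</cluster>
```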
If WebLogic Server doesn't restart after abrupt machine failure and server log files show the NFS system doesn't release locks on file stores, you can disable file locking.
To disable file locking for the default file store using the Administration Console:
The resulting config.xml entry looks like the following:
<server>
  <name>examplesServer</name>
  ...
  <default-file-store>
    <synchronous-write-policy>Direct-Write</synchronous-write-policy>
    <io-buffer-size>-1</io-buffer-size>
    <max-file-size>1342177280</max-file-size>
    <block-size>-1</block-size>
    <initial-size>0</initial-size>
    <file-locking-enabled>false</file-locking-enabled>
  </default-file-store>
</server>
If WebLogic Server doesn't restart after abrupt machine failure and server log files show the NFS system doesn't release locks on custom file stores, you can disable file locking.
To disable file locking for a custom file store using the Administration Console:
The resulting config.xml entry looks like the following example:
<file-store>
  <name>CustomFileStore-0</name>
  <directory>C:\custom-file-store</directory>
  <synchronous-write-policy>Direct-Write</synchronous-write-policy>
  <io-buffer-size>-1</io-buffer-size>
  <max-file-size>1342177280</max-file-size>
  <block-size>-1</block-size>
  <initial-size>0</initial-size>
  <file-locking-enabled>false</file-locking-enabled>
  <target>examplesServer</target>
</file-store>
If WebLogic Server doesn't restart after abrupt machine failure and server log files show the NFS system doesn't release locks on JMS paging file stores, you can disable file locking.
To disable file locking for a JMS paging file store using the Administration Console:
The resulting config.xml entry looks like the following example:
<jms-server>
  <name>examplesJMSServer</name>
  <target>examplesServer</target>
  <persistent-store>exampleJDBCStore</persistent-store>
  ...
  <paging-file-locking-enabled>false</paging-file-locking-enabled>
  ...
</jms-server>
If WebLogic Server doesn't restart after abrupt machine failure and server log files show the NFS system doesn't release locks on diagnostics paging file stores, you can disable file locking.
To disable file locking for a Diagnostics file store using the Administration Console:
The resulting config.xml entry looks like the following example:
<server>
  <name>examplesServer</name>
  ...
  <server-diagnostic-config>
    <diagnostic-store-dir>data/store/diagnostics</diagnostic-store-dir>
    <diagnostic-store-file-locking-enabled>false</diagnostic-store-file-locking-enabled>
    <diagnostic-data-archive-type>FileStoreArchive</diagnostic-data-archive-type>
    <data-retirement-enabled>true</data-retirement-enabled>
    <preferred-store-size-limit>100</preferred-store-size-limit>
    <store-size-check-period>1</store-size-check-period>
  </server-diagnostic-config>
</server>
For more information, see Configure JMS Servers and Persistent Stores in Oracle Fusion Middleware Administering JMS Resources for Oracle WebLogic Server.
You can change WLS JMS configuration from a file-based persistent store (default configuration) to a database persistent store.
The persistent store is a built-in storage solution for WebLogic Server subsystems and services that require persistence. For example, it can store persistent JMS messages.
The persistent store supports persistence to a file-based store or to a JDBC-accessible store in a database. For information on the persistent store, see The WebLogic Persistent Store in Administering the WebLogic Server Persistent Store.
For information on typical tasks to monitor, control, and configure WebLogic messaging components, see WebLogic Server Messaging in Administering Oracle WebLogic Server with Fusion Middleware Control.
To configure WLS JMS with database persistent stores, verify that your setup meets specific requirements.
Your setup must meet these requirements:
An Oracle Fusion Middleware installation with at least one cluster and one or more JMS servers
JMS servers that use file persistent stores, the default configuration.
You can swap JMS servers from file-based to database persistent stores.
Follow the steps in this procedure for each JMS server that you want to configure to use a database persistent store.
Create a JDBC store. See Using a JDBC Store in Oracle Fusion Middleware Administering Server Environments for Oracle WebLogic Server.
You must specify a prefix to uniquely name the database table for the JDBC store.
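For reference, a JDBC store with a unique table prefix might appear in config.xml as in this sketch. The store, prefix, data source, and target names are hypothetical examples:

```xml
<!-- Hypothetical sketch of a JDBC store entry in config.xml.
     The prefix makes the backing database table name unique per store. -->
<jdbc-store>
  <name>JMSJDBCStore-1</name>
  <prefix-name>JMS1</prefix-name>
  <data-source>JMSDataSource</data-source>
  <target>ms1</target>
</jdbc-store>
```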
Associate the JDBC store with the JMS server:
In the WebLogic Server Administration Console, go to Services->Messaging->JMS Servers.
Verify that there are no pending messages in this server. In the Control tab, stop production and insertion of messages for all destinations and wait for any remaining messages to drain.
Select the General Configuration tab. Under Persistent Store, select the new JDBC store then click Save.
The JMS server starts using the database persistent store.
After you confirm that your setup has a standard Oracle Fusion Middleware installation, you can configure JDBC Transaction Logs (TLOG) Stores.
You must have a standard Oracle Fusion Middleware installation before you configure a JDBC Transaction Logs (TLOG) Store.
After installation, the TLOG store is configured in the file system. In some instances, Oracle recommends that you store TLOGs in the database. To configure TLOGs to be stored in a database, see Using a JDBC TLOG Store in Administering the WebLogic Server Persistent Store.
There are a few guidelines to follow when you configure JDBC TLOG Stores for Managed Servers in a cluster.
When you configure JDBC TLOG Stores:
You must repeat the procedure for each Managed Server in the cluster.
Use the Managed Server name as a prefix to create a unique TLOG store name for each Managed Server.
Verify that the data source that you created for the persistent store targets the cluster for a high availability setup.
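As a sketch, a per-server JDBC TLOG store in config.xml might look like the following. The server, data source, and prefix names are hypothetical, and the exact elements may vary by WebLogic Server version:

```xml
<!-- Hypothetical sketch: a JDBC TLOG store for Managed Server ms1.
     The prefix embeds the server name so each TLOG table is unique. -->
<server>
  <name>ms1</name>
  <transaction-log-jdbc-store>
    <data-source>TLOGDataSource</data-source>
    <prefix-name>TLOG_ms1_</prefix-name>
  </transaction-log-jdbc-store>
</server>
```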
When you finish the configuration, TLOGs are directed to the configured database-based persistent store.
When you add a new Managed Server to a cluster by scaling up or scaling out, you must also create the corresponding JDBC TLOG Store for the new Managed Server.