Each server instance, including the Administration Server, has a default persistent store that requires no configuration. In addition to the default file store, you can also configure a custom file store or JDBC store, a JDBC TLOG store, and a file-based paging store.
Each server instance, including the Administration Server, has a default persistent store that requires no configuration. The default store is a file-based store that maintains its data in a group of files in a server instance's data\store\default directory. A directory for the default store is created automatically if one does not already exist. The default store is available to subsystems that do not require explicit selection of a particular store and that function best by using the system's default storage mechanism. For example, a JMS server with no persistent store configured uses the default store of its Managed Server and supports persistent messaging. For high availability, it is a best practice to configure custom file or JDBC stores instead of relying on the default store. See:
In addition to using the default file store, you can also configure a file store or JDBC store to suit your specific needs. A custom file store, like the default file store, maintains its data in a group of files in a directory. However, you may want to create a custom file store so that the store's data is persisted to a particular storage device. The directory you configure for a file store must be accessible to the server instance on which the file store is located.
A JDBC store can be configured when a relational database is used for storage. A JDBC store enables you to store persistent messages in a standard JDBC-capable database, which is accessed through a designated JDBC data source. The data is stored in the JDBC store's database table, which has a logical name of WLStore. It is up to the database administrator to configure the database for high availability and performance. See:
When to Use a Custom Persistent Store in Administering the WebLogic Persistent Store.
Comparing File Stores and JDBC Stores in Administering the WebLogic Persistent Store.
Creating a Custom (User-Defined) File Store in Administering the WebLogic Persistent Store.
Creating a JDBC Store in Administering the WebLogic Persistent Store.
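To make the two custom store types concrete, the following is a hedged sketch of what the corresponding config.xml entries might look like. The store names, directory, data source name, prefix, and target are placeholder values, and the element set mirrors the custom file store example shown later in this chapter; always create stores through the WebLogic Server Administration Console or WLST rather than hand-editing config.xml.

```xml
<!-- A custom file store persisted to a specific storage device -->
<file-store>
  <name>ExampleFileStore</name>
  <directory>/shared/stores/example</directory>
  <target>examplesServer</target>
</file-store>

<!-- A JDBC store backed by a designated JDBC data source;
     its database table has the logical name WLStore -->
<jdbc-store>
  <name>ExampleJDBCStore</name>
  <data-source>ExampleDataSource</data-source>
  <prefix-name>EXAMPLE_</prefix-name>
  <target>examplesServer</target>
</jdbc-store>
```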
You can configure a JDBC TLOG store to persist transaction logs to a database, which allows you to leverage replication and HA characteristics of the underlying database, simplify disaster recovery, and improve Transaction Recovery service migration. See Using a JDBC TLog Store in Administering the WebLogic Persistent Store.
Each JMS server implicitly creates a file-based paging store. When the WebLogic Server JVM runs low on memory, this store is used to page both non-persistent and persistent messages. Depending on the application, paging stores may generate heavy disk activity.
You can optionally change the directory location and the threshold settings at which paging begins. You can improve performance by locating paging store directories on a local file system, preferably in a temporary directory. Paging store files do not need to be backed up, replicated, or located in a universally accessible location, because they are automatically repopulated each time a JMS server restarts. See JMS Server Behavior in WebLogic Server 9.x and Later in Administering JMS Resources for Oracle WebLogic Server.
Paged persistent messages are potentially stored physically in two different places:
Always in a recoverable default or custom store.
Potentially in a paging directory.
You can often improve paging performance for JMS messages (persistent or non-persistent) by configuring JMS server paging directories to reference a directory on a locally mounted enterprise-class flash storage device. This can be significantly faster than other storage technologies.
Most flash storage devices are a single point of failure and are typically accessible only as a local device. They are suitable for JMS server paging stores, which do not recover data after a failure and automatically reconstruct themselves as needed.
In most cases, flash storage devices are not suitable for custom or default stores, which typically contain data that must be safely recoverable. The configured Directory attribute of a default or custom store should not normally reference a directory on a single-point-of-failure device.
Use the following steps to use a Flash storage device to page JMS messages:
Set the Message Paging Directory attribute to the path of your flash storage device. See Specifying a Message Paging Directory.
Optionally, tune the Message Buffer Size attribute, which controls when paging becomes active. You may be able to use lower threshold values, as faster I/O operations provide improved load absorption. See Tuning the Message Buffer Size Option.
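The steps above can be sketched as a config.xml fragment. This assumes the standard JMS server configuration elements that correspond to the Paging Directory and Message Buffer Size attributes; the JMS server name, directory path, and buffer value are placeholders:

```xml
<jms-server>
  <name>examplesJMSServer</name>
  <target>examplesServer</target>
  <!-- paging directory on a locally mounted flash device -->
  <paging-directory>/flash/paging</paging-directory>
  <!-- threshold (in bytes) at which paging becomes active;
       faster devices may tolerate a lower threshold -->
  <message-buffer-size>100000</message-buffer-size>
</jms-server>
```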
The Diagnostics store is a file store that always implicitly uses the Disabled synchronous write policy. It is dedicated to storing WebLogic Server diagnostics information, and one diagnostic store is configured per WebLogic Server instance. See Configuring Diagnostic Archives in Configuring and Using the Diagnostics Framework for Oracle WebLogic Server.
Learn the best practices for using WebLogic persistent stores.
For subsystems that share the same server instance, share one store between multiple subsystems rather than using a store per subsystem. Sharing a store is more efficient for the following reasons:
A single store batches concurrent requests into single I/Os which reduces overall disk usage.
Transactions in which only one resource participates are lightweight one-phase transactions. Conversely, transactions in which multiple stores participate become heavier-weight two-phase transactions.
For example, configure all SAF agents and JMS servers that run on the same server instance so that they share the same store.
Add a new store only when the existing stores no longer scale.
Review information on tuning JDBC stores.
By default, a WebLogic JDBC store instance obtains two JDBC connections from its data source and caches these connections for its entire lifetime. The JDBC store can be tuned to retry more often on a connection failure, and the data source should be tuned to test connections. See Using a JDBC Store in Administering the WebLogic Persistent Store.
Under heavy JDBC store I/O loads, you can improve performance by configuring a JDBC store to use multiple JDBC connections to concurrently process I/O operations. See Enabling I/O Multithreading for JDBC Stores in Administering the WebLogic Persistent Store.
The location of the JDBC store DDL that is used to initialize empty stores is now configurable. This simplifies the use of custom DDL for database table creation, which is sometimes used for database specific performance tuning. See Create JDBC stores in Oracle WebLogic Server Administration Console Online Help and Using the WebLogic Persistent Store in Administering the WebLogic Persistent Store.
Learn about tuning file stores.
The following sections provide general information on tuning file stores:
Take care when configuring file store directory locations.
Paging stores should reference a location on a local disk for best performance (paging stores are not reloaded after a failure and do not need to be on highly available storage).
Custom or default file stores that may migrate to a different machine or JVM must be configured to reference a directory that is in a centrally accessible shared location.
See High Availability Best Practices in Administering JMS Resources for Oracle WebLogic Server.
See File Locations in Administering the WebLogic Persistent Store.
For basic (non-RAID) disk hardware, consider dedicating one disk per file store. A store can operate up to four to five times faster if it does not have to compete with any other store on the disk. Remember to consider the existence of the default file store in addition to each configured store and a JMS paging store for each JMS server.
For custom and default file stores, tune the Synchronous Write Policy.
There are three transactionally safe synchronous write policies: Direct-Write-With-Cache generally has the best performance of these policies, Cache-Flush generally has the lowest performance, and Direct-Write is the default. Unlike the other policies, Direct-Write-With-Cache creates cache files in addition to primary files.
The Disabled synchronous write policy is transactionally unsafe. The Disabled write policy can dramatically improve performance, especially at low client loads. However, it is unsafe because writes become asynchronous and data can be lost in the event of an operating system or power failure.
See Guidelines for Configuring a Synchronous Write Policy in Administering the WebLogic Persistent Store.
Certain older versions of Microsoft Windows may incorrectly report storage device synchronous write completion if the Windows default Write Cache Enabled setting is used. This violates the transactional semantics of transactional products (not specific to Oracle), including file stores configured with a Direct-Write (default) or Direct-Write-With-Cache policy, because a system crash or power failure can lead to a loss or duplication of records/messages. One visible symptom of this problem is persistent message/transaction throughput that exceeds the physical capabilities of your storage device. You can address the problem by applying a Microsoft-supplied patch, by disabling the Windows Write Cache Enabled setting, or by using a power-protected storage device.
When performing head-to-head vendor comparisons, make sure all the write policies for the persistent store are equivalent. Some non-WebLogic vendors default to the equivalent of the transactionally unsafe Disabled policy.
Depending on the synchronous write policy, custom and default stores have a variety of additional tunable attributes that may improve performance. These include MaxFileSize. See JMSFileStoreMBean in the MBean Reference for Oracle WebLogic Server.
The JMSFileStoreMBean is deprecated, but its individual attributes apply to the non-deprecated beans for custom and default file stores.
If disk performance continues to be a bottleneck, consider purchasing disk or RAID controller hardware that has a built-in write-back cache. These caches significantly improve performance by temporarily storing persistent data in volatile memory. To ensure transactionally safe write-back caches, they must be protected against power outages, host machine failure, and operating system failure. Typically, such protection is provided by a battery-backed write-back cache.
The Direct-Write-With-Cache synchronous write policy is commonly the highest-performance option that still provides transactionally safe disk writes. It is typically not as high-performing as the Disabled synchronous write policy, but the Disabled policy is not a safe option for production systems unless you have some means to prevent loss of buffered writes during a system failure.
Direct-Write-With-Cache file stores write synchronously to a primary set of files in the location defined by the Directory attribute of the file store configuration. They also write asynchronously to a corresponding temporary cache file in the location defined by the CacheDirectory attribute of the file store configuration. The cache directory and the primary files serve different purposes and require different locations: in many cases, primary files should be kept on remote storage for high availability, whereas cache files are strictly for performance, not for high availability, and can be stored locally.
When the Direct-Write-With-Cache synchronous write policy is selected, there are several additional tuning options that you should consider:
The CacheDirectory attribute. For performance reasons, the cache directory should be located on a local file system. It is placed in the operating system temp directory by default.
The IOBufferSize and related buffer attributes. These tune native memory usage of the file store.
The InitialSize and MaxFileSize tuning attributes. These tune the initial size of a store and the maximum size of a particular file in the store, respectively.
The BlockSize attribute. See Tuning the File Store Block Size.
For more information on individual tuning parameters, see the JMSFileStoreMBean in the MBean Reference for Oracle WebLogic Server.
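As an illustration, a hypothetical config.xml fragment for a store using this policy might look like the following. The store name and both paths are placeholder values; the element names are assumed to mirror the Directory and CacheDirectory attributes discussed above:

```xml
<file-store>
  <name>CachedFileStore</name>
  <!-- primary files on shared storage for high availability -->
  <directory>/shared/stores/cached</directory>
  <synchronous-write-policy>Direct-Write-With-Cache</synchronous-write-policy>
  <!-- cache files on a fast local file system; strictly for
       performance, safe to lose on failure -->
  <cache-directory>/tmp/store-cache</cache-directory>
  <target>examplesServer</target>
</file-store>
```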
You can gain additional I/O performance by using enterprise-class flash drives, which can be significantly faster than spinning disks for accessing data in real-time applications and allow you to free up memory for other processing tasks.
Simply update the CacheDirectory attribute with the path to your flash storage device, and ensure that the device contains sufficient free storage to accommodate a full copy of the store's primary files. See the CacheDirectory attribute in the MBean Reference for Oracle WebLogic Server.
When tuning the Direct-Write-With-Cache synchronous write policy, note that there may be additional security and file locking considerations. See Securing a Production Environment for Oracle WebLogic Server and the LockingEnabled attribute of the JMSFileStoreMBean in the MBean Reference for Oracle WebLogic Server.
The JMSFileStoreMBean is deprecated, but its individual attributes apply to the non-deprecated beans for custom and default file stores.
It is safe to delete a cache directory while the store is not running, but this may slow down the next store boot. Cache files are reused to speed up the file store boot and recovery process, but only if the store's host WebLogic Server instance was shut down cleanly before the current boot (not after a kill -9, nor after an OS/JVM crash) and there was no offline change to the primary files (such as a store administration compaction). If the existing cache files cannot be safely used at boot time, they are automatically discarded and new files are created. In addition, Warning log message 280102 is generated. After a migration or failover event, this same Warning message is generated but can be ignored.
If a Direct-Write-With-Cache file store fails to load a wlfileio native driver, the synchronous write policy automatically changes to the equivalent of AvoidDirectIO=true. To view a running custom or default file store's configured and actual synchronous write policy and driver, examine the server log for WL-280008 and WL-280009 messages.
To prevent unused cache files from consuming disk space, test and development environments may need to be modified to periodically delete cache files that are left over from temporarily created domains. In production environments, cache files are managed automatically by the file store.
The AvoidDirectIO properties described in this section are still supported in this release, but have been deprecated as of 11gR1PS2. Use the configurable Direct-Write-With-Cache synchronous write policy as an alternative to the AvoidDirectIO properties.
For file stores with the synchronous write policy of Direct-Write, you may be directed by Oracle Support or a release note to set weblogic.Server options on the command line or in the start script of the JVM that runs the store:
Globally changes all stores running in the JVM:
For a single store, where store-name is the name of the store:
For the default store, where server-name is the name of the server hosting the store:
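The three scopes above can be sketched as follows, assuming the weblogic.store system property naming convention; the store name MyFileStore and server name myserver are placeholders, and the exact property names should be confirmed against your release's documentation:

```shell
# Global: applies to all stores running in this JVM
java -Dweblogic.store.AvoidDirectIO=true weblogic.Server

# Single store: applies only to the store named MyFileStore
java -Dweblogic.store.MyFileStore.AvoidDirectIO=true weblogic.Server

# Default store for the server myserver (default store names
# are assumed to follow the _WLS_<server-name> convention)
java -Dweblogic.store._WLS_myserver.AvoidDirectIO=true weblogic.Server
```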
Note that setting AvoidDirectIO on an individual store overrides the setting of the global -Dweblogic.store.AvoidDirectIO option. For example, if you have two stores, A and B, and set the global AvoidDirectIO option along with a conflicting individual AvoidDirectIO option for store B only, then store B uses its individual setting while store A uses the global setting.
The AvoidDirectIO option may have performance implications, which often can be mitigated by using the block size setting described in Tuning the File Store Block Size.
You may want to tune the file store block size for file stores that are configured with a synchronous write policy of Direct-Write or Cache-Flush, especially when using AvoidDirectIO=true as described in Tuning the File Store Direct-Write Policy, or for systems with a hard-drive-based write-back cache where you see that performance is limited by physical storage latency.
Consider the following example:
A single WebLogic JMS producer sends persistent messages one by one.
The network overhead is known to be negligible.
The file store's disk drive has a 10,000 RPM rotational rate.
The disk drive has a battery-backed write-back cache.
The messaging rate is measured at 166 messages per second.
In this example, the low messaging rate matches the disk drive's latency (10,000 RPM / 60 seconds = 166 revolutions per second), even though a much higher rate is expected because of the battery-backed write-back cache. Tuning the store's block size to match the file system's block size could result in a significant improvement.
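The arithmetic behind the example's ceiling can be checked with a quick sketch; the 166 figure comes from truncating 10,000/60:

```python
# A 10,000 RPM drive completes one platter revolution per synchronous
# write, so the write-rate ceiling is revolutions per second: RPM / 60.
rpm = 10_000
max_sync_writes_per_second = rpm // 60  # integer floor, as in the text
print(max_sync_writes_per_second)  # prints 166
```

When the measured messaging rate sits at this ceiling, each message is paying a full rotational latency, which suggests the write-back cache is not absorbing the writes as expected.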
In some other cases, tuning the block size may result in marginal or no improvement:
The caches are observed to yield low latency (so the I/O subsystem is not a significant bottleneck).
Write-back caching is not used and performance is limited by larger disk drive latencies.
There may be a trade off between performance and file space when using higher block sizes. Multiple application records are packed into a single block only when they are written concurrently. Consequently, a large block size may cause a significant increase in store file sizes for applications that have little concurrent server activity and produce small records. In this case, one small record is stored per block and the remaining space in each block is unused. As an example, consider a Web Service Reliable Messaging (WS-RM) application with a single producer that sends small 100 byte length messages, where the application is the only active user of the store.
Oracle recommends tuning the store block size to match the block size of the file system that hosts the file store (typically 4096 for most file systems) when this yields a performance improvement. Alternately, tuning the block size to other values (such as paging and cache units) may yield performance gains. If tuning the block size does not yield a performance improvement, Oracle recommends leaving the block size at the default as this helps to minimize use of file system resources.
The BlockSize command line properties that are described in this section are still supported in 11gR1PS2, but are deprecated. Oracle recommends using the BlockSize attribute configurable on custom and default file stores instead.
To set the block size of a store, use one of the following properties on the command line or start script of the JVM that runs the store:
Globally sets the block size of all file stores that don't have pre-existing files.
Sets the block size for a specific file store that doesn't have pre-existing files.
Sets the block size for the default file store, if the store doesn't have pre-existing files:
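Assuming the same weblogic.store system property naming convention as the AvoidDirectIO options (store and server names are placeholders; confirm the exact property names for your release), the three scopes might look like:

```shell
# Global: all file stores in this JVM without pre-existing files
java -Dweblogic.store.BlockSize=4096 weblogic.Server

# Single store: only the store named MyFileStore
java -Dweblogic.store.MyFileStore.BlockSize=4096 weblogic.Server

# Default store for the server myserver (default store names
# are assumed to follow the _WLS_<server-name> convention)
java -Dweblogic.store._WLS_myserver.BlockSize=4096 weblogic.Server
```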
The value used to set the block size is an integer between 512 and 8192, which is automatically rounded down to the nearest power of 2.
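The round-down rule can be sketched as follows. This is a hypothetical helper for illustration only; WebLogic performs the rounding internally:

```python
def round_down_to_power_of_two(n: int) -> int:
    """Return the largest power of two that is <= n (requires n >= 1)."""
    p = 1
    while p * 2 <= n:
        p *= 2
    return p

# Requested block sizes in the 512..8192 range and their effective values
for requested in (512, 600, 4096, 8000, 8192):
    print(requested, "->", round_down_to_power_of_two(requested))
```

For example, a requested value of 8000 yields an effective block size of 4096, while 512, 4096, and 8192 are already powers of two and pass through unchanged.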
Note that setting BlockSize on an individual store overrides the setting of the global -Dweblogic.store.BlockSize option. For example, if you have two stores, A and B, set the global block size to 512, and set an individual block size of 8192 for store B, then store B has a block size of 8192 and store A has a block size of 512.
Setting the block size using command line properties only takes effect for file stores that have no pre-existing files. If a store has pre-existing files, the store continues to use the block size that was set when the store was first created.
You can verify a file store's current block size and synchronous write policy by viewing the server log of the server that hosts the store. Search for a "280009" store opened message.
To determine your file system's actual block size, consult your operating system documentation. For example:
Linux ext2 and ext3 file systems: run a file system information utility against the device-name and look for "Block size".
Windows NTFS: run fsutil fsinfo ntfsinfo against the drive (for example, c:) and look for "Bytes Per Cluster".
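As a sketch of the commands involved (assuming standard system utilities; dumpe2fs from e2fsprogs is one common way to read ext2/ext3 superblock information, and the device path and drive letter are placeholders):

```shell
# Linux ext2/ext3: print the file system block size of /dev/sda1
# (typically requires root privileges)
dumpe2fs -h /dev/sda1 | grep -i 'block size'

# Windows NTFS: print cluster information for drive C:
# (run from an elevated command prompt)
fsutil fsinfo ntfsinfo c: | findstr /c:"Bytes Per Cluster"
```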
If the data in a store's pre-existing files does not need to be preserved, simply shut down the host WebLogic Server instance and delete the files; this allows the block size to change when the store is restarted. If you need to preserve the data, convert a store with pre-existing files to a different block size by creating a new version of the file store with the new block size, using the compact command of the command-line store administration utility:
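A hypothetical invocation might look like the following; the directory paths are placeholders, and the exact utility syntax should be confirmed in Store Administration Using a Java Command-line:

```shell
# Launch the store administration utility, then compact the store,
# rewriting its files so they pick up the newly configured block size.
java weblogic.store.Admin
storeadmin-> compact -dir /shared/tlogs -tempdir /tmp/store-scratch
```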
See Store Administration Using a Java Command-line in Administering the WebLogic Persistent Store.
Learn about using a WebLogic persistent store with a Network File System (NFS).
NFS storage may not fully protect transactional data, as it may be configured to silently buffer synchronous write requests in volatile memory. If a file store Directory is located on an NFS mount, and the file store's Synchronous Write Policy is anything other than Disabled, check your NFS implementation and configuration to make sure that it is configured to support synchronous writes. A Disabled synchronous write policy does not perform synchronous writes, but, as a consequence, is generally not transactionally safe. You may detect undesirable buffering of synchronous write requests by observing high persistent message or transaction throughput that exceeds the physical capabilities of your storage device. On the NFS server, check the synchronous write setting of the exported NFS directory hosting your File Store. A SAN based file store, or a JDBC store, may provide an easier solution for safe centralized storage.
Oracle strongly recommends verifying the behavior of a server restart after abrupt machine failures when the JMS messages and transaction logs are stored on an NFS mounted directory. Depending on the NFS implementation, different issues can arise after a failover or restart. The behavior can be verified by abruptly shutting down the node hosting the WebLogic Server instances while they are running. If the server is configured for server migration, it should be started automatically on the failover node after the corresponding failover period. If not, a manual restart of the WebLogic Server instance on the same host (after the node has completely rebooted) can be performed.
You can configure a NFS v4 based Network Attached Storage (NAS) server to release locks within the approximate time required to complete server migration. If you tune and test your NFS v4 environment, you do not need to follow the procedures in this section. See your storage vendor's documentation for information on locking files stored in NFS-mounted directories on the storage device.
If Oracle WebLogic Server does not restart after abrupt machine failure when JMS messages and transaction logs are stored on an NFS mounted directory, the following errors may appear in the server log files:
Example 7-1 Store Restart Failure Error Message
<MMM dd, yyyy hh:mm:ss a z> <Error> <Store> <BEA-280061> <The persistent store "_WLS_server_soa1" could not be deployed:
weblogic.store.PersistentStoreException: java.io.IOException:
[Store:280021]There was an error while opening the file store file "_WLS_SERVER_SOA1000000.DAT"
at weblogic.store.io.file.Heap.open(Heap.java:168)
at weblogic.store.io.file.FileStoreIO.open(FileStoreIO.java:88)
...
java.io.IOException: Error from fcntl() for file locking, Resource temporarily unavailable, errno=11
This error is due to the NFS system not releasing the lock on the stores. WebLogic Server maintains locks on files used for storing JMS data and transaction logs to protect from potential data corruption if two instances of the same WebLogic Server are accidentally started. The NFS storage device does not become aware of machine failure in a timely manner and the locks are not released by the storage device. As a result, after abrupt machine failure, followed by a restart, any subsequent attempt by WebLogic Server to acquire locks on the previously locked files may fail. Refer to your storage vendor documentation for additional information on the locking of files stored in NFS mounted directories on the storage device. If it is not reasonably possible to tune locking behavior in your NFS environment, use one of the following two solutions to unlock the logs and data files.
Manually unlock the logs and JMS data files and start the servers by creating a copy of the locked persistent store file and using the copy for subsequent operations. To create a copy of the locked persistent store file, rename the file, and then copy it back to its original name. The following sample steps assume that transaction logs are stored in the /shared/tlogs directory and JMS data is stored in the /shared/jms directory.
Example 7-2 Sample Steps to Remove NFS Locks
cd /shared/tlogs
mv _WLS_SOA_SERVER1000000.DAT _WLS_SOA_SERVER1000000.DAT.old
cp _WLS_SOA_SERVER1000000.DAT.old _WLS_SOA_SERVER1000000.DAT

cd /shared/jms
mv SOAJMSFILESTORE_AUTO_1000000.DAT SOAJMSFILESTORE_AUTO_1000000.DAT.old
cp SOAJMSFILESTORE_AUTO_1000000.DAT.old SOAJMSFILESTORE_AUTO_1000000.DAT
mv UMSJMSFILESTORE_AUTO_1000000.DAT UMSJMSFILESTORE_AUTO_1000000.DAT.old
cp UMSJMSFILESTORE_AUTO_1000000.DAT.old UMSJMSFILESTORE_AUTO_1000000.DAT
With this solution, the WebLogic file locking mechanism continues to provide protection from accidental data corruption if multiple instances of the same servers are accidentally started. However, the servers must be restarted manually after abrupt machine failures. Note that file stores create multiple consecutively numbered .DAT files when they are used to store large amounts of data; all of these files may need to be copied and renamed when this occurs.
With this solution, since WebLogic Server file locking is disabled, automated server restarts and failovers should succeed. Be very cautious, however, when using this option. The WebLogic file locking feature is designed to help prevent severe file corruptions that can occur in undesired concurrency scenarios. If the server using the file store is configured for server migration, always configure the database-based leasing option. This enforces additional locking mechanisms using database tables and prevents automated restart of more than one instance of the same WebLogic Server. Additional procedural precautions must be implemented to avoid any human error and to ensure that one and only one instance of a server is manually started at any given point in time. Similarly, extra precautions must be taken to ensure that no two domains have a store with the same name that references the same directory.
You can also use the WebLogic Server Administration Console to disable WebLogic file locking mechanisms for the default file store, a custom file store, a JMS paging file store, and a Diagnostics file store, as described in the following sections:
Follow these steps to disable file locking for the default file store using the WebLogic Server Administration Console:
The resulting config.xml entry looks like:
Example 7-3 Example config.xml Entry for Disabling File Locking for a Default File Store
<server>
  <name>examplesServer</name>
  ...
  <default-file-store>
    <synchronous-write-policy>Direct-Write</synchronous-write-policy>
    <io-buffer-size>-1</io-buffer-size>
    <max-file-size>1342177280</max-file-size>
    <block-size>-1</block-size>
    <initial-size>0</initial-size>
    <file-locking-enabled>false</file-locking-enabled>
  </default-file-store>
</server>
Use the following steps to disable file locking for a custom file store using the WebLogic Server Administration Console:
The resulting config.xml entry looks like:
Example 7-4 Example config.xml Entry for Disabling File Locking for a Custom File Store
<file-store>
  <name>CustomFileStore-0</name>
  <directory>C:\custom-file-store</directory>
  <synchronous-write-policy>Direct-Write</synchronous-write-policy>
  <io-buffer-size>-1</io-buffer-size>
  <max-file-size>1342177280</max-file-size>
  <block-size>-1</block-size>
  <initial-size>0</initial-size>
  <file-locking-enabled>false</file-locking-enabled>
  <target>examplesServer</target>
</file-store>
Use the following steps to disable file locking for a JMS paging file store using the WebLogic Server Administration Console:
The resulting config.xml entry looks like:
Example 7-5 Example config.xml Entry for Disabling File Locking for a JMS Paging File Store
<jms-server>
  <name>examplesJMSServer</name>
  <target>examplesServer</target>
  <persistent-store>exampleJDBCStore</persistent-store>
  ...
  <paging-file-locking-enabled>false</paging-file-locking-enabled>
  ...
</jms-server>
Use the following steps to disable file locking for a Diagnostics file store using the WebLogic Server Administration Console:
The resulting config.xml entry looks like:
Example 7-6 Example config.xml Entry for Disabling File Locking for a Diagnostics File Store
<server>
  <name>examplesServer</name>
  ...
  <server-diagnostic-config>
    <diagnostic-store-dir>data/store/diagnostics</diagnostic-store-dir>
    <diagnostic-store-file-locking-enabled>false</diagnostic-store-file-locking-enabled>
    <diagnostic-data-archive-type>FileStoreArchive</diagnostic-data-archive-type>
    <data-retirement-enabled>true</data-retirement-enabled>
    <preferred-store-size-limit>100</preferred-store-size-limit>
    <store-size-check-period>1</store-size-check-period>
  </server-diagnostic-config>
</server>