1 The WebLogic Persistent Store

This chapter explains how to configure and monitor the WebLogic Server persistent store, which provides a built-in, high-performance storage solution for WebLogic Server subsystems and services that require persistence. It also describes how to configure high availability for JMS service artifacts that use persistent stores.

What is a Persistent Store

The persistent store provides a built-in, high-performance storage solution for WebLogic Server subsystems and services that require persistence. For example, it can store persistent JMS messages or temporarily store messages sent using the Store-and-Forward feature. The persistent store supports persistence to a file-based store or to a JDBC-accessible store in a database.

Table 1-1 defines many of the WebLogic services and subsystems that can create connections to the persistent store. Each subsystem that uses the persistent store specifies a unique connection ID that identifies that subsystem.

Table 1-1 Persistent Store Users

Diagnostic Service
  What it stores: Log records, data events, and harvested metrics.
  More information: Understanding WLDF Configuration in Configuring and Using the Diagnostics Framework for Oracle WebLogic Server.

JMS Messages
  What it stores: Persistent messages and durable subscribers.
  More information: Understanding the Messaging Models in Developing JMS Applications for Oracle WebLogic Server.

JMS Paging Store
  What it stores: One per JMS server. Paged persistent and non-persistent messages.
  More information: Main Steps for Configuring Basic JMS System Resources in Administering JMS Resources for Oracle WebLogic Server.

JTA Transaction Log (TLOG)
  What it stores: Information about committed transactions coordinated by the server that may not have been completed. TLOGs can be stored in the default persistent store or a JDBC TLOG store.

Path Service
  What it stores: The mapping of a group of messages to a messaging resource.
  More information: Using the WebLogic Path Service in Administering JMS Resources for Oracle WebLogic Server.

Store-and-Forward (SAF) Service Agents
  What it stores: Messages for a sending SAF agent for retransmission to a receiving SAF agent.
  More information: Understanding the Store-and-Forward Service in Administering the Store-and-Forward Service for Oracle WebLogic Server.

Web Services
  What it stores: Request and response SOAP messages from an invocation of a reliable WebLogic Web Service.
  More information: Using Reliable SOAP Messaging in Programming Advanced Features of JAX-RPC Web Services for Oracle WebLogic Server.

EJB Timer Services
  What it stores: EJB Timer objects.
  More information: Understanding Enterprise JavaBeans in Developing Enterprise JavaBeans, Version 2.1, for Oracle WebLogic Server.

For information about monitoring the connections that these subsystems and services open to a store, see Monitoring Store Connections.

Features of the Persistent Store

The key features of the persistent store include:

  • A default file store for each server instance that requires no configuration.

  • Default and custom stores can be shared by multiple subsystems, as long as those subsystems are all targeted to the same server instance, cluster, or migratable target.

  • An optional JDBC TLOG store that contains information about committed transactions, coordinated by the server, that may not have been completed. You can choose to persist TLOG information in either the default store or a JDBC TLOG store, depending on your application needs. See Using a JDBC TLog Store.

  • High-performance throughput and transactional support.

  • Modifiable parameters that let you create customized file stores and JDBC stores. (A configuration sketch follows this list.)

  • Monitoring capabilities for persistent store statistics and open store connections.

  • In a clustered environment, the JDBC TLOG store and custom stores can be migrated from an unhealthy server to a backup server, either at the whole-server level or at the service level.

  • When targeted to a cluster, the high availability parameters of the persistent store control the distribution and high availability behavior of JMS services, and eliminate the need to configure Migratable Targets. See Simplified JMS Cluster and High Availability Configuration in Administering JMS Resources for Oracle WebLogic Server.
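The following WLST sketch illustrates one way to create a custom file store and target it to a server instance. It is a minimal, hedged example: the connection details, store name, directory, and server name (MyFileStore, /u01/stores/MyFileStore, myserver) are placeholders, not values from this guide.

    # Minimal WLST sketch (placeholder names and paths) for creating a custom file store.
    connect('weblogic', 'password', 't3://adminhost:7001')   # substitute real admin credentials and URL
    edit()
    startEdit()

    # Create the custom file store and point it at a directory.
    store = cmo.createFileStore('MyFileStore')
    store.setDirectory('/u01/stores/MyFileStore')

    # Target the store to a server instance; a cluster or migratable target can be used instead.
    store.addTarget(getMBean('/Servers/myserver'))

    save()
    activate()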

High-Performance Throughput and Transactional Support

Throughput is the main performance goal of the persistent store. Multiple subsystems can share the same default or custom store, as long as they are all targeted to the same server instance, cluster, or migratable target.

Note:

  • The JDBC TLOG store is used only to persist information about committed transactions, coordinated by the server, that may not have been completed. It cannot be shared by other subsystems.

  • The JDBC TLOG store does not allow HA configuration settings.

Sharing a store is a performance advantage because the persistent store is treated as a single resource by the transaction manager for a particular transaction, even if the transaction involves multiple services that use the same store. For example, if the TLOG, JMS, and EJB timers share a file store, and a JMS message and an EJB timer are created in a single transaction, the transaction is one-phase and incurs a single resource write, rather than two-phase, which incurs four resource writes (two on each resource) plus a transaction entry write (on the transaction log).

Both a file store and a JDBC store can survive a process crash or hardware power failure without losing any committed updates. Uncommitted updates may be retained or lost, but in no case will a transaction be left partially complete after a crash.
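As an illustration of store sharing, the following hedged WLST fragment points a JMS server at an existing custom file store; MyFileStore, MyJMSServer, and myserver are placeholder names, and any other service targeted to the same server instance can reference the same store.

    # Assumes an edit session is already started and that MyFileStore exists (placeholder names).
    store = getMBean('/FileStores/MyFileStore')

    # Create a JMS server and have it persist its messages in the shared custom store.
    jmsServer = cmo.createJMSServer('MyJMSServer')
    jmsServer.setPersistentStore(store)
    jmsServer.addTarget(getMBean('/Servers/myserver'))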

Comparing File Stores and JDBC-accessible Stores

The following are some similarities and differences between file stores and JDBC-accessible stores:

  • The default persistent store can only be a file store. Therefore, a JDBC store cannot be used as a default persistent store.

  • Both have the same transaction semantics and guarantees. As with JDBC store writes, file store writes are guaranteed to be persisted to disk and are not simply left in an intermediate (that is, unsafe) cache.

  • Both have the same application interface (no difference in application code).

  • All things being equal, file stores generally offer better throughput than JDBC stores.

    Note:

    If a database is running on high-end hardware with very fast disks, and WebLogic Server is running on slower hardware or with slower disks, then you may get better performance from the JDBC store.

  • File stores are generally easier to configure and administer, and do not require that WebLogic subsystems depend on any external component.

  • File stores generate no network traffic, whereas JDBC stores generate network traffic if the database is on a different machine from WebLogic Server.

  • JDBC stores may make it easier to handle failure recovery since the JDBC interface can access the database from any machine on the same network. With the file store, the disk must be shared or migrated.

  • Dynamic Scalability: When custom logical persistent stores are configured and targeted to a cluster, by default the system automatically creates one physical store instance on each cluster member, and each instance is uniquely named for monitoring purposes. This allows the store and related JMS artifacts to scale dynamically without the need to configure them individually on each cluster member. This behavior can be changed so that the system creates only one physical instance and makes it highly available in the cluster. See Simplified JMS Cluster and High Availability Configuration in Administering JMS Resources for Oracle WebLogic Server.
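For comparison, a JDBC store is configured against an existing data source rather than a directory. The following WLST sketch is illustrative only; MyJDBCStore, MyDataSource, the prefix, and the target server are assumed names.

    # Minimal WLST sketch (placeholder names) for creating a custom JDBC store.
    edit()
    startEdit()

    jdbcStore = cmo.createJDBCStore('MyJDBCStore')

    # The store persists its records in a table reached through this data source.
    jdbcStore.setDataSource(getMBean('/JDBCSystemResources/MyDataSource'))

    # A unique prefix keeps this store's backing table distinct from other JDBC stores.
    jdbcStore.setPrefixName('store1_')

    jdbcStore.addTarget(getMBean('/Servers/myserver'))

    save()
    activate()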

High Availability For Persistent Stores

For high availability, WebLogic Server offers the following options:

Whole Server Migration

A persistent file-based store (default or custom) can be migrated along with its parent server as part of the "whole server-level" migration feature, which provides both automatic and manual migration at the server level, rather than at the service level. See Whole Server Migration in Administering Clusters for Oracle WebLogic Server. However, file-based stores must be configured on a shared disk that is available to all servers in the cluster.

Automatic Service Migration

File-based stores and JDBC-accessible stores can also be migrated as part of a "service-level" migration for JMS-related services, such as JMS servers, SAF agents, and the path service, which rely on stores to maintain data. WebLogic Server supports automatic service migration in two ways:

  • By using simplified JMS cluster configuration: This enables automatic service migration for both the store and all the JMS service artifacts that reference it. The configuration settings take effect whenever the store is targeted to a cluster. This model offers enhanced HA capabilities such as automatic failback, dynamic load balancing, and failover. See Simplified JMS Cluster and High Availability Configuration in Administering JMS Resources for Oracle WebLogic Server.

  • By using Migratable Target configuration: In this model, a migratable target serves as a grouping mechanism for related JMS services, and the entire group is hosted on only one physical server in a cluster.

Note:

For automatic service migration, use simplified JMS cluster configuration instead of the legacy migratable target model.

In both models, the related hosted services can be automatically migrated from an unhealthy hosting server to a healthy, active server with the help of the Health Monitoring subsystem. When a cluster-targeted store instance migrates, all the associated JMS service instances that reference that store instance are migrated with it.

In this release, service-level migration is controlled by targeting the store to the same cluster as the associated JMS service artifacts, with the appropriate high availability parameter settings on the store. See Simplified JMS Cluster and High Availability Configuration in Administering JMS Resources for Oracle WebLogic Server. This type of migration is supported in all cluster types (configured, dynamic, and mixed) and eliminates the need for Migratable Target configuration. It also supports automatic failback and controls the service migration of the associated JMS artifacts.

As in previous releases, you can still enable service-level migration by targeting related JMS services to a Migratable Target, which serves as a grouping of JMS-related services and is hosted on only one physical server in a cluster. In a Migratable Target based configuration, the JMS services hosted by the migratable target can also be manually migrated on demand as part of regularly scheduled server maintenance. When a migration takes place, all pinned services that are associated with the store and hosted by that server are migrated as well.

See Service Migration in Administering Clusters for Oracle WebLogic Server.

In both the cluster-targeted and migratable target based models, JMS-related services cannot use the default file store; you must configure a custom file store or JDBC store and target it to the same cluster or migratable target as the JMS server, SAF agent, or path service associated with the store.

For best practices, see Additional Requirement for High Availability File Stores.
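As a hedged illustration of the cluster-targeted model, the following WLST sketch targets a custom file store to a cluster and sets its high availability parameters. The Distribution Policy and Migration Policy settings described in this chapter are assumed here to map to the store's DistributionPolicy and MigrationPolicy attributes, and the cluster, store, and directory names are placeholders.

    # Assumed WLST sketch: cluster-targeted store with high availability parameters.
    edit()
    startEdit()

    store = cmo.createFileStore('MyHAStore')
    store.setDirectory('/shared/stores/MyHAStore')    # must be on shared storage; see Additional Requirement for High Availability File Stores
    store.addTarget(getMBean('/Clusters/MyCluster'))

    # Keep a single instance of the store in the cluster and migrate it on failure.
    store.setDistributionPolicy('Singleton')
    store.setMigrationPolicy('On-Failure')

    save()
    activate()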

Service Restart In Place

Service Restart In Place provides options to automatically recover a failed custom store and its dependent services on their original running WebLogic Server. For information about Service Restart In Place for other store types and messaging bridges, see the Service Restart In Place in Combination with Migration and Additional Notes sections below.

When Restart In Place is not configured or not in effect, WebLogic Server marks failed custom stores and their dependent JMS services as unhealthy and shuts them down. For example, this can happen when a file store gets an error from its file system or when a JDBC store cannot access its database. Messages persisted before a store shutdown are unavailable for consumption until the store is either restarted or migrated to another server within the same cluster.

The way to enable Service Restart In Place on a custom store varies based on the store target.

Custom store target: Standalone server or cluster

  Option 1: Explicitly configure the store Restart In Place setting to true.

  Option 2: Set the store Migration Policy to Always or On-Failure. This causes the Restart In Place setting to default to true.

  With either option, you can fine-tune Restart In Place behavior by changing Seconds Between Restarts (default 30) and Number Of Restart Attempts (default 6) in the store configuration.

Custom store target: Migratable target

  Enable Restart In Place on the migratable target.

  You can fine-tune Restart In Place behavior by changing Seconds Between Restarts (default 30) and Number Of Restart Attempts (default 6) in the migratable target configuration. See In-Place Restarting of Failed Migratable Services in Administering Clusters for Oracle WebLogic Server.

Custom store target: Managed Server instance within a cluster

  It is not possible to enable Restart In Place on stores that are directly targeted to a server within a cluster. Oracle recommends targeting stores and their dependent services to the cluster or to a migratable target instead.
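The following WLST fragment sketches Option 1 for a store targeted to a standalone server or cluster. It assumes that the Restart In Place, Seconds Between Restarts, and Number Of Restart Attempts settings map to RestartInPlace, SecondsBetweenRestarts, and NumberOfRestartAttempts attributes on the store; MyFileStore is a placeholder.

    # Assumed WLST sketch: enable and tune Restart In Place on an existing custom store.
    edit()
    startEdit()

    store = getMBean('/FileStores/MyFileStore')
    store.setRestartInPlace(True)            # Option 1: enable explicitly
    store.setSecondsBetweenRestarts(30)      # default shown; tune as needed
    store.setNumberOfRestartAttempts(6)      # default shown; tune as needed

    save()
    activate()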

Service Restart In Place in Combination with Migration

Service Restart In Place can be configured independently of whole server migration or service migration. When Restart In Place and migration are both configured, they work as follows:

Restart In Place and Service Migration

If Restart In Place is enabled, if the store's original host JVM is still running, and if a failed store is configured to migrate from one server to another within a WebLogic cluster, then the system tries to restart the store on its original host JVM before it tries a migration. See Service Migration in Administering Clusters for Oracle WebLogic Server.

Restart In Place and Whole Server Migration

If a globally-scoped store is targeted to a standalone server, is targeted to a server within a cluster, or is targeted to a cluster and has a MigrationPolicy of Off, then the store places its host WebLogic Server instance in a failed health state after all of its restart attempts fail. The failed WebLogic Server health state allows the optional Whole Server Migration framework to detect the problem and attempt to either restart the WebLogic Server JVM or to migrate the JVM to another server. See Whole Server Migration in Administering Clusters for Oracle WebLogic Server.

Additional Notes

  • Service Restart In Place is not applicable to WebLogic default stores, the Transaction Log Store, or messaging bridges.

    • Failed default stores cause a server to enter a Failed health state, and require a Whole Server Migration or Whole Server restart to recover. Oracle recommends that you configure services to persist critical information in custom stores instead of default stores.

    • For information about how to tune a Transaction Log Store restart, see Configure the Transaction Log Store in Oracle WebLogic Server Administration Console Online Help.

    • Messaging Bridges ignore Restart In Place settings. Instead, they automatically handle failures by periodically retrying when they fail to connect to their source or target destinations.

  • Custom JDBC stores have an additional internal retry mechanism that takes effect before they shut down and require the store and its dependent services to be restarted. This functionality is helpful for silent recovery from brief database outages. See Configuring JDBC Store Reconnect Retry.

High Availability Storage Solutions

If you have applications that need access to persistent stores that reside on remote machines after the migration of a JMS server or JTA transaction log, then you should implement one of the following highly-available storage solutions:

  • File-based stores (default or custom)—Implement a hardware solution, such as a dual-ported SCSI disk or Storage Area Network (SAN) to make a file store available from shareable disks or remote machines.

    Note:

    • Persistent file stores that may migrate to a different JVM or machine must be explicitly configured to reference a shared directory. See Additional Requirement for High Availability File Stores.

    • If a file store is disconnected and then reconnected, its host server instance must be restarted to successfully continue sending and receiving persistent JMS messages. For example, if the file system containing a file store is unmounted and then remounted for some reason, attempts to send persistent JMS messages generate JMS exceptions until the host server is restarted.

  • JDBC-accessible stores—Configure a JDBC store or JDBC TLOG store and use JDBC to access this store, which can be on yet another server. Applications can then take advantage of any high-availability or failover solutions offered by your database vendor. In addition, JDBC stores support GridLink data sources and multi data sources, which provide failover between nodes of a highly available database system, such as Oracle Real Application Clusters (Oracle RAC).

  • Any persistent store—Use high-availability clustering software which provides an integrated, out-of-the-box solution for WebLogic Server-based applications.

Limitations and Considerations of the Persistent Store

The following limitations apply to the persistent store:

  • A persistent file store should not be opened simultaneously by two server instances; otherwise, there is no guarantee that the data in the file will not be corrupted. If possible, the persistent store will attempt to return an error in this case, but it will not be possible to detect this condition in every case. It is the responsibility of the administrator to ensure that the persistent store is being used in an environment in which multiple servers will not try to access the same store at the same time. (Two file stores are considered the "same store" if they have the same name and the same directory.)

  • Two JDBC stores must not share the same database table, because this will result in data corruption. A JDBC store will normally prevent this from happening by detecting if a backing table has already been opened by another instance, but it is not possible to detect this condition in every case. It is the responsibility of the administrator to ensure that the persistent store is being used in an environment in which multiple servers will not try to access the same store at the same time. (Two JDBC stores can reference the same table if they have the same table name prefix and database schema.)

  • A persistent store may not survive arbitrary corruption. If the disk file is overwritten with arbitrary data, then the results are undefined. The store may return inconsistent data in this case, or even fail to recover at all.

  • A file store may return exceptions when its disk is full, but it resumes normal operation (and stops throwing exceptions) once disk space is made available again. The data in the persistent store remains intact, as described in the previous points.

  • When using MySQL as the backing database for a JDBC store, Oracle recommends using the InnoDB engine because it provides safe writes. If the MyISAM engine is used, data may be lost in some cases.

Additional Requirement for High Availability File Stores

Custom and default file stores that are configured for high availability via service migration or whole server migration must explicitly configure a directory on a central location on a shared disk. This ensures that the same directory and files are available to all servers and machines that may host a store, and is required to ensure that a store can recover its data after it migrates.

This applies to default and custom file store locations; it does not apply to cache or paging directories, because those do not need to be highly available and should be located on a local drive for performance reasons.

See File Locations.

See Migratable Target and Simplified JMS Cluster and High Availability Configuration in Administering JMS Resources for Oracle WebLogic Server.
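As a minimal sketch of this requirement, assuming an NFS or SAN mount at /shared/clusterA that resolves to the same storage on every machine in the cluster, the store's Directory attribute would point into that shared location (all names below are placeholders):

    # Assumed WLST fragment, run within an edit session: point an HA file store at shared storage.
    store = getMBean('/FileStores/MyHAStore')
    store.setDirectory('/shared/clusterA/stores/MyHAStore')   # the same path must be valid on every cluster member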

File Locations

Persistent stores create a number of files in the file system for different purposes. Among them are file store data files, file store cache files (for file stores with a DirectWriteWithCache synchronous write policy), and JMS server and SAF agent paging files.

Table 1-2 describes the location of various files used by the file store system at the domain level.

Table 1-2 File Locations

default
  Store path not configured: <domainRoot>/servers/<serverName>/data/store/default
  Relative store path: <domainRoot>/<relPath>
  Absolute store path: <absPath>
  File name: _WLS_<serverName>NNNNNN.DAT

custom file
  Store path not configured: <domainRoot>/servers/<serverName>/data/store/<storeName>
  Relative store path: <domainRoot>/<relPath>
  Absolute store path: <absPath>
  File name: <storeName>NNNNNN.DAT

cache
  Store path not configured: ${java.io.tmpdir}/WLStoreCache/${domainName}/<storeUuid>
  Relative store path: <domainRoot>/<relPath>
  Absolute store path: <absPath>
  File name: <storeName>NNNNNN.CACHE

paging
  Store path not configured: <domainRoot>/servers/<serverName>/tmp
  Relative store path: <domainRoot>/<relPath>
  Absolute store path: <absPath>
  File names: <jmsServerName>NNNNNN.TMP and <safAgentName>NNNNNN.TMP

Table 1-3 shows how each of these store types configures its directory location.

Table 1-3 Store Type Directory Configuration

default: The directory configured on a WebLogic Server default store. See Using the Default Persistent Store.

custom file: The directory configured on a custom file store. See Using Custom File Stores.

cache: The cache directory configured on a custom or default file store that has a DirectWriteWithCache synchronous write policy. See Tuning the WebLogic Persistent Store in Tuning Performance of Oracle WebLogic Server.

paging: The paging directory configured on a SAF agent or JMS server. See Paging Out Messages To Free Up Memory in Tuning Performance of Oracle WebLogic Server.
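To illustrate the cache and paging entries above, the following hedged WLST fragment places a store's cache directory and a JMS server's paging directory on a fast local disk; /u01/local is an assumed local path, and the store and JMS server names are placeholders.

    # Assumed WLST fragment, run within an edit session: keep cache and paging files on local disk.
    store = getMBean('/FileStores/MyFileStore')
    store.setSynchronousWritePolicy('Direct-Write-With-Cache')
    store.setCacheDirectory('/u01/local/wlstorecache')

    jmsServer = getMBean('/JMSServers/MyJMSServer')
    jmsServer.setPagingDirectory('/u01/local/jmspaging')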