The TopLink cache is an in-memory repository that stores recently read or written objects based on class and primary key values. TopLink uses the cache to do the following:
Improve performance by holding recently read or written objects and accessing them in-memory to minimize database access.
Manage locking and isolation level.
Manage object identity.
This chapter includes the following sections:
TopLink uses two types of cache: the session cache maintains objects retrieved from and written to the data source; and the unit of work cache holds objects while they participate in transactions. When a unit of work successfully commits to the data source, TopLink updates the session cache accordingly.
Note:
You can also configure a query to cache its results (see Section 111.13.1, "How to Cache Results in a ReadQuery").

As Figure 102-1 shows, the session cache and the unit of work cache work together with the data source connection to manage objects in a TopLink application. The object life cycle relies on these three mechanisms.
Figure 102-1 Object Life Cycle and the TopLink Caches
The session cache is a shared cache that services clients attached to a given session. When you read objects from or write objects to the data source using a client session, TopLink saves a copy of the objects in the parent server session's cache and makes them accessible to all other processes in the session.
TopLink adds objects to the session cache from the following:
The data store, when TopLink executes a read operation
The unit of work cache, when a unit of work successfully commits a transaction
An isolated client session is a special type of client session that provides its own session cache isolated from the shared object cache of its parent server session. The isolated client session cache can be used to improve user-based security or to avoid caching highly volatile data. For more information, see Section 87.5, "Isolated Client Sessions".
This section describes concepts unique to the TopLink cache, including the following:
TopLink preserves object identity through its cache using the primary key attributes of a persistent entity. These attributes may or may not be assigned through sequencing (see Section 15.2.6, "Projects and Sequencing"). In a Java application, object identity is preserved if each object in memory is represented by one, and only one, object instance. Multiple retrievals of the same object return references to the same object instance, not multiple copies of the same object.
Maintaining object identity is extremely important when the application's object model contains circular references between objects. When two objects reference each other, you must ensure that they reference each other directly, rather than copies of each other. Object identity is also important when multiple parts of the application may be modifying the same object simultaneously.
Oracle recommends that you always maintain object identity. Disable object identity only if absolutely necessary, for example, for read-only objects (see Section 119.3, "Configuring Read-Only Descriptors").
You can configure how object identity is managed on a class-by-class basis. The ClassDescriptor
object provides the cache and identity map options described in Table 102-1.
Table 102-1 Cache and Identity Map Options
Option (Identity Map) | Caching | Guaranteed Identity | Memory Use |
---|---|---|---|
Full Identity Map | Yes | Yes | Very High |
Weak Identity Map | Yes | Yes | Low |
Soft Identity Map | Yes | Yes | High |
Soft Cache Weak Identity Map and Hard Cache Weak Identity Map | Yes | Yes | Medium-high |
No Identity Map | No | No | None |
For more information, see Section 102.2.1.6, "Guidelines for Configuring the Cache and Identity Maps".
The full identity map provides full caching and guaranteed identity: objects are never flushed from memory unless they are deleted.
It caches all objects and does not remove them. Cache size doubles whenever the maximum size is reached. This method may be memory-intensive when many objects are read. Do not use this option on batch operations.
Oracle recommends using this identity map when the data set size is small and memory is in large supply.
The weak identity map is similar to the full identity map, except that the map holds the objects by using weak references. This method allows full garbage collection and provides full caching and guaranteed identity.
The weak identity map uses less memory than the full identity map but also does not provide a durable caching strategy across client/server transactions. Objects are available for garbage collection when the application no longer references them on the server side (that is, from within the server JVM).
The soft identity map is similar to the weak identity map, except that the map uses soft references instead of weak references. This method allows full garbage collection and provides full caching and guaranteed identity.
The soft identity map allows for optimal caching of the objects, while still allowing the JVM to garbage collect the objects if memory is low.
The soft cache weak identity map and hard cache weak identity map are similar to the weak identity map, except that they maintain a subcache of the most frequently used objects. The subcache uses soft or hard references to ensure that these objects are garbage-collected only if the system is low on memory.
The soft cache weak identity map and hard cache weak identity map provide more efficient memory use. They release objects as they are garbage-collected, except for a fixed number of most recently used objects. Note that weakly cached objects might be flushed if the transaction spans multiple client-server invocations. The size of the subcache is proportional to the size of the identity map, as specified by the ClassDescriptor method setIdentityMapSize. You should set this cache size to be as large as the maximum number of objects (of the same type) referenced within a transaction (see Section 119.12, "Configuring Cache Type and Size at the Descriptor Level").
Oracle recommends using this identity map in most circumstances as a means to control memory used by the cache.
For more information, see Section 102.2.1.7, "What You May Need to Know About the Internals of Weak, Soft, and Hard Identity Maps".
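As a sketch of putting this recommendation into practice (the amendment-method shape, the Employee class, and the size of 500 are illustrative assumptions; the use...IdentityMap method names follow the TopLink naming pattern and should be verified against the ClassDescriptor API):

```java
import oracle.toplink.descriptors.ClassDescriptor;

public class EmployeeCacheAmendment {
    // Illustrative descriptor amendment: select the soft cache weak
    // identity map and size its subcache to roughly the maximum number
    // of Employee objects referenced within a single transaction.
    public static void amendDescriptor(ClassDescriptor descriptor) {
        descriptor.useSoftCacheWeakIdentityMap();
        descriptor.setIdentityMapSize(500);
    }
}
```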
The no identity map option does not preserve object identity and does not cache objects.
Oracle does not recommend using the no identity map option. Instead, review the alternatives of cache invalidation and isolated caching.
You can configure the cache at the project (Section 117.10, "Configuring Cache Type and Size at the Project Level") or descriptor (Section 119.12, "Configuring Cache Type and Size at the Descriptor Level") level.
Use the following guidelines when configuring your cache and identity map:
If objects have a long life span and object identity is important, use a SoftIdentityMap, SoftCacheWeakIdentityMap, or HardCacheWeakIdentityMap policy. For more information on when to choose one or the other, see Section 102.2.1.7, "What You May Need to Know About the Internals of Weak, Soft, and Hard Identity Maps".
If object identity is important, but caching is not, use a WeakIdentityMap
policy.
If an object has a long life span or requires frequent access, and object identity is important, use a FullIdentityMap policy.
WARNING:
Use the FullIdentityMap only if the class has a small, finite number of instances. Otherwise, a memory leak will occur.
If an object has a short life span or requires frequent access, and identity is not important, use a CacheIdentityMap
policy.
If objects are discarded immediately after being read from the database, such as in a batch operation, use a NoIdentityMap
policy. The NoIdentityMap
does not preserve object identity.
Note:
Oracle does not recommend the use of CacheIdentityMap and NoIdentityMap policies.

The WeakIdentityMap and SoftIdentityMap use JVM weak and soft references to ensure that any object referenced by the application is held in the cache. Once the application releases its reference to the object, the JVM is free to garbage-collect the objects. The timing of weak and soft reference garbage collection is determined by the JVM. In general, you can expect a weak reference to be garbage-collected at each JVM garbage collection, and a soft reference to be garbage-collected when the JVM determines that memory is low.
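This behavior can be illustrated with the plain java.lang.ref reference types (a self-contained sketch of the JVM mechanism, not TopLink code):

```java
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

// While the application holds a strong reference, both weak and soft
// references keep resolving to the object. Once the strong reference is
// dropped, the JVM may clear the weak reference at any collection, and
// the soft reference only when memory runs low.
class ReferenceDemo {
    static boolean resolvesWhileStronglyHeld() {
        Object entity = new Object();                       // strong reference
        WeakReference<Object> weak = new WeakReference<>(entity);
        SoftReference<Object> soft = new SoftReference<>(entity);
        // Guaranteed: a strongly reachable object is never cleared.
        return weak.get() == entity && soft.get() == entity;
    }
}
```

Note that the example only asserts what the JVM guarantees; when and whether a released object is actually reclaimed is entirely up to the garbage collector.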
The SoftCacheWeakIdentityMap
and HardCacheWeakIdentityMap
types of identity map contain the following two caches:
Reference cache: implemented as a LinkedList
that contains soft or hard references, respectively.
Weak cache: implemented as a HashMap
that contains weak references.
When you create a SoftCacheWeakIdentityMap
or HardCacheWeakIdentityMap
with a specified size, the reference cache LinkedList
is exactly this size. The weak cache HashMap
is initialized to 100 percent of the specified size: the weak cache will grow when more objects than the specified size are read in. Because TopLink does not control garbage collection, the JVM can reap the weakly held objects whenever it sees fit.
Because the reference cache is implemented as a LinkedList, new objects are added to the end of the list. This makes it, by nature, a fixed-size least recently used (LRU) cache: once the maximum size is reached, the object at the head of the list is removed.
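The LRU behavior of the reference cache can be sketched with a plain LinkedList (a simplified, self-contained illustration, not TopLink's actual implementation):

```java
import java.util.LinkedList;

// Fixed-size LRU reference cache sketch: new entries are appended at the
// tail; once the maximum size is exceeded, the oldest entry at the head
// is evicted (in TopLink, an evicted entry falls back to the weak cache).
class ReferenceCache<T> {
    private final int maxSize;
    private final LinkedList<T> entries = new LinkedList<>();

    ReferenceCache(int maxSize) {
        this.maxSize = maxSize;
    }

    // Returns the evicted entry, or null if the cache was not yet full.
    T add(T object) {
        entries.addLast(object);
        return (entries.size() > maxSize) ? entries.removeFirst() : null;
    }

    int size() {
        return entries.size();
    }
}
```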
The SoftCacheWeakIdentityMap
and HardCacheWeakIdentityMap
are essentially the same type of identity map, with the former being the subclass of the latter. The HardCacheWeakIdentityMap
was constructed to work around an issue with some JVMs; the SoftCacheWeakIdentityMap
inherits this feature.
If your application reaches a low system memory condition frequently enough, or if your platform's JVM treats weak and soft references the same, the objects in the reference cache may be garbage-collected so often that you will not benefit from the performance improvement provided by it. If this is the case, Oracle recommends that you use the HardCacheWeakIdentityMap
. It is identical to the SoftCacheWeakIdentityMap
except that it uses hard references in the reference cache. This guarantees that your application will benefit from the performance improvement provided by it.
When an object in a HardCacheWeakIdentityMap or SoftCacheWeakIdentityMap is pushed out of the reference cache, it is moved to the weak cache. Although it is still cached, TopLink cannot guarantee that it will remain there for any length of time because the JVM can decide to garbage-collect weak references at any time.
A query that is run against the shared session cache is known as an in-memory query. Careful configuration of in-memory querying can improve performance (see Section 108.16.2, "How to Use In-Memory Queries").
By default, a query that looks for a single object based on primary key attempts to retrieve the required object from the cache first, and searches the data source only if the object is not in the cache. All other query types search the database first, by default. You can specify whether a given query runs against the in-memory cache, the database, or both.
For more information, see Section 108.16, "Queries and the Cache".
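A hedged sketch of controlling where a single query looks (assuming the ReadObjectQuery API in the oracle.toplink.queryframework package; Employee is an illustrative placeholder class):

```java
import oracle.toplink.queryframework.ReadObjectQuery;

// Illustrative: Employee is a placeholder persistent class.
ReadObjectQuery query = new ReadObjectQuery(Employee.class);

// Check the session cache first and fall back to the database on a miss:
query.checkCacheThenDatabase();

// Or run purely in memory against the cache:
// query.checkCacheOnly();
```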
Stale data is an artifact of caching, in which an object in the cache is not the most recent version committed to the data source. To avoid stale data, implement an appropriate cache locking strategy.
By default, TopLink optimizes concurrency to minimize cache locking during read or write operations. Use the default TopLink isolation level, unless you have a very specific reason to change it. For more information on isolation levels in TopLink, see Section 102.2.7, "Cache Isolation".
Cache locking regulates when processes read or write an object. Depending on how you configure it, cache locking determines whether a process can read or write an object that is in use within another process.
A well-managed cache makes your application more efficient. There are very few cases in which you should turn the cache off entirely, because the cache reduces database access and is an important part of managing object identity.
To make the most of your cache strategy and to minimize your application's exposure to stale data, Oracle recommends the following:
Make sure you configure a locking policy so that you can prevent, or at least identify, cases in which values have already changed on an object you are modifying. Typically, this is done using optimistic locking. TopLink offers several locking policies, such as numeric version field, time-stamp version field, and some or all fields.
For more information, see Section 119.26, "Configuring Locking Policy".
If other applications can modify the data used by a particular class, use a weaker style of cache for the class. For example, the SoftCacheWeakIdentityMap
or WeakIdentityMap
minimizes the length of time the cache maintains an object whose reference has been removed.
For more information, see Section 119.12, "Configuring Cache Type and Size at the Descriptor Level".
Any query can include a flag that forces TopLink to go to the data source for the most up-to-date version of selected objects and update the cache with this information.
For more information, see the following:
Using the descriptor API, you can designate an object as invalid: when any query attempts to read an invalid object, TopLink goes to the data source for the most up-to-date version of that object and updates the cache with this information. You can manually designate an object as invalid or use a CacheInvalidationPolicy to control the conditions under which an object is designated invalid.
For more information, see Section 102.2.5, "Cache Invalidation".
If your application is primarily read-based and the changes are all being performed by the same Java application operating with multiple, distributed sessions, you may consider using the TopLink cache coordination feature. Although this will not prevent stale data, it should greatly minimize it.
For more information, see Section 102.2.6, "Cache Coordination".
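The locking recommendation above might be sketched as follows (a hedged illustration assuming ClassDescriptor offers a useVersionLocking method for a numeric version field; the VERSION field name is illustrative):

```java
import oracle.toplink.descriptors.ClassDescriptor;

public class LockingAmendment {
    // Illustrative: optimistic locking on a numeric VERSION column, so a
    // write detects that another process changed the row since it was read.
    public static void amendDescriptor(ClassDescriptor descriptor) {
        descriptor.useVersionLocking("VERSION");
    }
}
```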
Some distributed systems require only a small number of objects to be consistent across the servers in the system. Conversely, other systems require that several specific objects must always be guaranteed to be up-to-date, regardless of the cost. If you build such a system, you can explicitly refresh selected objects from the database at appropriate intervals, without incurring the full cost of distributed cache coordination.
To implement this type of strategy, do the following:
Configure a set of queries that refresh the required objects.
Establish an appropriate refresh policy.
Invoke the queries as required to refresh the objects.
When you execute a query, if the required objects are in the cache, TopLink returns the cached objects without checking the database for a more recent version. This reduces the number of objects that TopLink must build from database results, and is optimal for noncoordinated cache environments. However, this may not always be the best strategy for a coordinated cache environment.
To override this behavior, set a refresh policy that specifies that the objects from the database always take precedence over objects in the cache. This updates the cached objects with the data from the database.
You can implement this type of refresh policy on each TopLink descriptor, or just on certain queries, depending upon the nature of the application.
For more information, see the following:
Section 108.16.5, "How to Refresh the Cache"
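A sketch of a descriptor-level refresh policy using methods listed in Example 102-2 (the amendment-method shape and Employee context are illustrative assumptions):

```java
import oracle.toplink.descriptors.ClassDescriptor;

public class RefreshAmendment {
    public static void amendDescriptor(ClassDescriptor descriptor) {
        // Rows returned from the database refresh the cached copy...
        descriptor.alwaysRefreshCache();
        // ...but only overwrite it when the incoming version is newer.
        descriptor.onlyRefreshCacheIfNewerVersion();
    }
}
```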
Note:
Refreshing does not prevent phantom reads from occurring. See Section 108.16.8.3, "Refreshing Finder Results".

When you invoke a findByPrimaryKey finder, if the object exists in the cache, TopLink returns that copy. This is the default behavior, regardless of the refresh policy. To force a database query, you can configure the query to refresh by calling the refreshIdentityMapResult method on it.
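A sketch of forcing the database round trip on a query (assuming the ReadObjectQuery API; Employee is an illustrative placeholder class, and refreshIdentityMapResult is the method named above):

```java
import oracle.toplink.queryframework.ReadObjectQuery;

// Illustrative: Employee is a placeholder persistent class.
ReadObjectQuery query = new ReadObjectQuery(Employee.class);
// Go to the database even if the object is already cached, and refresh
// the cached copy with the result.
query.refreshIdentityMapResult();
```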
For more information, see the following:
By default, objects remain in the session cache until they are explicitly deleted (see Section 114.7, "Deleting Objects") or garbage collected when using a weak identity map (see Section 117.10, "Configuring Cache Type and Size at the Project Level").
Alternatively, you can configure any object with a CacheInvalidationPolicy
that lets you specify, either automatically or manually, under what circumstances a cached object is invalid: when any query attempts to read an invalid object, TopLink will go to the data source for the most up-to-date version of that object, and update the cache with this information.
You can use any of the following CacheInvalidationPolicy instances:

DailyCacheInvalidationPolicy: the object is automatically flagged as invalid at a specified time of day.

NoExpiryCacheInvalidationPolicy: the object can only be flagged as invalid by explicitly calling the oracle.toplink.sessions.IdentityMapAccessor method invalidateObject.

TimeToLiveCacheInvalidationPolicy: the object is automatically flagged as invalid after a specified time period has elapsed since the object was read.
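For example, a time-to-live policy might be attached to a descriptor like this (a hedged sketch: the constructor argument is assumed to be milliseconds, and the amendment-method shape and 10-minute value are illustrative):

```java
import oracle.toplink.descriptors.ClassDescriptor;
import oracle.toplink.descriptors.invalidation.TimeToLiveCacheInvalidationPolicy;

public class InvalidationAmendment {
    public static void amendDescriptor(ClassDescriptor descriptor) {
        // Cached instances become invalid 10 minutes after they are read.
        descriptor.setCacheInvalidationPolicy(
            new TimeToLiveCacheInvalidationPolicy(10 * 60 * 1000));
    }
}
```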
You can configure a cache invalidation policy in the following ways:
At the project level that applies to all objects (Section 117.13, "Configuring Cache Expiration at the Project Level")
At the descriptor level to override the project level configuration on a per-object basis (Section 119.16, "Configuring Cache Expiration at the Descriptor Level")
At the query level that applies to the results returned by the query (Section 111.13.2, "How to Configure Cache Expiration at the Query Level")
If you configure a query to cache results in its own internal cache (see Section 108.16.7, "How to Cache Query Results in the Query Cache"), the cache invalidation policy you configure at the query level applies to the query's internal cache in the same way it would apply to the session cache.
If you are using a coordinated cache (see Section 102.2.6, "Cache Coordination"), you can customize how TopLink communicates the fact that an object has been declared invalid. For more information, see Section 119.15, "Configuring Cache Coordination Change Propagation at the Descriptor Level".
The need to maintain up-to-date data for all applications is a key design challenge for building a distributed application. The difficulty increases as the number of servers within an environment increases. TopLink provides a distributed cache coordination feature that ensures data in distributed applications remains current.
Cache coordination reduces the number of optimistic lock exceptions encountered in a distributed architecture, and decreases the number of failed or repeated transactions in an application. However, cache coordination in no way eliminates the need for an effective locking policy. To effectively ensure working with up-to-date data, cache coordination must be used with optimistic or pessimistic locking. Oracle recommends that you use cache coordination with an optimistic locking policy (see Section 119.26, "Configuring Locking Policy").
You can use cache invalidation to improve cache coordination efficiency. For more information, see Section 102.2.5, "Cache Invalidation".
For more information, see Section 102.3, "Cache Coordination".
Isolated client sessions provide a mechanism for disabling the shared server session cache. Any classes marked as isolated only cache objects relative to the life cycle of their client session. These classes never use the shared server session cache. This is the best mechanism to prevent caching, as it is configured on a per-class basis, allowing caching for some classes and denying it for others.
For more information, see Section 87.5, "Isolated Client Sessions".
By default, TopLink optimizes concurrency to minimize cache locking during read or write operations. Use the default TopLink transaction isolation configuration unless you have a very specific reason to change it.
For more information, see Section 115.15, "Database Transaction Isolation Levels".
Tune the TopLink cache for each class to help eliminate the need for distributed cache coordination. Always tune these settings before implementing cache coordination.
For more information, see Section 12.10, "Optimizing Cache".
As Figure 102-2 shows, cache coordination is a session feature that allows multiple, possibly distributed, instances of a session to broadcast object changes among each other so that each session's cache is either kept up-to-date or notified that the cache must update an object from the data source the next time it is read.
Note:
You cannot use isolated client sessions (see Section 87.5, "Isolated Client Sessions") with cache coordination.

When sessions are distributed, that is, when an application contains multiple sessions (in the same JVM or in multiple JVMs, possibly on different servers), the sessions can participate in cache coordination as long as the servers hosting them are interconnected on the network. Coordinated cache types that require discovery services also require the servers to support User Datagram Protocol (UDP) communication and multicast configuration (for more information, see Section 102.3.2, "Coordinated Cache Architecture").
This section describes the following:
For more information, see Section 103, "Configuring a Coordinated Cache".
Cache coordination can enhance performance and reduce the likelihood of stale data for applications that have the following characteristics:
Changes are all being performed by the same Java application operating with multiple, distributed sessions
Primarily read-based
Regularly requests and updates the same objects
To maximize performance, avoid cache coordination for applications that do not have these characteristics. For more information about alternatives to cache coordination, see Section 12.10, "Optimizing Cache".
Cache coordination enhances performance mainly by avoiding data source access.
Cache coordination reduces the occurrence of stale data by increasing the likelihood that distributed caches are kept up-to-date with changes and are notified when one of the distributed caches must update an object from the data source the next time it is read.
Cache coordination reduces the number of optimistic lock exceptions encountered in a distributed architecture, and decreases the number of failed or repeated transactions in an application. However, cache coordination in no way eliminates the need for an effective locking policy. To effectively ensure working with up-to-date data, cache coordination must be used with optimistic or pessimistic locking. Oracle recommends that you use cache coordination with an optimistic locking policy (see Section 119.26, "Configuring Locking Policy").
For other options to reduce the likelihood of stale data, see Section 102.2.3, "Handling Stale Data".
TopLink provides coordinated cache implementations that perform discovery and message transport services using various technologies including the following:
Java Message Service (JMS)–See Section 102.3.3.1, "JMS Coordinated Cache"
Remote Method Invocation (RMI)–See Section 102.3.3.2, "RMI Coordinated Cache"
Common Object Request Broker Architecture (CORBA)–See Section 102.3.3.3, "CORBA Coordinated Cache"
Regardless of the type of discovery and message transport you choose to use, the following are the principal objects that provide coordinated cache functionality:
When you enable a session for change propagation, the session provides discovery and message transport services using either JMS, RMI, CORBA, or Oracle Application Server Cluster.
Discovery services ensure that sessions announce themselves to other sessions participating in cache coordination. Discovery services use UDP communication and multicast configuration to monitor sessions as they join and leave the coordinated cache. All coordinated cache types (except JMS) require discovery services.
Message transport services allow the session to broadcast object change notifications to other sessions participating in cache coordination when a unit of work from this session commits a change.
You can configure how object changes are broadcast on a descriptor-by-descriptor basis. This lets you fine-tune the type of notification to make.
For example, for an object with few attributes, you can configure its descriptor to send object changes. For an object with many attributes, it may be more efficient to configure its descriptor so that the object is flagged as invalid (so that other sessions will know to update the object from the data source the next time it is read).
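As a sketch of this per-descriptor choice (assuming ClassDescriptor exposes a setCacheSynchronizationType method with constants along these lines; verify the names against the TopLink API):

```java
import oracle.toplink.descriptors.ClassDescriptor;

public class PropagationAmendment {
    public static void amendDescriptor(ClassDescriptor descriptor) {
        // Few attributes: broadcast the actual object changes...
        descriptor.setCacheSynchronizationType(
            ClassDescriptor.SEND_OBJECT_CHANGES);
        // ...or, for objects with many attributes, just flag them invalid
        // so other sessions re-read them from the data source:
        // descriptor.setCacheSynchronizationType(
        //     ClassDescriptor.INVALIDATE_CHANGED_OBJECTS);
    }
}
```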
Only changes committed by a unit of work are subject to propagation when cache coordination is enabled. The unit of work computes the appropriate change set based on the descriptor configuration of affected objects.
You can create the following types of coordinated cache:
For a JMS coordinated cache, when a particular session's coordinated cache starts up, it uses its JNDI naming service information to locate and create a connection to the JMS server. The coordinated cache is ready when all participating sessions are connected to the same topic on the same JMS server. At this point, sessions can start sending and receiving object change messages. You must configure all sessions that participate in the same coordinated cache with the same JMS and JNDI naming service information.
Because you must supply the necessary information to connect to the JMS Topic, a JMS coordinated cache does not use a discovery service.
If you do use cache coordination, Oracle recommends that you use JMS cache coordination: JMS is robust, easy to configure, and provides efficient support for asynchronous change propagation.
For more information, see Chapter 104, "Configuring a JMS Coordinated Cache".
For more information on configuring JMS, see Oracle Fusion Middleware Services Guide for Oracle Containers for Java EE.
For an RMI coordinated cache, when a particular session's coordinated cache starts up, the session binds its connection in its naming service (either an RMI registry or JNDI), creates an announcement message (that includes its own naming service information), and broadcasts the announcement to its multicast group (see Section 103.4, "Configuring a Multicast Group Address" and Section 103.5, "Configuring a Multicast Port"). When a session that belongs to the same multicast group receives this announcement, it uses the naming service information in the announcement message to establish bidirectional connections with the newly announced session's coordinated cache. The coordinated cache is ready when all participating sessions are interconnected in this way, at which point sessions can start sending and receiving object change messages. You can then configure each session with naming information that identifies the host on which the session is deployed.
If you do use cache coordination, Oracle recommends that you use RMI cache coordination only if you require synchronous change propagation (see Section 103.2, "Configuring the Synchronous Change Propagation Mode").
TopLink also supports cache coordination using RMI over the Internet Inter-ORB Protocol (IIOP). An RMI/IIOP coordinated cache uses RMI (and a JNDI naming service) for discovery and message transport services.
Note:
If you use an RMI coordinated cache, Oracle recommends that you use RMI/IIOP only if absolutely necessary.

For more information, see Chapter 105, "Configuring an RMI Coordinated Cache".
For a CORBA coordinated cache, when a particular session's coordinated cache starts up, the session binds its connection in JNDI, creates an announcement message (that includes its own JNDI naming service information), and broadcasts the announcement to its multicast group (see Section 103.4, "Configuring a Multicast Group Address" and Section 103.5, "Configuring a Multicast Port"). When a session that belongs to the same multicast group receives this announcement, it uses the naming service information in the announcement message to establish bidirectional connections with the newly announced session's coordinated cache. The coordinated cache is ready when all participating sessions are interconnected in this way, at which point, sessions can start sending and receiving object change messages. You can then configure each session with naming information that identifies the host on which the session is deployed.
Currently, TopLink provides support for the Sun Object Request Broker.
For more information on configuring a CORBA coordinated cache, see Chapter 106, "Configuring a CORBA Coordinated Cache".
Using the classes in the oracle.toplink.remotecommand package, you can define your own coordinated cache for custom solutions. For more information, contact your TopLink support representative.
Once you have created the required cache coordination classes, for more information on configuring a user-defined coordinated cache, see Chapter 107, "Configuring a Custom Coordinated Cache".
To configure the TopLink cache, you use the appropriate API in the following objects:
You configure object identity using the ClassDescriptor
API summarized in Example 102-1.
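As a sketch of the kind of ClassDescriptor calls involved (method names are inferred from the option names in Table 102-1 following the TopLink naming pattern, so verify them against the API; the size of 100 is illustrative):

```java
// Sketch only: one identity map selector per option in Table 102-1,
// plus the size setting. A descriptor uses exactly one of these maps.
descriptor.useFullIdentityMap();
descriptor.useWeakIdentityMap();
descriptor.useSoftIdentityMap();
descriptor.useSoftCacheWeakIdentityMap();
descriptor.useHardCacheWeakIdentityMap();
descriptor.useNoIdentityMap();
descriptor.setIdentityMapSize(100);
```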
For more information, see Section 119.12, "Configuring Cache Type and Size at the Descriptor Level".
You configure cache refresh using the ClassDescriptor
API summarized in Example 102-2.
Example 102-2 Cache Refresh ClassDescriptor API
alwaysRefreshCache
alwaysRefreshCacheOnRemote
disableCacheHits
disableCacheHitsOnRemote
onlyRefreshCacheIfNewerVersion
You can also configure cache refresh using the following API calls:
Session: refreshObject method

DatabaseSession and UnitOfWork: refreshAndLockObject methods

ObjectLevelReadQuery: refreshIdentityMapResult and refreshRemoteIdentityMapResult methods
For more information, see Section 119.9, "Configuring Cache Refreshing".
You configure cache invalidation using the ClassDescriptor methods getCacheInvalidationPolicy and setCacheInvalidationPolicy to configure an oracle.toplink.descriptors.invalidation.CacheInvalidationPolicy.
You can use any of the following CacheInvalidationPolicy instances:

DailyCacheInvalidationPolicy: The object is automatically flagged as invalid at a specified time of day.

NoExpiryCacheInvalidationPolicy: The object can only be flagged as invalid by explicitly calling the oracle.toplink.sessions.IdentityMapAccessor method invalidateObject.

TimeToLiveCacheInvalidationPolicy: The object is automatically flagged as invalid after a specified time period has elapsed since the object was read.
You can also configure cache invalidation using a variety of API calls accessible through the Session. The oracle.toplink.sessions.IdentityMapAccessor provides the following methods:

getRemainingValidTime: Returns the remaining life of the specified object. This time represents the difference between the next expiry time of the object and its read time.

invalidateAll: Sets all objects for all classes to be invalid in TopLink identity maps.

invalidateClass(Class klass) and invalidateClass(Class klass, boolean recurse): Set all objects of a specified class to be invalid in TopLink identity maps.

invalidateObject(Object object), invalidateObject(Record rowWithPrimaryKey, Class klass), and invalidateObject(Vector primaryKey, Class klass): Set an object to be invalid in TopLink identity maps.

invalidateObjects(Expression selectionCriteria) and invalidateObjects(Vector collection): Set all objects from the specified Expression or collection to be invalid in TopLink identity maps.

isValid(Record recordContainingPrimaryKey, Class theClass), isValid(Object object), and isValid(java.util.Vector primaryKey, Class theClass): Return true if the object is valid in TopLink identity maps.
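A sketch of manual invalidation through the session (the session variable, Employee class, and someEmployee instance are illustrative; the accessor methods are those listed above):

```java
import oracle.toplink.sessions.IdentityMapAccessor;

// Illustrative: 'session' is an active TopLink Session, 'someEmployee'
// a previously read Employee instance.
IdentityMapAccessor accessor = session.getIdentityMapAccessor();
accessor.invalidateClass(Employee.class);   // all cached Employee objects
accessor.invalidateObject(someEmployee);    // one specific instance
```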
For more information, see the following:
You configure cache coordination using the Session
methods summarized in Example 102-3.
You configure how object changes are propagated using the ClassDescriptor
methods summarized in Example 102-4.
For more information, see Section 103.1, "Configuring Common Coordinated Cache Options".
Example 102-3 Cache Coordination Session API
Session.getCommandManager().
    setShouldPropagateAsynchronously(boolean)

Session.getCommandManager().getDiscoveryManager().
    setAnnouncementDelay()
    setMulticastGroupAddress()
    setMulticastPort()
    setPacketTimeToLive()

Session.getCommandManager().getTransportManager().
    setEncryptedPassword()
    setInitialContextFactoryName()
    setLocalContextProperties(Hashtable)
    setNamingServiceType() passing in one of:
        TransportManager.JNDI_NAMING_SERVICE
        TransportManager.REGISTRY_NAMING_SERVICE
    setPassword()
    setRemoteContextProperties(Hashtable)
    setShouldRemoveConnectionOnError()
    setUserName()