The following sections are included in this chapter:
Standalone Coherence applications consist of distributed processes that perform different roles. For deployment, it is often beneficial, though not required, to group these processes logically into tiers based on their role. The most common tiers are a data tier, application tier, proxy tier, and extend client tier. Tiers facilitate deployment by allowing common artifacts, packaging, and scripts to be defined and targeted specifically for each tier.
This section includes the following topics:
A data tier consists of cache servers that are responsible for storing cached objects. A Coherence application may require any number of cache servers in the data tier. The number of cache servers depends on the amount of data that is expected in the cache and whether the data must be backed up and survive a server failure. Each cache server is a Coherence cluster member that runs in its own JVM process; multiple cache server processes can be collocated on a single physical server. For details on planning the number of cache servers for an application, see "Cache Size Calculation Recommendations" and "Hardware Recommendations".
Cache servers are typically started using the com.tangosol.net.DefaultCacheServer class. The class contains a main method and is started from the command line. For details about starting a cache server, see Developing Applications with Oracle Coherence.
The following application artifacts are often deployed with a cache server:
Configuration files, such as the operational override configuration file, the cache configuration file, and the POF user type configuration file.
POF serializers and domain objects.
Data grid processing implementations, such as queries, entry processors, entry aggregators, and so on.
Event processing implementations.
Cache store and loader implementations when caching objects from data sources.
There are no restrictions on how the application artifacts must be packaged on a data tier. However, the artifacts must be found on the server classpath, and all configuration files must be found before the coherence.jar library if the default names are used; otherwise, the default configuration files that are located in the coherence.jar library are loaded. The following example starts a single cache server using the configuration files in the APPLICATION_HOME\config directory and the implementation classes in the APPLICATION_HOME\lib\myClasses.jar library:
java -server -Xms4g -Xmx4g -cp APPLICATION_HOME\config;APPLICATION_HOME\lib\myClasses.jar;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
If you choose to include any configuration overrides as system properties (rather than modifying an operational override file), then they can be included as -D arguments to the java command. As a convenience, you can reuse the COHERENCE_HOME\bin\cache-server script and modify it as required.
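For example, the cache server from the previous example might be started with overrides for the cluster name and the cache configuration file. This is a sketch: the property values shown are illustrative, while coherence.cluster and coherence.cacheconfig are standard Coherence system properties:

```shell
java -server -Xms4g -Xmx4g -Dcoherence.cluster=ProdCluster -Dcoherence.cacheconfig=prod-cache-config.xml -cp APPLICATION_HOME\config;APPLICATION_HOME\lib\myClasses.jar;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
```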
GAR Deployment
Coherence application artifacts can be packaged as a Grid ARchive (GAR) and deployed with the DefaultCacheServer class. A GAR adheres to a specific directory structure and includes an application descriptor. For details about GAR packaging, see "Building a Coherence GAR Module". The instructions are included as part of WebLogic Server deployment, but are also applicable to a GAR being deployed with the DefaultCacheServer class.
The following example starts a cache server and uses the application artifacts that are packaged in the MyGAR.gar file. The default name (MyGAR) is used as the application name, which provides a scope for the application on the cluster.
java -server -Xms4g -Xmx4g -cp APPLICATION_HOME\config;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer D:\example\MyGAR.gar
You can override the default name by providing a different name as an argument. For details about valid DefaultCacheServer arguments, see Developing Applications with Oracle Coherence. For details about application scope, see "Running Multiple Applications in a Single Cluster".
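For example, the following command sketch deploys the same GAR under an explicit application name. MyCustomScope is an illustrative name, and the sketch assumes the application name is passed as the argument that follows the GAR path:

```shell
java -server -Xms4g -Xmx4g -cp APPLICATION_HOME\config;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer D:\example\MyGAR.gar MyCustomScope
```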
An application tier consists of any number of clients that perform cache operations. Cache operations include loading objects into the cache, using cached objects, processing cached data, and performing cache maintenance. The clients are Coherence cluster members, but they are not responsible for storing data.
The following application artifacts are often deployed with a client:
Configuration files, such as the operational override configuration file, the cache configuration file, and the POF user type configuration file.
POF serializers and domain objects.
Data grid processing implementations, such as queries, entry processors, entry aggregators, and so on.
Event processing implementations.
Cache store and loader implementations when caching objects from data sources.
There are no restrictions on how the application artifacts must be packaged on an application tier. Clients must include the COHERENCE_HOME/lib/coherence.jar library on the application classpath. Coherence configuration files must be included in the classpath and must be found before the coherence.jar library if the default names are used; otherwise, the default configuration files that are located in the coherence.jar library are loaded. The following example starts a client using the configuration files in the APPLICATION_HOME\config directory and the implementation classes in the APPLICATION_HOME\lib\myClasses.jar library:
java -cp APPLICATION_HOME\config;APPLICATION_HOME\lib\myClasses.jar;COHERENCE_HOME\lib\coherence.jar com.MyApp
If you choose to include any system property configuration overrides (rather than modifying an operational override file), then they can be included as -D arguments to the java command. For example, to disable storage on the client, the coherence.distributed.localstorage system property can be used as follows:
java -Dcoherence.distributed.localstorage=false -cp APPLICATION_HOME\config;APPLICATION_HOME\lib\myClasses.jar;COHERENCE_HOME\lib\coherence.jar com.MyApp
Note:
If a GAR is used for deployment on a cache server, then cache services are restricted by an application scope name. Clients must use the same application scope name; otherwise, the clients cannot access the cache services. For details about specifying an application scope name, see "Running Multiple Applications in a Single Cluster".
A proxy tier consists of proxy servers that are responsible for handling extend client requests. Any number of proxy servers may be required in the proxy tier. The number of proxy servers depends on the expected number of extend clients and the expected request load of those clients. Each proxy server is a cluster member that runs in its own JVM process; multiple proxy server processes can be collocated on a single physical server. For details on extend clients and setting up proxies, see Developing Remote Clients for Oracle Coherence.
A proxy server is typically started using the com.tangosol.net.DefaultCacheServer class. The class contains a main method and is started from the command line. For details about starting a cache server, see Developing Applications with Oracle Coherence. The difference between a proxy server and a cache server is that the proxy server is not responsible for storing data; instead, it hosts proxy services that handle extend client requests.
The following application artifacts are often deployed with a proxy:
Configuration files, such as the operational override configuration file, the cache configuration file, and the POF user type configuration file.
POF serializers and domain objects. If an extend client is implemented using C++ or .NET, then a Java version of the objects must also be deployed for certain use cases.
Data grid processing implementations, such as queries, entry processors, entry aggregators, and so on.
Event processing implementations.
Cache store and loader implementations when caching objects from data sources.
There are no restrictions on how the application artifacts must be packaged on a proxy tier. However, the artifacts must be found on the server classpath, and all configuration files must be found before the coherence.jar library if the default names are used; otherwise, the default configuration files that are located in the coherence.jar library are loaded. The following example starts a single proxy server using the configuration files in the APPLICATION_HOME\config directory and the implementation classes in the APPLICATION_HOME\lib\myClasses.jar library:
java -server -Xms512m -Xmx512m -Dcoherence.distributed.localstorage=false -cp APPLICATION_HOME\config;APPLICATION_HOME\lib\myClasses.jar;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
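The proxy servers started by this command must define a proxy service in their cache configuration file. A minimal sketch of such a definition follows; the service name, address, and port are illustrative:

```xml
<!-- Sketch of a proxy service definition for the proxy tier cache
     configuration file; service name, address, and port are illustrative. -->
<proxy-scheme>
   <service-name>ExtendTcpProxyService</service-name>
   <acceptor-config>
      <tcp-acceptor>
         <local-address>
            <address>192.168.1.5</address>
            <port>9099</port>
         </local-address>
      </tcp-acceptor>
   </acceptor-config>
   <autostart>true</autostart>
</proxy-scheme>
```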
GAR Deployment
Coherence application artifacts can be packaged as a Grid ARchive (GAR) and deployed with the DefaultCacheServer class. A GAR adheres to a specific directory structure and includes an application descriptor. For details about GAR packaging, see "Building a Coherence GAR Module". The instructions are included as part of WebLogic Server deployment, but are also applicable to a GAR being deployed with the DefaultCacheServer class.
The following example starts a proxy server and uses the application artifacts that are packaged in the MyGAR.gar file. The default name (MyGAR) is used as the application name, which provides a scope for the application on the cluster.
java -server -Xms512m -Xmx512m -Dcoherence.distributed.localstorage=false -cp APPLICATION_HOME\config;APPLICATION_HOME\lib\myClasses.jar;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer D:\example\MyGAR.gar
You can override the default name by providing a different name as an argument. For details about valid DefaultCacheServer arguments, see Developing Applications with Oracle Coherence. For details about application scope, see "Running Multiple Applications in a Single Cluster".
Extend clients are implemented as Java, C++, or .NET applications. In addition, any client technology that provides a REST client API can use the caching services in a Coherence cluster. Extend clients are applications that use Coherence caches, but are not members of a Coherence cluster. For deployment details specific to these clients, see Developing Remote Clients for Oracle Coherence.
The following Coherence artifacts are often deployed with an extend client:
Configuration files, such as the operational override configuration file, the cache configuration file, and the POF user type configuration file.
POF serializers and domain objects.
Data grid processing implementations, such as queries, entry processors, entry aggregators, and so on.
Event processing implementations.
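An extend client's cache configuration file typically maps cache names to a remote cache scheme that connects to the proxy tier. A minimal sketch follows; the scheme name, service name, address, and port are illustrative and must match a proxy service definition:

```xml
<!-- Sketch of a remote cache scheme for an extend client's cache
     configuration file; names, address, and port are illustrative. -->
<remote-cache-scheme>
   <scheme-name>extend-remote</scheme-name>
   <service-name>ExtendTcpCacheService</service-name>
   <initiator-config>
      <tcp-initiator>
         <remote-addresses>
            <socket-address>
               <address>192.168.1.5</address>
               <port>9099</port>
            </socket-address>
         </remote-addresses>
      </tcp-initiator>
   </initiator-config>
</remote-cache-scheme>
```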
WebLogic Server includes a Coherence integration that standardizes the way Coherence applications can be deployed and managed within a WebLogic Server domain. The integration allows administrators to set up distributed Coherence environments using familiar WebLogic Server components and infrastructure, such as Java EE-styled packaging and deployment, remote server management, server clusters, WebLogic Scripting Tool (WLST) automation, and configuration through the Administration Console.
The instructions in this section assume some familiarity with WebLogic Server and assume that a WebLogic Server domain has already been created. All instructions are provided using the WebLogic Server Administration Console. For details on using the WebLogic Server Administration Console, see Oracle WebLogic Server Administration Console Online Help. For additional details on configuring and managing Coherence clusters, see Administering Clusters for Oracle WebLogic Server.
This section includes the following topics:
Coherence is integrated with WebLogic Server. The integration aligns the lifecycle of a Coherence cluster member with the lifecycle of a managed server: starting or stopping a server JVM starts and stops a Coherence cluster member. The first member of the cluster starts the cluster service and is the senior member.
Like other Java EE modules, Coherence supports its own application module, which is called a Grid ARchive (GAR). The GAR contains the artifacts of a Coherence application and includes a deployment descriptor. A GAR is deployed and undeployed in the same way as other Java EE modules and is decoupled from the cluster service lifetime. Coherence applications are isolated by a service namespace and by class loader.
Coherence is typically set up in tiers that provide functional isolation within a WebLogic Server domain. The most common tiers are a data tier for caching data and an application tier for consuming cached data. A proxy server tier and an extend client tier should be set up when using Coherence*Extend. An HTTP session tier should be set up when using Coherence*Web. See Administering HTTP Session Management with Oracle Coherence*Web for instructions on deploying Coherence*Web and managing HTTP session data.
WebLogic managed servers that are associated with a Coherence cluster are referred to as managed Coherence servers. Managed Coherence servers in each tier can be individually managed but are typically associated with respective WebLogic Server clusters. A GAR must be deployed to each data and proxy tier server. The same GAR is then packaged within an EAR and deployed to each application and extend client tier server. The use of dedicated storage tiers that are separate from client tiers is a best practice that ensures optimal performance.
Coherence applications must be packaged as a GAR module for deployment. A GAR module includes the artifacts that comprise a Coherence application and adheres to a specific directory structure. A GAR can be left as an unarchived directory or can be archived with a .gar extension. A GAR can be deployed both as a standalone module and within an EAR. An EAR cannot contain multiple GAR modules.
A GAR module must be packaged in an EAR module to be referenced by other modules. For details on creating an EAR module, see Developing Applications for Oracle WebLogic Server.
To include a GAR module within an EAR module:
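The central piece of that packaging is declaring the GAR as a module in the EAR's weblogic-application.xml descriptor. A sketch follows, in which the module name and path are illustrative:

```xml
<?xml version='1.0'?>
<!-- Sketch of an EAR-level weblogic-application.xml that declares a GAR
     module; the name and path values are illustrative. -->
<weblogic-application>
   <module>
      <name>MyGAR</name>
      <type>GAR</type>
      <path>MyGAR.gar</path>
   </module>
</weblogic-application>
```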
Coherence supports different domain topologies within a WebLogic Server domain to provide varying levels of performance, scalability, and ease of use. For example, during development, a single managed Coherence server instance may be used as both a cache server and a cache client. The single-server topology is easy to set up and use, but does not provide optimal performance or scalability. For production, Coherence is typically set up using WebLogic Server clusters. A WebLogic Server cluster is used as a Coherence data tier and hosts one or more cache servers; a different WebLogic Server cluster is used as a Coherence application tier and hosts one or more cache clients; and (if required) different WebLogic Server clusters are used for the Coherence proxy tier that hosts one or more managed Coherence proxy servers and the Coherence extend client tier that hosts extend clients. The tiered topology approach provides optimal scalability and performance. A domain topology should always be based on the requirements of an application.
Use the following guidelines when creating a domain topology for Coherence:
A domain typically contains a single Coherence cluster.
Multiple WebLogic Server clusters can be associated with a Coherence cluster.
A managed server that is associated with a Coherence cluster is referred to as a managed Coherence server and is the same as a Coherence cluster member.
Use different managed Coherence server instances (and preferably different WebLogic Server clusters) to separate Coherence cache servers and clients.
Coherence members managed within a WebLogic Server domain should not join an external Coherence cluster comprised of standalone JVM cluster members. Standalone JVM cluster members cannot be managed within a WebLogic Server domain.
The preferred approach for setting up Coherence in a WebLogic Server domain is to separate Coherence cache servers, clients, and proxies into different tiers that are associated with the same Coherence cluster. Typically, each tier is associated with its own WebLogic Server cluster of managed Coherence servers. However, a tier may also consist of standalone managed Coherence servers. The former approach provides the easiest way to manage and scale Coherence because the managed Coherence servers can inherit the WebLogic Server cluster's Coherence settings and deployments. Use the instructions in this section to create different WebLogic Server clusters for the data, application, and proxy tiers. For detailed instructions on creating WebLogic Server clusters, see Administering Clusters for Oracle WebLogic Server.
To create Coherence deployment tiers:
Managed servers that are associated with a Coherence cluster are Coherence cluster members and are referred to as managed Coherence servers. Use the instructions in this section to create managed servers and associate them with a WebLogic Server cluster that is configured as a Coherence deployment tier. Managed servers automatically inherit Coherence settings from the WebLogic Server cluster. Existing managed Coherence servers can be associated with a WebLogic Server cluster as well. For detailed instructions on creating and configuring managed servers, see Oracle WebLogic Server Administration Console Online Help.
To create managed servers for a Coherence deployment tier:
Each Coherence deployment tier must include a Coherence application module. Deploying the application module starts the services that are defined in the GAR's cache configuration file. For details on packaging Coherence applications, see "Packaging Coherence Applications for WebLogic Server". For details on using the console to deploy applications, see the WebLogic Server Administration Console Help.
Deploy Coherence modules as follows:
Data Tier (cache servers) – Deploy a standalone GAR to each managed Coherence server of the data tier. If the data tier is set up as a WebLogic Server cluster, deploy the GAR to the cluster; the WebLogic deployment infrastructure copies the module to each managed Coherence server.
Application Tier (cache clients) – Deploy the EAR that contains the GAR and the client implementation (Web application, EJB, and so on) to each managed Coherence server in the cluster. If the application tier is set up as a WebLogic Server cluster, deploy the EAR to the cluster; the WebLogic deployment infrastructure copies the module to each managed Coherence server.
Proxy Tier (proxy servers) – Deploy the standalone GAR to each managed Coherence server of the proxy tier. If the proxy tier is set up as a WebLogic Server cluster, deploy the GAR to the cluster; the WebLogic deployment infrastructure copies the module to each managed Coherence server.
Note:
Proxy tier managed Coherence servers must include a proxy service definition in the cache configuration file. You can deploy the same GAR to each tier, and then override the cache configuration file of just the proxy tier servers by using a cluster-level cache configuration file. For details on specifying a cluster-level cache configuration file, see Administering Clusters for Oracle WebLogic Server.
Extend Client Tier (extend clients) – Deploy the EAR that contains the GAR and the extend client implementation to each managed server that hosts the extend client. If the extend client tier is set up as a WebLogic Server cluster, deploy the EAR to the cluster; the WebLogic deployment infrastructure copies the module to each managed server.
Note:
Extend tier managed servers must include a remote cache service definition in the cache configuration file. You can deploy the same GAR to each tier, and then override the cache configuration file of just the extend tier servers by using a cluster-level cache configuration file. For details on specifying a cluster-level cache configuration file, see Administering Clusters for Oracle WebLogic Server.
To deploy a GAR on the data tier:
To deploy an EAR on the application tier:
To deploy a GAR on the proxy tier:
Administrators use WebLogic Server tools to manage a Coherence environment within a WebLogic domain. These tools simplify the tasks of administering a cluster and cluster members. This section provides an overview of using the Administration Console tool to perform basic administrative tasks. For details on completing these tasks, see the Oracle WebLogic Server Administration Console Online Help. For details on using the WebLogic Scripting Tool (WLST), see Understanding the WebLogic Scripting Tool.
Table 1-1 Basic Administration Tasks in the Administration Console

To... | Use the...
---|---
Create a Coherence cluster | Coherence Clusters page
Add or remove cluster members or WebLogic Server clusters from a Coherence cluster | Members tab located on a Coherence cluster's Settings page
Configure unicast or multicast settings for a Coherence cluster | General tab located on a Coherence cluster's Settings page. If unicast is selected, the default well known addresses configuration can be overridden using the Well Known Addresses tab.
Use a custom cluster configuration file to configure a Coherence cluster | General tab located on a Coherence cluster's Settings page
Import a cache configuration file to a cluster member and override the cache configuration file deployed in a GAR | Cache Configurations tab located on a Coherence cluster's Settings page
Configure logging | Logging tab located on a Coherence cluster's Settings page
Assign a managed server to a Coherence cluster | Coherence tab located on a managed server's Settings page
Configure Coherence cluster member properties | Coherence tab located on a managed server's Settings page
Associate a WebLogic Server cluster with a Coherence cluster and enable or disable storage for the managed Coherence servers of the cluster | Coherence tab located on a WebLogic Server cluster's Settings page
Assign a managed server to a WebLogic Server cluster that is associated with a Coherence cluster | General tab located on a managed server's Settings page
Java EE applications that are deployed to an application server other than WebLogic Server have two options for deploying Coherence: as an application server library or as part of a Java EE module. Coherence cluster members are class loader scoped; therefore, the option selected results in a different deployment scenario. All modules share a single cluster member if Coherence is deployed as an application server library, whereas a Java EE module is its own cluster member if Coherence is deployed as part of the module. Each option has its own benefits and assumptions and generally balances resource utilization against how isolated the cluster member is from other modules.
Note:
See Administering HTTP Session Management with Oracle Coherence*Web for instructions on deploying Coherence*Web and clustering HTTP session data.
Coherence can be deployed as an application server library. In this deployment scenario, an application server's startup classpath is modified to include the COHERENCE_HOME/lib/coherence.jar library. In addition, any objects that are being placed into the cache must also be available in the server's classpath. Consult your application server vendor's documentation for instructions on adding libraries to the server's classpath.
This scenario results in a single cluster member that is shared by all applications that are deployed in the server's containers. This scenario minimizes resource utilization because only one copy of the Coherence classes is loaded into the JVM. See "Running Multiple Applications in a Single Cluster" for detailed instructions on isolating Coherence applications from each other when choosing this deployment style.
Coherence can be deployed within an EAR file or a WAR file. This style of deployment is generally preferred because modification to the application server run-time environment is not required and because cluster members are isolated to either the EAR or WAR.
Coherence can be deployed as part of an EAR. This deployment scenario results in a single cluster member that is shared by all Web applications in the EAR. Resource utilization is moderate because only one copy of the Coherence classes is loaded per EAR. However, all Web applications may be affected by any one module's use of the cluster member. See "Running Multiple Applications in a Single Cluster" for detailed instructions on isolating Coherence applications from each other.
To deploy Coherence within an enterprise application:
Coherence can be deployed as part of a Web application. This deployment scenario results in each Web application having its own cluster member, which is isolated from all other Web applications. This scenario uses the most resources because there are as many copies of the Coherence classes loaded as there are deployed Web applications that include Coherence. This scenario is ideal when deploying only a few Web applications to an application server.
To deploy Coherence within a Web application:
1. Copy the coherence.jar library to the Web application's WEB-INF/lib directory.
2. Place the application artifacts (configuration files and implementation classes) in the WEB-INF/lib or WEB-INF/classes directory.

Coherence can be deployed in shared environments where multiple applications use the same cluster but define their own set of Coherence caches and services. For such scenarios, each application uses its own cache configuration file that includes a scope name that controls whether the caches and services are allowed to be shared among applications.
The following topics are included in this section:
The <scope-name> element is used to specify a service namespace that uniquely identifies the caches and services in a cache configuration file. If specified, all caches and services are isolated and cannot be used by other applications that run on the same cluster.
The following example configures a scope name called accounts and results in the use of accounts as a prefix to all services instantiated by the ConfigurableCacheFactory instance that is created based on the configuration. The scope name is an attribute of a cache factory instance and only affects that cache factory instance.
Note:
The prefix is only used for service names, not cache names.
<?xml version='1.0'?>

<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <defaults>
      <scope-name>accounts</scope-name>
   </defaults>
   <caching-scheme-mapping>
   ...
Multiple deployed Coherence applications (GARs) are isolated by a service namespace and by class loader by default in WebLogic Server and do not require scope name configuration. However, a scope name may still be configured to share caches between GARs. Directly configuring the scope in the cache configuration file is typically performed for advanced use cases.

The deployment name is used as the default scope name when deploying a GAR. If a deployment name is not specified during deployment, the artifact name is used as the deployment name. For example, for the MyApp.gar module, the default deployment name is MyApp. In the case of a GAR packaged in an EAR, the deployment name is the module name specified for the GAR in the weblogic-application.xml file.
Deploying Coherence as an application server library, or as part of an EAR, allows multiple applications to use the same cluster as a single cluster member (one JVM). In such deployment scenarios, multiple applications may choose to use a single set of Coherence caches and services that are configured in a single coherence-cache-config.xml file. This type of deployment is only suggested (and only practical) in controlled environments where application deployment is coordinated. The likelihood of collisions between caches, services, and other configuration settings is high and may lead to unexpected results. Moreover, all applications may be affected by any one application's use of the Coherence node.
The alternative is to have each application include its own cache configuration file that defines the caches and services that are unique to the application. The configurations are then isolated by specifying a scope name using the <scope-name> element in the cache configuration file. Likewise, applications can explicitly allow other applications to share their caches and services if required. This scenario assumes that a single JVM contains multiple ConfigurableCacheFactory instances, each of which pertains to an application.
The following example demonstrates the steps that are required to isolate two Web applications (trade.war and accounts.war) from using each other's caches and services:
Standalone applications that use a single Coherence cluster can each include their own cache configuration files; however, these configurations are coalesced into a single ConfigurableCacheFactory. Because there is a one-to-one relationship between ConfigurableCacheFactory and DefaultCacheServer, application scoping is not feasible within a single cluster node. Instead, one or more instances of DefaultCacheServer must be started for each cache configuration, and each cache configuration must include a scope name.
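A sketch of this approach follows, using hypothetical configuration and library file names; each cache configuration file is assumed to declare its own <scope-name> (for example, trade and accounts), and coherence.cacheconfig is a standard Coherence system property:

```shell
java -server -Xms4g -Xmx4g -Dcoherence.cacheconfig=trade-cache-config.xml -cp APPLICATION_HOME\lib\trade.jar;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
java -server -Xms4g -Xmx4g -Dcoherence.cacheconfig=accounts-cache-config.xml -cp APPLICATION_HOME\lib\accounts.jar;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
```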
The following example isolates two applications (trade and accounts) from using each other's caches and services:
Note:
To share data between applications, the applications must use the same cache configuration file. Coherence does not support using multiple cache configurations that specify the same scope name.
The com.tangosol.net.ScopeResolver interface allows containers and applications to modify the scope name for a given ConfigurableCacheFactory at run time to enforce (or disable) isolation between applications. Implement the ScopeResolver interface and add any custom functionality as required.
To enable a custom scope resolver, the fully qualified name of the implementation class must be defined in the operational override file using the <scope-resolver> element within the <cache-factory-builder-config> node. For example:
<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config
   coherence-operational-config.xsd">
   <cache-factory-builder-config>
      <scope-resolver>
         <class-name>package.MyScopeResolver</class-name>
      </scope-resolver>
   </cache-factory-builder-config>
</coherence>
As an alternative, the <instance> element supports the use of a <class-factory-name> element to specify a factory class that is responsible for creating ScopeResolver instances, and a <method-name> element to specify the static factory method on the factory class that performs object instantiation. The following example gets a custom scope resolver instance using the getResolver method on the MyScopeResolverFactory class.
<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config
   coherence-operational-config.xsd">
   <cache-factory-builder-config>
      <scope-resolver>
         <class-factory-name>package.MyScopeResolverFactory</class-factory-name>
         <method-name>getResolver</method-name>
      </scope-resolver>
   </cache-factory-builder-config>
</coherence>
Any initialization parameters that are required for an implementation can be specified using the <init-params> element. The following example sets an isDeployed parameter to true.
<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config
   coherence-operational-config.xsd">
   <cache-factory-builder-config>
      <scope-resolver>
         <class-name>package.MyScopeResolver</class-name>
         <init-params>
            <init-param>
               <param-name>isDeployed</param-name>
               <param-value>true</param-value>
            </init-param>
         </init-params>
      </scope-resolver>
   </cache-factory-builder-config>
</coherence>