3 Configuring Extend Proxies
This chapter includes the following sections:
- Overview of Configuring Extend Proxies
  Proxies and caches must be configured before extend clients can retrieve and store data in a cluster.
- Defining Extend Proxy Services
  The extend proxy service (ProxyService) is a cluster service that allows extend clients to access a Coherence cluster using TCP/IP.
- Defining Caches for Use By Extend Clients
  Extend clients read and write data to a cache on the cluster. Any of the cache types can store client data.
- Disabling Storage on a Proxy Server
  You must explicitly configure a proxy service to not store any data.
- Starting a Proxy Server
  A proxy server can be started using the DefaultCacheServer class.
Parent topic: Getting Started
Overview of Configuring Extend Proxies
Extend proxies and cache servers run in the same cluster member process (DefaultCacheServer process). Collocating extend proxies with cache servers simplifies cluster setup and ensures that proxies automatically scale with the cluster. However, extend proxies can also be configured as separate members of the cluster. In this case, the proxies and cache servers are organized as separate tiers that can scale independently.
Extend proxy services are configured in a cache configuration deployment descriptor. This deployment descriptor is often referred to as the cluster-side cache configuration file. It is the same cache configuration file that is used to set up caches on the cluster. See Specifying a Cache Configuration File in Developing Applications with Oracle Coherence.
Parent topic: Configuring Extend Proxies
Defining Extend Proxy Services
The extend proxy service (ProxyService) is a cluster service that allows extend clients to access a Coherence cluster using TCP/IP. A proxy service proxies two types of cluster services: the CacheService cluster service, which is used by clients to access caches; and the InvocationService cluster service, which is used by clients to execute Invocable objects on the cluster.
This section includes the following topics:
- Defining a Single Proxy Service Instance
- Defining Multiple Proxy Service Instances
- Defining Multiple Proxy Services
- Explicitly Configuring Proxy Addresses
- Disabling Cluster Service Proxies
- Specifying Read-Only NamedCache Access
Parent topic: Configuring Extend Proxies
Defining a Single Proxy Service Instance
Extend proxy services are configured within a <caching-schemes> node using the <proxy-scheme> element. Example 3-1 defines a proxy service named ExtendTcpProxyService and includes the <autostart> element that is set to true so that the service automatically starts on a cluster node. See proxy-scheme in Developing Applications with Oracle Coherence.
As configured in Example 3-1, a proxy address and ephemeral port is automatically assigned and registered with a cluster name service. Extend clients connect to the name service, which then redirects the client to the address of the requested proxy. The use of the name service allows proxies to run on ephemeral addresses, which simplifies port management and configuration. See Explicitly Configuring Proxy Addresses.
Example 3-1 Extend Proxy Service Configuration
...
<caching-schemes>
   <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <autostart>true</autostart>
   </proxy-scheme>
</caching-schemes>
...
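For reference, an extend client locates a proxy that runs on an ephemeral address through the cluster name service. The following client-side <remote-cache-scheme> is a minimal sketch and is not part of Example 3-1: the socket address shown is a placeholder for a cluster member address, 7574 is the default cluster port on which the name service listens, and the service name matches the proxy service name so that the name service can resolve it. Client-side configuration is covered in the client configuration chapters.
<remote-cache-scheme>
   <scheme-name>extend-dist</scheme-name>
   <service-name>ExtendTcpProxyService</service-name>
   <initiator-config>
      <tcp-initiator>
         <name-service-addresses>
            <socket-address>
               <address>192.168.1.5</address>
               <port>7574</port>
            </socket-address>
         </name-service-addresses>
      </tcp-initiator>
   </initiator-config>
</remote-cache-scheme>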
Parent topic: Defining Extend Proxy Services
Defining Multiple Proxy Service Instances
Multiple extend proxy service instances can be defined in order to support an expected number of client connections and to support fault tolerance and load balancing. Client connections are automatically balanced across proxy service instances. The algorithm used to balance connections depends on the load balancing strategy that is configured. See Load Balancing Connections.
To define multiple proxy service instances, include a proxy service definition in the cache configuration file of multiple proxy servers and use the same service name for each proxy service. Proxy services that share the same service name are considered peers.
The following examples define two instances of the ExtendTcpProxyService proxy service. The proxy service definition is included in each cache server's respective cache configuration file within the <proxy-scheme> element. The same configuration can be used on all proxies, including proxies that are co-located on the same machine.
On proxy server 1:
...
<caching-schemes>
   <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <autostart>true</autostart>
   </proxy-scheme>
</caching-schemes>
...
On proxy server 2:
...
<caching-schemes>
   <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <autostart>true</autostart>
   </proxy-scheme>
</caching-schemes>
...
Parent topic: Defining Extend Proxy Services
Defining Multiple Proxy Services
Multiple extend proxy services can be defined in order to provide different applications with their own proxies. Extend clients for a particular application can be directed toward specific proxies to provide a more predictable environment.
The following example defines two extend proxy services: ExtendTcpProxyService1 and ExtendTcpProxyService2:
...
<caching-schemes>
   <proxy-scheme>
      <service-name>ExtendTcpProxyService1</service-name>
      <autostart>true</autostart>
   </proxy-scheme>
   <proxy-scheme>
      <service-name>ExtendTcpProxyService2</service-name>
      <autostart>true</autostart>
   </proxy-scheme>
</caching-schemes>
...
Parent topic: Defining Extend Proxy Services
Explicitly Configuring Proxy Addresses
Older extend clients that predate the name service or clients that have specific firewall constraints may require specific proxy addresses. In this case, the proxy can be explicitly configured to listen on a specific address and port. See Configuring Firewalls for Extend Clients.
The <tcp-acceptor> subelement includes the address (IP address or DNS name, and port) that an extend proxy service listens to for TCP/IP client communication. The address can be explicitly defined using the <address-provider> element, or the address can be defined within an operational override configuration file and referenced using the <address-provider> element. The latter approach decouples the address configuration from the proxy scheme definition and allows the address to change at runtime without having to change the proxy definition. See Using Address Provider References for TCP Addresses.
Example 3-2 defines a proxy service named ExtendTcpProxyService that is set up to listen for client requests on a TCP/IP socket that is bound to 192.168.1.5 and port 7077.
Example 3-2 Explicitly Configured Proxy Service Address
...
<caching-schemes>
   <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <acceptor-config>
         <tcp-acceptor>
            <address-provider>
               <local-address>
                  <address>192.168.1.5</address>
                  <port>7077</port>
               </local-address>
            </address-provider>
         </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
   </proxy-scheme>
</caching-schemes>
...
The specified port should be outside of the computer's ephemeral port range to ensure that it is not automatically assigned to other applications. If the specified port is not available, then the default behavior is to select the next available port. To disable automatic port adjustment, add a <port-auto-adjust> element that includes the value false. Or, to specify a range of ports from which the port is selected, include a port value that represents the upper limit of the port range. The following example sets a port range from 7077 to 8000:
<acceptor-config>
   <tcp-acceptor>
      <address-provider>
         <local-address>
            <address>192.168.1.5</address>
            <port>7077</port>
            <port-auto-adjust>8000</port-auto-adjust>
         </local-address>
      </address-provider>
   </tcp-acceptor>
</acceptor-config>
The <address> element supports using CIDR notation as a subnet and mask (for example, 192.168.1.0/24). CIDR simplifies configuration by allowing a single address configuration to be shared across computers on the same subnet. Each cluster member specifies the same CIDR address block, and a local NIC on each computer is automatically found that matches the address pattern. The /24 prefix size matches up to 256 available addresses: from 192.168.1.0 to 192.168.1.255. The <address> element also supports external NAT addresses that route to local addresses; however, both addresses must use the same port number.
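For example, a CIDR block can replace the explicit IP address used in Example 3-2 so that the same acceptor configuration can be deployed unchanged to every proxy server on the 192.168.1.0/24 subnet (a minimal sketch based on Example 3-2):
<acceptor-config>
   <tcp-acceptor>
      <address-provider>
         <local-address>
            <address>192.168.1.0/24</address>
            <port>7077</port>
         </local-address>
      </address-provider>
   </tcp-acceptor>
</acceptor-config>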
For solutions that do not require a firewall, you can omit the IP and port values, which causes the proxy to use the same IP address and port as TCMP (7574 by default). The port can also be configured with a listen port of 0, which causes the proxy to listen on a system-assigned ephemeral port. This configuration is the same as omitting the <acceptor-config> element as shown in Defining a Single Proxy Service Instance. If the proxy is configured to use ephemeral ports, then clients must use the cluster name service to locate the proxy.
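For example, the following sketch (a variation of Example 3-2) binds the proxy to an explicit address but uses a listen port of 0 so that the operating system assigns an ephemeral port; clients must then use the name service to locate the proxy:
<acceptor-config>
   <tcp-acceptor>
      <address-provider>
         <local-address>
            <address>192.168.1.5</address>
            <port>0</port>
         </local-address>
      </address-provider>
   </tcp-acceptor>
</acceptor-config>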
Parent topic: Defining Extend Proxy Services
Disabling Cluster Service Proxies
The cache service and invocation service proxies can be disabled within an extend proxy service definition. Both of these proxies are enabled by default and can be explicitly disabled if a client does not require a service.
Cluster service proxies are disabled by setting the <enabled> element to false within the <cache-service-proxy> and <invocation-service-proxy> elements, respectively.
The following example disables the invocation service proxy so that extend clients cannot execute Invocable objects within the cluster:
<proxy-scheme>
   ...
   <proxy-config>
      <invocation-service-proxy>
         <enabled>false</enabled>
      </invocation-service-proxy>
   </proxy-config>
   ...
</proxy-scheme>
Likewise, the following example disables the cache service proxy to restrict extend clients from accessing caches within the cluster:
<proxy-scheme>
   ...
   <proxy-config>
      <cache-service-proxy>
         <enabled>false</enabled>
      </cache-service-proxy>
   </proxy-config>
   ...
</proxy-scheme>
Parent topic: Defining Extend Proxy Services
Specifying Read-Only NamedCache Access
By default, extend clients are allowed to both read and write data to proxied NamedCache instances. The <read-only> element can be specified within a <cache-service-proxy> element to prohibit extend clients from modifying cached content on the cluster. For example:
<proxy-scheme>
   ...
   <proxy-config>
      <cache-service-proxy>
         <read-only>true</read-only>
      </cache-service-proxy>
   </proxy-config>
   ...
</proxy-scheme>
Parent topic: Defining Extend Proxy Services
Defining Caches for Use By Extend Clients
Extend clients read and write data to a cache on the cluster. Any of the cache types can store client data.
A Basic Partitioned (distributed) Cache
The following example defines a basic partitioned cache named dist-extend.
...
<caching-scheme-mapping>
   <cache-mapping>
      <cache-name>dist-extend</cache-name>
      <scheme-name>dist-default</scheme-name>
   </cache-mapping>
</caching-scheme-mapping>

<caching-schemes>
   <distributed-scheme>
      <scheme-name>dist-default</scheme-name>
      <backing-map-scheme>
         <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
   </distributed-scheme>
</caching-schemes>
...
A Basic Near Cache
A typical near cache is configured to use a local cache (thread safe, highly concurrent, size-limited and possibly auto-expiring) as the front cache and a remote cache as a back cache. A near cache is configured by using the near-scheme, which has two child elements: a front-scheme for configuring a local (front) cache and a back-scheme for defining a remote (back) cache.
A near cache is configured by using the <near-scheme> element in the coherence-cache-config file. This element has two required subelements: front-scheme for configuring a local (front-tier) cache and back-scheme for defining a remote (back-tier) cache. While a local cache (<local-scheme>) is a typical choice for the front tier, you can also use non-JVM heap-based caches (<external-scheme> or <paged-external-scheme>) or schemes based on Java objects (<class-scheme>).
The remote or back-tier cache is described by the <back-scheme> element. A back-tier cache can be either a distributed cache (<distributed-scheme>) or a remote cache (<remote-cache-scheme>). The <remote-cache-scheme> element enables you to use a clustered cache from outside the current cluster.
Optional subelements of <near-scheme> include <invalidation-strategy> for specifying how the front-tier and back-tier objects are kept synchronized and <listener> for specifying a listener which is notified of events occurring on the cache.
Example 3-3 demonstrates a near cache configuration.
Example 3-3 Near Cache Configuration
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>dist-extend-near</cache-name>
         <scheme-name>extend-near</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>

   <caching-schemes>
      <near-scheme>
         <scheme-name>extend-near</scheme-name>
         <front-scheme>
            <local-scheme>
               <high-units>1000</high-units>
            </local-scheme>
         </front-scheme>
         <back-scheme>
            <remote-cache-scheme>
               <scheme-ref>extend-dist</scheme-ref>
            </remote-cache-scheme>
         </back-scheme>
         <invalidation-strategy>all</invalidation-strategy>
      </near-scheme>
   </caching-schemes>
</cache-config>
A Basic Local Cache
A local cache is a cache that is local to (completely contained within) a particular application. There are several attributes of a local cache that are particularly interesting:
- A local cache implements the same interfaces that the remote caches implement, meaning that there is no programming difference between using a local and a remote cache.
- A local cache can be size-limited. Size-limited means that the local cache can restrict the number of entries that it caches and automatically evict entries when the cache becomes full. Furthermore, both the sizing of entries and the eviction policies can be customized, for example, allowing the cache to be size-limited based on the memory used by the cached entries. The default eviction policy uses a combination of Most Frequently Used (MFU) and Most Recently Used (MRU) information, scaled on a logarithmic curve, to determine which cache items to evict. This algorithm is the best general-purpose eviction algorithm because it works well for short-duration and long-duration caches, and it balances frequency versus recentness to avoid cache thrashing. The pure LRU and pure LFU algorithms are also supported, as is the ability to plug in custom eviction policies.
- A local cache supports automatic expiration of cached entries, meaning that each cache entry can be assigned a time-to-live value in the cache. Furthermore, the entire cache can be configured to flush itself on a periodic basis or at a preset time.
- A local cache is thread safe and highly concurrent.
- A local cache provides cache "get" statistics. It maintains hit and miss statistics. These run-time statistics accurately project the effectiveness of the cache and can be used to adjust size-limiting and auto-expiring settings while the cache is running.
The element for configuring a local cache is <local-scheme>. Local caches are generally nested within other cache schemes, for instance as the front tier of a near scheme. The <local-scheme> element provides several optional subelements that let you define the characteristics of the cache. For example, the <low-units> and <high-units> subelements allow you to limit the cache in terms of size. When the cache reaches its maximum allowable size, it prunes itself back to a specified smaller size, choosing which entries to evict according to a specified eviction policy (<eviction-policy>). The entries and size limitations are measured in terms of units as calculated by the scheme's unit calculator (<unit-calculator>).
You can also limit the cache in terms of time. The <expiry-delay> subelement specifies the amount of time from last update that entries are kept by the cache before being marked as expired. Any attempt to read an expired entry results in a reloading of the entry from the configured cache store (<cachestore-scheme>). Expired values are periodically discarded from the cache based on the flush delay.
If a <cachestore-scheme> is not specified, then the cached data only resides in memory and only reflects operations performed on the cache itself. See <local-scheme> for a complete description of all of the available subelements.
Example 3-4 demonstrates a local cache configuration.
Example 3-4 Local Cache Configuration
<?xml version='1.0'?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>example-local-cache</cache-name>
         <scheme-name>example-local</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>

   <caching-schemes>
      <local-scheme>
         <scheme-name>example-local</scheme-name>
         <eviction-policy>LRU</eviction-policy>
         <high-units>32000</high-units>
         <low-units>10</low-units>
         <unit-calculator>FIXED</unit-calculator>
         <expiry-delay>10ms</expiry-delay>
         <cachestore-scheme>
            <class-scheme>
               <class-name>ExampleCacheStore</class-name>
            </class-scheme>
         </cachestore-scheme>
         <pre-load>true</pre-load>
      </local-scheme>
   </caching-schemes>
</cache-config>
Parent topic: Configuring Extend Proxies
Disabling Storage on a Proxy Server
You must explicitly configure a proxy service to not store any data.
Note:
Storage-enabled proxies bypass the front cache of a near cache and operate directly against the back cache if it is a partitioned cache.
To disable storage on a proxy server, set the coherence.distributed.localstorage Java property to false when starting the cluster member. For example:
-Dcoherence.distributed.localstorage=false
Storage can also be disabled in the cache configuration file as part of a distributed cache definition by setting the <local-storage> element to false. See distributed-scheme in Developing Applications with Oracle Coherence.
...
<distributed-scheme>
   <scheme-name>dist-default</scheme-name>
   <local-storage>false</local-storage>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>
...
Parent topic: Configuring Extend Proxies
Starting a Proxy Server
A proxy server can be started using the DefaultCacheServer class.
To start a proxy server:
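The detailed steps depend on your installation. As a minimal sketch, assuming that coherence.jar and a cluster-side cache configuration file named example-cache-config.xml (a placeholder name) are in the current directory, a proxy member can be started by running the DefaultCacheServer class (Unix-style classpath shown). The coherence.distributed.localstorage property is optional and disables storage on the proxy member as described in Disabling Storage on a Proxy Server.
java -cp .:coherence.jar -Dcoherence.cacheconfig=example-cache-config.xml -Dcoherence.distributed.localstorage=false com.tangosol.net.DefaultCacheServer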
Parent topic: Configuring Extend Proxies