This chapter includes instructions for setting up Coherence for C++ clients.
Configuring and using Coherence for C++ requires six basic steps:
Implement the C++ Application using the Coherence for C++ API. See Chapter 9, "Using the Coherence for C++ Client API," for more information on the API.
Compile and Link the application.
Configure Coherence*Extend on both the client and on one or more JVMs within the cluster.
Configure a POF context on the client and on all of the JVMs within the cluster that run the Coherence*Extend clustered service.
Make sure the Coherence cluster is up and running.
Launch the C++ client application.
The following sections describe each of these steps in detail.
Coherence for C++ provides an API that allows C++ applications to access Coherence clustered services, including data, data events, and data processing from outside the Coherence cluster.
The Coherence for C++ API consists of:
a set of C++ public header files
versions of the static libraries built by all supported C++ compilers
several samples
The library allows C++ applications to connect to a Coherence*Extend clustered service instance running within the Coherence cluster using a high performance TCP/IP-based communication layer. The library sends all client requests to the Coherence*Extend clustered service, which, in turn, responds to client requests by delegating to an actual Coherence clustered service (for example, a Partitioned or Replicated cache service).
Chapter 9, "Using the Coherence for C++ Client API," provides an overview of the key classes in the API. For a detailed description of the classes, see the API itself, which is included in the doc directory of the Coherence for C++ distribution.
The platforms on which you can compile applications that employ Coherence for C++ are listed in the Supported Platforms and Operating Systems topic.
For example, the following build.cmd file for the Windows 32-bit platform compiles and links the files for the Coherence for C++ examples. The variables in the file have the following meanings:
OPT and LOPT point to the compiler and linker options, respectively
INC points to the Coherence for C++ API files in the include directory
SRC points to the C++ header and code files in the common directory
OUT points to the file that the compiler/linker should generate when it is finished compiling the code
LIBPATH points to the library directory
LIBS points to the Coherence for C++ shared library file
After setting these environment variables, the file compiles the C++ source and header files using the OPT compiler options and the INC include paths, then links the generated object files against the Coherence for C++ library using the LOPT linker options, producing the OUT executable. It finishes by deleting the object files. A sample run of the build.cmd file is illustrated in Example 7-1.
Example 7-1 Sample Run of the build.cmd File
@echo off
setlocal

set EXAMPLE=%1%
if "%EXAMPLE%"=="" (
  echo You must supply the name of an example to build.
  goto exit
)

set OPT=/c /nologo /EHsc /Zi /RTC1 /MD /GR /DWIN32
set LOPT=/NOLOGO /SUBSYSTEM:CONSOLE /INCREMENTAL:NO
set INC=/I%EXAMPLE% /Icommon /I..\include
set SRC=%EXAMPLE%\*.cpp common\*.cpp
set OUT=%EXAMPLE%\%EXAMPLE%.exe
set LIBPATH=..\lib
set LIBS=%LIBPATH%\coherence.lib

echo building %OUT% ...
cl %OPT% %INC% %SRC%
link %LOPT% %LIBS% *.obj /OUT:%OUT%
del *.obj
echo To run this example execute 'run %EXAMPLE%'

:exit
Set up the configuration path to the Coherence for C++ library. This involves setting an environment variable to point to the library. The name of the environment variable and the file name of the library are different depending on your platform environment. For a list of the environment variables and library names for each platform, see Chapter 6, "Setting Up C++ Application Builds."
To configure Coherence*Extend, add the appropriate configuration elements to both the cluster-side and client-side cache configuration descriptors. The cluster-side cache configuration elements instruct a Coherence DefaultCacheServer to start a Coherence*Extend clustered service that listens for incoming TCP/IP requests from Coherence*Extend clients. The client-side cache configuration elements are used by the client library to connect to the cluster: they specify the IP address and port of one or more servers in the cluster that run the Coherence*Extend clustered service, as well as various connection-related parameters, such as connection and request timeouts.
For a Coherence*Extend client to connect to a Coherence cluster, one or more DefaultCacheServer JVMs within the cluster must run a TCP/IP Coherence*Extend clustered service. To configure a DefaultCacheServer to run this service, a proxy-scheme element with a child tcp-acceptor element must be added to the cache configuration descriptor used by the DefaultCacheServer.
For example, the cache configuration descriptor in Example 7-2 defines two clustered services: one that allows remote Coherence*Extend clients to connect to the Coherence cluster over TCP/IP, and a standard Partitioned cache service. Since this descriptor is used by a DefaultCacheServer, it is important that the <autostart> configuration element for each service is set to true so that clustered services are automatically restarted upon termination. The proxy-scheme element has a tcp-acceptor child element which includes all TCP/IP-specific information needed to accept client connection requests over TCP/IP. The acceptor-config has also been configured to use a ConfigurablePofContext for its serializer; the C++ Extend client requires the use of POF for serialization.
See Chapter 10, "Building Integration Objects (C++)" for more information on serialization and PIF/POF.
The Coherence*Extend clustered service configured below listens for incoming requests on the localhost address and port 9099. When, for example, a client attempts to connect to a Coherence cache called dist-extend, the Coherence*Extend clustered service proxies subsequent requests to the NamedCache with the same name which, in this example, is a Partitioned cache.
Example 7-2 Cache Configuration for Two Clustered Services
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
              xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
              coherence-cache-config.xsd">
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-*</cache-name>
      <scheme-name>dist-default</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <distributed-scheme>
      <scheme-name>dist-default</scheme-name>
      <lease-granularity>member</lease-granularity>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

    <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <acceptor-config>
        <tcp-acceptor>
          <local-address>
            <address>localhost</address>
            <port>9099</port>
          </local-address>
        </tcp-acceptor>
        <serializer>
          <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
        </serializer>
      </acceptor-config>
      <autostart>true</autostart>
    </proxy-scheme>
  </caching-schemes>
</cache-config>
The key element within the Coherence*Extend client configuration is <cache-config>. This element contains the path to a cache configuration descriptor, which is used by the DefaultConfigurableCacheFactory.
A Coherence*Extend client uses the information within an initiator-config cache configuration descriptor element to connect to and communicate with a Coherence*Extend clustered service running within a Coherence cluster.
For example, the cache configuration descriptor in Example 7-3 defines a caching scheme that connects to a remote Coherence cluster. The remote-cache-scheme element has a tcp-initiator child element which includes all TCP/IP-specific information needed to connect the client with the Coherence*Extend clustered service running within the remote Coherence cluster.
When the client application retrieves a named cache with CacheFactory using, for example, the name dist-extend, the Coherence*Extend client connects to the Coherence cluster over TCP/IP (using the address localhost and port 9099) and returns a NamedCache implementation that routes requests to the NamedCache with the same name running within the remote cluster. Note that the remote-addresses configuration element can contain multiple socket-address child elements. The Coherence*Extend client attempts to connect to the addresses in a random order until either the list is exhausted or a TCP/IP connection is established.
Example 7-3 A Caching Scheme that Connects to a Remote Coherence Cluster
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
              xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
              coherence-cache-config.xsd">
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-extend</cache-name>
      <scheme-name>extend-dist</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>localhost</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
          <connect-timeout>10s</connect-timeout>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>
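With a client-side descriptor like the one in Example 7-3 in place, a C++ application can obtain and use the remote cache through the ordinary NamedCache API. The following is a minimal sketch, not part of the shipped examples; it assumes the Coherence for C++ headers are on the include path and that a cluster-side proxy configured as in Example 7-2 is running on localhost:9099:

```cpp
#include "coherence/lang.ns"
#include "coherence/net/CacheFactory.hpp"
#include "coherence/net/NamedCache.hpp"

#include <iostream>

using namespace coherence::lang;
using coherence::net::CacheFactory;
using coherence::net::NamedCache;

int main()
{
    // "dist-extend" matches the <cache-name> mapping in the client-side
    // descriptor, so this call connects through ExtendTcpCacheService.
    NamedCache::Handle hCache = CacheFactory::getCache("dist-extend");

    // A basic put/get round trip; requests are proxied to the clustered
    // NamedCache with the same name.
    hCache->put(String::create("key"), String::create("hello"));
    String::View vsValue = cast<String::View>(hCache->get(String::create("key")));
    std::cout << "value = " << vsValue << std::endl;

    hCache->release();
    CacheFactory::shutdown();
    return 0;
}
```

Note that the client makes no distinction between a remote and a local cache; only the configuration determines where requests are routed.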
A Local Cache is a cache that is local to (completely contained within) a particular C++ application. There are several attributes of the Local Cache that are particularly interesting:
The local cache implements the same interfaces that the remote caches implement, meaning that there is no programming difference between using a local and a remote cache.
The Local Cache can be size-limited. Size-limited means that the Local Cache can restrict the number of entries that it caches, and automatically evict entries when the cache becomes full. Furthermore, both the sizing of entries and the eviction policies can be customized, for example, allowing the cache to be size-limited based on the memory used by the cached entries. The default eviction policy uses a combination of Most Frequently Used (MFU) and Most Recently Used (MRU) information, scaled on a logarithmic curve, to determine which cache items to evict. This algorithm is the best general-purpose eviction algorithm because it works well for short duration and long duration caches, and it balances frequency versus recentness to avoid cache thrashing. The pure LRU and pure LFU algorithms are also supported, as is the ability to plug in custom eviction policies.
The Local Cache supports automatic expiration of cached entries, meaning that each cache entry can be assigned a time-to-live value in the cache. Furthermore, the entire cache can be configured to flush itself on a periodic basis or at a preset time.
The Local Cache is thread safe and highly concurrent.
The Local Cache provides cache "get" statistics. It maintains hit and miss statistics. These run-time statistics accurately project the effectiveness of the cache and are used to adjust size-limiting and auto-expiring settings accordingly while the cache is running.
The element for configuring the Local Cache is <local-scheme>. Local caches are generally nested within other cache schemes, for instance as the front-tier of a near-scheme. The <local-scheme> element provides several optional subelements that let you define the characteristics of the cache. For example, the <low-units> and <high-units> subelements allow you to limit the cache in terms of size. When the cache reaches its maximum allowable size, it prunes itself back to a specified smaller size, choosing which entries to evict according to a specified eviction policy (<eviction-policy>). The entries and size limitations are measured in terms of units as calculated by the scheme's unit calculator (<unit-calculator>).
You can also limit the cache in terms of time. The <expiry-delay> subelement specifies the amount of time from last update that entries are kept by the cache before being marked as expired. Any attempt to read an expired entry results in a reloading of the entry from the configured cache store (<cachestore-scheme>). Expired values are periodically discarded from the cache based on the flush delay.
If a <cachestore-scheme> is not specified, then the cached data only resides in memory and only reflects operations performed on the cache itself. See <local-scheme> for a complete description of all of the available subelements.
Example 7-4 demonstrates a local cache configuration.
Example 7-4 Local Cache Configuration
<?xml version='1.0'?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
              xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
              coherence-cache-config.xsd">
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>example-local-cache</cache-name>
      <scheme-name>example-local</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <local-scheme>
      <scheme-name>example-local</scheme-name>
      <eviction-policy>LRU</eviction-policy>
      <high-units>32000</high-units>
      <low-units>10</low-units>
      <unit-calculator>FIXED</unit-calculator>
      <expiry-delay>10ms</expiry-delay>
      <cachestore-scheme>
        <class-scheme>
          <class-name>ExampleCacheStore</class-name>
        </class-scheme>
      </cachestore-scheme>
      <pre-load>true</pre-load>
    </local-scheme>
  </caching-schemes>
</cache-config>
This section describes the Near Cache as it pertains to Coherence for C++ clients. For a complete discussion of the concepts behind a Near Cache, its configuration, and ways to keep it synchronized with the back tier, see "Configuring a Near Cache" in Developing Applications with Oracle Coherence.
In Coherence for C++, the Near Cache is a coherence::net::NamedCache implementation that wraps the front cache and the back cache using a read-through/write-through approach. If the back cache implements the ObservableCache interface, then the Near Cache can use the None, Present, All, or Auto invalidation strategy to invalidate any front cache entries that might have been changed in the back cache.
A typical Near Cache is configured to use a local cache (thread safe, highly concurrent, size-limited, and possibly auto-expiring) as the front cache and a remote cache as the back cache.
A Near Cache is configured by using the <near-scheme> element in the coherence-cache-config file. This element has two required subelements: front-scheme for configuring a local (front-tier) cache and back-scheme for defining a remote (back-tier) cache. While a local cache (<local-scheme>) is a typical choice for the front tier, you can also use non-JVM heap based caches (<external-scheme> or <paged-external-scheme>) or schemes based on Java objects (<class-scheme>).
The remote or back-tier cache is described by the <back-scheme> element. A back-tier cache can be either a distributed cache (<distributed-scheme>) or a remote cache (<remote-cache-scheme>). The <remote-cache-scheme> element enables you to use a clustered cache from outside the current cluster.
Optional subelements of <near-scheme> include <invalidation-strategy>, for specifying how the front-tier and back-tier objects are kept synchronized, and <listener>, for specifying a listener that is notified of events occurring on the cache.
Example 7-5 demonstrates a near cache configuration.
Example 7-5 Near Cache Configuration
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
              xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
              coherence-cache-config.xsd">
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-extend-near</cache-name>
      <scheme-name>extend-near</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <near-scheme>
      <scheme-name>extend-near</scheme-name>
      <front-scheme>
        <local-scheme>
          <high-units>1000</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <remote-cache-scheme>
          <scheme-ref>extend-dist</scheme-ref>
        </remote-cache-scheme>
      </back-scheme>
      <invalidation-strategy>all</invalidation-strategy>
    </near-scheme>

    <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>localhost</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
          <connect-timeout>10s</connect-timeout>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>
When a Coherence*Extend client service detects that the connection between the client and cluster has been severed (for example, due to a network, software, or hardware failure), the Coherence*Extend client service implementation (that is, CacheService or InvocationService) raises a MemberEventType.Left event (by using the MemberEventHandler delegate) and the service is stopped. If the client application subsequently attempts to use the service, the service automatically restarts itself and attempts to reconnect to the cluster. If the connection is successful, the service raises a MemberEventType.Joined event; otherwise, an irrecoverable exception is thrown to the client application.
A Coherence*Extend service has several mechanisms for detecting dropped connections. Some mechanisms are inherent in the underlying protocol (such as TCP/IP in Extend-TCP), whereas others are implemented by the service itself. The latter mechanisms are configured by using the <outgoing-message-handler> element. For details on this element, see Developing Applications with Oracle Coherence. In particular, the <request-timeout> value controls the amount of time to wait for a response before abandoning a request. The <heartbeat-interval> and <heartbeat-timeout> values control the amount of time to wait for a response to a ping request before the connection is closed.
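In application code, the restart behavior described above typically surfaces as an exception on the first call made after the connection is lost. One common pattern is to catch the exception and retry, giving the service a chance to restart and reconnect. The following helper is a hedged sketch; the single-retry policy is illustrative, not prescribed by the API:

```cpp
#include "coherence/lang.ns"
#include "coherence/net/NamedCache.hpp"

#include <iostream>

using namespace coherence::lang;
using coherence::net::NamedCache;

// Illustrative helper: retry a get() once after a connection failure.
Object::View getWithRetry(NamedCache::Handle hCache, Object::View vKey)
{
    try
    {
        return hCache->get(vKey);
    }
    catch (Exception::View ve)
    {
        // The first call after a severed connection may fail while the
        // service restarts; retry once now that reconnection has been
        // attempted. Loop or back off as appropriate for your application.
        std::cerr << "retrying after connection error" << std::endl;
        return hCache->get(vKey);
    }
}
```

If the retry also fails, the exception propagates to the caller, matching the irrecoverable-error behavior described above.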
A reference to a configured Near Cache can be obtained by name by using the coherence::net::CacheFactory class as follows:
NamedCache::Handle hCache = CacheFactory::getCache("example-near-cache");
Instances of all NamedCache implementations should be explicitly released by calling the NamedCache::release() method when they are no longer needed, to free up any resources they might hold.
If a particular NamedCache is used for the duration of the application, then the resources are cleaned up when the application is shut down or otherwise stops. However, if it is only used for a period, the application should call its release() method when finished using it.
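The guidance above can be sketched as follows; the cache name and the work performed inside the function are illustrative:

```cpp
#include "coherence/lang.ns"
#include "coherence/net/CacheFactory.hpp"
#include "coherence/net/NamedCache.hpp"

using namespace coherence::lang;
using coherence::net::CacheFactory;
using coherence::net::NamedCache;

void runBoundedPhase()
{
    // Cache needed only for a bounded phase of the application.
    NamedCache::Handle hCache = CacheFactory::getCache("example-near-cache");
    hCache->put(String::create("k"), Integer32::create(1));

    // Release the handle when the phase ends, freeing any resources
    // (front-tier storage, event listeners) it may hold. A cache used
    // for the whole application lifetime can instead be left to the
    // normal shutdown path.
    hCache->release();
}
```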
To use Coherence for C++ in your C++ applications, you must link the Coherence for C++ library with your application and provide a Coherence for C++ cache configuration and its location.
The location of the cache configuration file can be set either by an environment variable, as shown in the sample application section, or programmatically.
As described in "Setting the run-time Library and Search Path," the tangosol.coherence.cacheconfig system property specifies the location of the cache configuration file. To set the configuration location on Windows, execute:
c:\coherence_cpp\examples> set tangosol.coherence.cacheconfig=config\extend-cache-config.xml
You can set the location programmatically by using either DefaultConfigurableCacheFactory::create or CacheFactory::configure (using the CacheFactory::loadXmlFile helper method, if needed).
Example 7-6 Setting the Configuration File Location
static Handle coherence::net::DefaultConfigurableCacheFactory::create (String::View vsFile = String::NULL_STRING)
The create method of the DefaultConfigurableCacheFactory class creates a new Coherence cache factory. The vsFile parameter specifies the name and location of the Coherence configuration file to load.
Example 7-7 Creating a Coherence Cache Factory
static void coherence::net::CacheFactory::configure (XmlElement::View vXmlCache, XmlElement::View vXmlCoherence = NULL)
The configure method configures the CacheFactory and local member. The vXmlCache parameter specifies an XML element corresponding to coherence-cache-config.xsd and vXmlCoherence specifies an XML element corresponding to coherence-operational-config.xsd.
Example 7-8 Configuring a CacheFactory and a Local Member
static XmlElement::Handle coherence::net::CacheFactory::loadXmlFile (String::View vsFile)
The loadXmlFile method reads an XmlElement from the named file. This method does not configure the CacheFactory, but obtains a configuration that can be supplied to the configure method. The vsFile parameter specifies the name of the file to read from.
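Taken together, these methods support the following pattern; the configuration file name here is taken from the earlier Windows example and is illustrative:

```cpp
#include "coherence/lang.ns"
#include "coherence/net/CacheFactory.hpp"

using namespace coherence::lang;
using coherence::net::CacheFactory;

int main()
{
    // Read the cache configuration from a file, then apply it. Since
    // vXmlCoherence defaults to NULL, passing only the first argument
    // leaves the default operational configuration in place.
    CacheFactory::configure(
        CacheFactory::loadXmlFile(String::create("extend-cache-config.xml")));

    CacheFactory::shutdown();
    return 0;
}
```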
The C++ code in Example 7-9 uses the CacheFactory::configure method to set the location of the cache configuration files for the server/cluster (coherence-extend-config.xml) and for the C++ client (tangosol-operation-config.xml).
The operational configuration override file (called tangosol-coherence-override.xml by default) controls the operational and run-time settings used by Oracle Coherence to create, configure, and maintain its clustering, communication, and data management services. As with the Java client, use of this file is optional for the C++ client.
For a C++ client, the file specifies or overrides general operations settings for a Coherence application that are not specifically related to caching. For a C++ client, the key elements are for logging, the Coherence product edition, and the location and role assignment of particular cluster members.
The operational configuration can be configured either programmatically or in the tangosol-coherence-override.xml file. To configure the operational configuration programmatically, specify an XML file that follows the coherence-operational-config.xsd schema and contains the following elements in the vXmlCoherence parameter of the CacheFactory::configure method (coherence::net::CacheFactory::configure(View vXmlCache, View vXmlCoherence)):
license-config—The license-config element contains subelements that allow you to configure the edition and operational mode for Coherence. The edition-name subelement specifies the product edition (such as Grid Edition, Enterprise Edition, Real Time Client, and so on) that the member uses. This allows multiple product editions to be used within the same cluster, with each member specifying the edition that it uses. Only the RTC (real time client) and DC (data client) values are recognized for the Coherence for C++ client. The license-config element is an optional subelement of the coherence element and defaults to RTC.
logging-config—The logging-config element contains subelements that allow you to configure how messages are logged for your system. This element enables you to specify the destination of the log messages, the severity level for logged messages, and the log message format. The logging-config element is a required subelement of the coherence element. For more information on logging, see "Configuring a Logger".
member-identity—The member-identity element specifies detailed identity information that is useful for defining the location and role of the cluster member. You can use this element to specify the name of the cluster, rack, site, computer name, role, and so on, to which the member belongs. The member-identity element is an optional subelement of the cluster-config element.
Example 7-10 illustrates the contents of a sample tangosol-coherence.xml file.
Example 7-10 Sample Operational Configuration
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
           xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config
           coherence-operational-config.xsd">
  <cluster-config>
    <member-identity>
      <site-name>extend site</site-name>
      <rack-name>rack 1</rack-name>
      <machine-name>computer 1</machine-name>
    </member-identity>
  </cluster-config>

  <logging-config>
    <destination>stderr</destination>
    <severity-level>5</severity-level>
    <message-format>(thread={thread}): {text}</message-format>
    <character-limit>8192</character-limit>
  </logging-config>

  <license-config>
    <edition-name>RTC</edition-name>
    <license-mode>prod</license-mode>
  </license-config>
</coherence>
Operational Configuration Elements provides more detailed information on the operational configuration file and the elements that it can define.
The Logger is configured using the logging-config element in the operational configuration file. The element provides the following subelements, which control how messages and errors are logged:
destination—determines the type of LogOutput used by the Logger. Valid values are:
stderr to direct messages to standard error
stdout to direct messages to standard output
a file path if messages should be directed to a file
severity-level—determines the log level that a message must meet or exceed to be logged.
message-format—determines the log message format.
character-limit—determines the maximum number of characters that the logger daemon processes from the message queue before discarding all remaining messages in the queue.
Example 7-11 illustrates an operational configuration that contains a logging configuration. For more information on operational configuration, see "Operational Configuration File (tangosol-coherence-override.xml)".
Example 7-11 Operational Configuration File that Includes a Logger
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
           xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config
           coherence-operational-config.xsd">
  <logging-config>
    <destination>stderr</destination>
    <severity-level>5</severity-level>
    <message-format>(thread={thread}): {text}</message-format>
    <character-limit>8192</character-limit>
  </logging-config>
</coherence>
To start a DefaultCacheServer that uses the cluster-side Coherence cache configuration described earlier to allow Coherence for C++ clients to connect to the Coherence cluster by using TCP/IP, you must do the following:
Change the current directory to the Oracle Coherence library directory (%COHERENCE_HOME%\lib on Windows and $COHERENCE_HOME/lib on UNIX).
Make sure that the paths are configured so that the Java command runs.
Start the DefaultCacheServer using the command line below: