30 Working with Partitions
This chapter includes the following sections:
- Specifying Data Affinity
  Learn about using data affinity with Coherence, how to configure data affinity, and review some data affinity examples.
- Changing the Number of Partitions
  The default partition count for a distributed cache service is 257 partitions.
- Changing the Partition Distribution Strategy
  Partition distribution defines how partitions are assigned to storage-enabled cluster members.
Parent topic: Performing Data Grid Operations
Specifying Data Affinity
This section includes the following topics:
- Overview of Data Affinity
- Specifying Data Affinity with a KeyAssociation
- Specifying Data Affinity with a KeyAssociator
- Deferring the Key Association Check
- Example of Using Affinity
Parent topic: Working with Partitions
Overview of Data Affinity
Data affinity describes the concept of ensuring that a group of related cache entries is contained within a single cache partition. This ensures that all relevant data is managed on a single primary cache node (without compromising fault-tolerance).
Affinity may span multiple caches (if they are managed by the same cache service, which generally is the case). For example, in a master-detail pattern such as an Order-LineItem, the Order object may be co-located with the entire collection of LineItem objects that are associated with it.
There are two benefits to using data affinity. First, only a single cache node is required to manage queries and transactions against a set of related items. Second, all concurrency operations are managed locally, avoiding the need for clustered synchronization.
Several standard Coherence operations can benefit from affinity. These include cache queries, InvocableMap operations, and the getAll, putAll, and removeAll methods.
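For example, because all line-item keys for the same order return the same associated key, a bulk putAll of an order's line items is serviced entirely by the member that owns the order's partition. The following sketch is illustrative only: the LineItemId and OrderId key classes are shown later in this section, while the constructor signatures, the LineItem value class, and the cache name are assumptions.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import java.util.HashMap;
import java.util.Map;

// ...

NamedCache cacheLineItems = CacheFactory.getCache("line-items"); // hypothetical cache name

OrderId orderId = new OrderId(1234);

Map mapItems = new HashMap();
mapItems.put(new LineItemId(orderId, 1), new LineItem("widget", 2)); // LineItem is an assumed value class
mapItems.put(new LineItemId(orderId, 2), new LineItem("gadget", 5));

// all keys above return the same associated key, so the entire batch
// is handled by the single member that owns the order's partition
cacheLineItems.putAll(mapItems);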
Note:
Data affinity is specified in terms of entry keys (not values). As a result, the association information must be present in the key class. Similarly, the association logic applies to the key class, not the value class.
Affinity is specified in terms of a relationship to a partitioned key. In the Order-LineItem example above, the Order objects would be partitioned normally, and the LineItem objects would be associated with the appropriate Order object.
The association does not have to be directly tied to the actual parent key; it only must be a functional mapping of the parent key. It could be a single field of the parent key (even if it is non-unique) or an integer hash of the parent key. All that matters is that all child keys return the same associated key; it does not matter whether the associated key is an actual key (it is simply a "group id"). This fact may help minimize the size impact on child key classes that do not contain the parent key information (because the associated key is derived data, its size can be decided explicitly, and it does not affect the behavior of the key).

Note that making the association too general (having too many keys associated with the same "group id") can cause a "lumpy" distribution. If all child keys return the same association key regardless of the parent key, the child keys are all assigned to a single partition and are not spread across the cluster.
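As an illustration of using a functional mapping rather than the full parent key, the following sketch associates a child key with an integer "group id" derived from the parent identifier. The class and field names are hypothetical and are not part of the Order-LineItem example used elsewhere in this chapter.

import com.tangosol.net.cache.KeyAssociation;

// hypothetical child key: associates itself with a "group id" derived from
// the parent order identifier instead of embedding the full parent key
public class ShipmentId implements KeyAssociation {
    private final int  m_nGroupId;   // e.g. a hash of the parent key, computed when the key is created
    private final long m_nShipment;

    public ShipmentId(int nGroupId, long nShipment) {
        m_nGroupId  = nGroupId;
        m_nShipment = nShipment;
    }

    public Object getAssociatedKey() {
        // all keys that return the same group id land in the same partition
        return Integer.valueOf(m_nGroupId);
    }

    // equals() and hashCode() omitted for brevity; real key classes must implement both
}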
Parent topic: Specifying Data Affinity
Specifying Data Affinity with a KeyAssociation
For application-defined keys, the class (of the cache key) can implement com.tangosol.net.cache.KeyAssociation as follows:
Example 30-1 Creating a Key Association
import com.tangosol.net.cache.KeyAssociation;

public class LineItemId implements KeyAssociation {
    // ...

    public Object getAssociatedKey() {
        return getOrderId();
    }

    // ...
}
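The key class must still behave as a well-formed cache key. The following expanded sketch is illustrative only: the fields, constructor, and use of Java serialization are assumptions and are not part of Example 30-1.

import com.tangosol.net.cache.KeyAssociation;
import java.io.Serializable;

public class LineItemId implements KeyAssociation, Serializable {
    private final OrderId m_orderId;  // assumed parent key field
    private final int     m_nLineNo;  // assumed line number within the order

    public LineItemId(OrderId orderId, int nLineNo) {
        m_orderId = orderId;
        m_nLineNo = nLineNo;
    }

    public OrderId getOrderId() {
        return m_orderId;
    }

    public Object getAssociatedKey() {
        // co-locate this line item with its parent Order entry
        return getOrderId();
    }

    public boolean equals(Object o) {
        if (!(o instanceof LineItemId)) {
            return false;
        }
        LineItemId that = (LineItemId) o;
        return m_nLineNo == that.m_nLineNo && m_orderId.equals(that.m_orderId);
    }

    public int hashCode() {
        return 31 * m_orderId.hashCode() + m_nLineNo;
    }
}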
Parent topic: Specifying Data Affinity
Specifying Data Affinity with a KeyAssociator
Applications may also provide a class that implements the KeyAssociator interface:
Example 30-2 A Custom KeyAssociator
import com.tangosol.net.PartitionedService;
import com.tangosol.net.partition.KeyAssociator;

public class LineItemAssociator implements KeyAssociator {
    public Object getAssociatedKey(Object oKey) {
        if (oKey instanceof LineItemId) {
            return ((LineItemId) oKey).getOrderId();
        }
        else if (oKey instanceof OrderId) {
            return oKey;
        }
        else {
            return null;
        }
    }

    public void init(PartitionedService service) {
        // no initialization required
    }
}
The key associator is configured for a NamedCache in the <distributed-scheme> element that defines the cache:
Example 30-3 Configuring a Key Associator
<distributed-scheme>
   ...
   <key-associator>
      <class-name>LineItemAssociator</class-name>
   </key-associator>
</distributed-scheme>
Parent topic: Specifying Data Affinity
Deferring the Key Association Check
Key association can be implemented either on the cluster or on the extend client. When using extend clients, the best practice is to implement key association on the client, which provides the best performance by processing the keys before they are sent to the cluster. Key association is processed on the client by default. Existing client implementations that rely on key association on the cluster must set the defer-key-association-check parameter in order to force the processing of key classes on the cluster.
To force key association processing to be done on the cluster side instead of by the extend client, set the <defer-key-association-check> element, within a <remote-cache-scheme> element, in the client-side cache configuration to true. For example:
<remote-cache-scheme>
   ...
   <defer-key-association-check>true</defer-key-association-check>
</remote-cache-scheme>
Note:
If the parameter is set to true, a key class implementation must be found on the cluster even if key association is not being used.
See Implementing a Java Version of a .NET Object and Implementing a Java Version of a C++ Object in Developing Remote Clients for Oracle Coherence for more information on deferring key association with .NET and C++ clients, respectively.
Parent topic: Specifying Data Affinity
Example of Using Affinity
Example 30-4 illustrates how to use affinity to create a more efficient query (NamedCache.entrySet(Filter)) and cache access (NamedCache.getAll(Collection)).
Example 30-4 Using Affinity for a More Efficient Query
OrderId orderId = new OrderId(1234);

// this Filter is applied to all LineItem objects to fetch those
// for which getOrderId() returns the specified order identifier
// "select * from LineItem where OrderId = :orderId"
Filter filterEq = new EqualsFilter("getOrderId", orderId);

// this Filter directs the query to the cluster node that currently owns
// the Order object with the given identifier
Filter filterAsc = new KeyAssociatedFilter(filterEq, orderId);

// run the optimized query to get the ChildKey objects
Set setLineItemKeys = cacheLineItems.keySet(filterAsc);

// get all the Child objects immediately
Set setLineItems = cacheLineItems.getAll(setLineItemKeys);

// Or remove all immediately
cacheLineItems.keySet().removeAll(setLineItemKeys);
Parent topic: Specifying Data Affinity
Changing the Number of Partitions
Each cache server in the cluster that hosts a distributed cache service manages a balanced number of the partitions. For example, each cache server in a cluster of four cache servers manages approximately 64 of the default 257 partitions. The default partition count is typically acceptable for clusters containing up to 16 cache servers. However, larger clusters require more partitions to ensure optimal performance.
All members of the same service must use the same partition count. Changing the partition count when active persistence is enabled is not supported out-of-the-box. See Workarounds to Migrate a Persistent Service to a Different Partition Count in Administering Oracle Coherence.
This section includes the following topics:
- Define the Partition Count
- Deciding the Number of Partitions
Define the Partition Count
To change the number of partitions for a distributed cache service, edit the cache configuration file and add a <partition-count> element, within the <distributed-scheme> element, that includes the number of partitions to use for the service. For example:
<distributed-scheme>
   <scheme-name>distributed</scheme-name>
   <service-name>DistributedCache</service-name>
   <partition-count>1181</partition-count>
   ...
</distributed-scheme>
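The configured value can also be confirmed at runtime by querying the cache's service. The following is a minimal sketch; the cache name is an assumption, and the cast is valid only for partitioned (distributed) cache services.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.net.PartitionedService;

// ...

NamedCache cache = CacheFactory.getCache("example");  // "example" is a hypothetical cache name
PartitionedService service = (PartitionedService) cache.getCacheService();

// prints 1181 if the scheme above is in effect for this cache's service
System.out.println("Partition count: " + service.getPartitionCount());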
Parent topic: Changing the Number of Partitions
Deciding the Number of Partitions
There is no exact formula for selecting a partition count. An ideal partition count balances the number of partitions on each cluster member with the amount of data each partition holds. Use the following guidelines when selecting a partition count and always perform tests to verify that the partition count is not adversely affecting performance.
- The partition count should always be a prime number. A list of primes can be found at http://primes.utm.edu/lists/.
- The number of partitions must be large enough to support a balanced distribution without each member managing too few partitions. For example, a partition count that results in only two partitions on each member is too constraining.
- The number of partitions must not be so large that network bandwidth is wasted with transfer overhead and bookkeeping for many partition transfers (a unit of transfer is a partition). For example, transferring thousands of partitions to a new cache server member requires a greater amount of network resources and can degrade cluster performance, especially during startup.
- The amount of data a partition manages must not be too large (the more data a partition manages, the higher the partition promotion and transfer costs). The amount of data a partition manages is only limited by the amount of available memory on the cache server. A partition limit of 50MB typically ensures good performance. A partition limit between 50MB and 100MB (even higher with 10GbE or faster) can be used for larger clusters. Larger limits can be used with the understanding that there is a slight increase in transfer latency and that larger heaps with more overhead space are required.
As an example, consider a cache server that is configured with a 4G heap and stores approximately 1.3G of primary data not including indexes (leaving 2/3 of the heap for backup and scratch space). If the decided partition limit is a conservative 25MB, then a single cache server can safely use 53 partitions (1365M/25M rounded down to the previous prime). Therefore, a cluster that contains 20 cache servers can safely use 1051 partitions (53*20 rounded down to the previous prime) and stores approximately 25G of primary data. A cluster of 100 cache servers can safely use 5297 partitions and can store approximately 129G of primary data.
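The sizing arithmetic above can be captured in a small helper for experimentation. The following sketch is illustrative only; the class and method names are not part of Coherence.

// illustrative helper for the sizing arithmetic above; not part of Coherence
public class PartitionCountCalculator {
    // largest prime less than or equal to n (n >= 2)
    static int previousPrime(int n) {
        for (int i = n; i >= 2; i--) {
            boolean fPrime = true;
            for (int j = 2; j * j <= i; j++) {
                if (i % j == 0) {
                    fPrime = false;
                    break;
                }
            }
            if (fPrime) {
                return i;
            }
        }
        return 2;
    }

    public static void main(String[] args) {
        int cServers          = 20;    // cache servers in the cluster
        int cPrimaryMB        = 1365;  // primary data per server (approximately 1.3G)
        int cPartitionLimitMB = 25;    // conservative per-partition limit

        int cPerServer  = previousPrime(cPrimaryMB / cPartitionLimitMB);  // 53
        int cPartitions = previousPrime(cPerServer * cServers);           // 1051

        System.out.println("partition-count: " + cPartitions);
    }
}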
Parent topic: Changing the Number of Partitions
Changing the Partition Distribution Strategy
Partition distribution defines how partitions are assigned to storage-enabled cluster members. A predefined assignment strategy can be selected, or a custom strategy can be provided by implementing the com.tangosol.net.partition.PartitionAssignmentStrategy interface.
This section includes the following topics:
- Specifying a Partition Assignment Strategy
- Enabling a Custom Partition Assignment Strategy
Parent topic: Working with Partitions
Specifying a Partition Assignment Strategy
The following predefined partition assignment strategies are available:
- simple – (default) The simple assignment strategy attempts to balance partition distribution while ensuring machine-safety.
- mirror:<service-name> – The mirror assignment strategy attempts to co-locate the service's partitions with the partitions of the specified service. This strategy is used to increase the likelihood that key-associated, cross-service cache access remains local to a member.
- custom – a class that implements the com.tangosol.net.partition.PartitionAssignmentStrategy interface.
To configure a partition assignment strategy for a specific partitioned cache service, add a <partition-assignment-strategy> element within a distributed cache definition:
<distributed-scheme>
   ...
   <partition-assignment-strategy>mirror:<MyService>
   </partition-assignment-strategy>
   ...
</distributed-scheme>
To configure the partition assignment strategy for all instances of the distributed cache service type, override the partitioned cache service's partition-assignment-strategy initialization parameter in an operational override file. For example:
<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config
   coherence-operational-config.xsd">
   <cluster-config>
      <services>
         <service id="3">
            <init-params>
               <init-param id="21">
                  <param-name>partition-assignment-strategy</param-name>
                  <param-value>mirror:<MyService></param-value>
               </init-param>
            </init-params>
         </service>
      </services>
   </cluster-config>
</coherence>
Parent topic: Changing the Partition Distribution Strategy
Enabling a Custom Partition Assignment Strategy
To specify a custom partition assignment strategy, include an <instance> subelement within the <partition-assignment-strategy> element and provide a fully qualified class name that implements the com.tangosol.net.partition.PartitionAssignmentStrategy interface. A custom class can also extend the com.tangosol.net.partition.SimpleAssignmentStrategy class. See instance. The following example enables a partition assignment strategy that is implemented in the MyPAStrategy class.
<distributed-scheme>
   ...
   <partition-assignment-strategy>
      <instance>
         <class-name>package.MyPAStrategy</class-name>
      </instance>
   </partition-assignment-strategy>
   ...
</distributed-scheme>
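A skeleton for such a class might simply extend the default strategy and refine its behavior where needed. The following is a minimal sketch, assuming the class delegates all distribution analysis to SimpleAssignmentStrategy; the package name is a placeholder for whatever the configuration's class name refers to.

package com.example;  // the "package" prefix in the configuration stands for the real package name

import com.tangosol.net.partition.SimpleAssignmentStrategy;

// Minimal skeleton of a custom strategy: by extending SimpleAssignmentStrategy it
// inherits the default balancing behavior, and individual analysis methods defined
// by PartitionAssignmentStrategy can be overridden selectively to adjust how
// partitions are assigned to members.
public class MyPAStrategy extends SimpleAssignmentStrategy {
}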
As an alternative, the <instance> element supports the use of a <class-factory-name> element to use a factory class that is responsible for creating PartitionAssignmentStrategy instances, and a <method-name> element to specify the static factory method on the factory class that performs object instantiation. The following example gets a strategy instance using the getStrategy method on the MyPAStrategyFactory class.
<distributed-scheme>
   ...
   <partition-assignment-strategy>
      <instance>
         <class-factory-name>package.MyPAStrategyFactory</class-factory-name>
         <method-name>getStrategy</method-name>
      </instance>
   </partition-assignment-strategy>
   ...
</distributed-scheme>
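A factory of this shape might look like the following sketch. It is illustrative only: here the factory simply returns the standard simple strategy, whereas a real factory would construct and configure the custom implementation.

package com.example;  // the "package" prefix in the configuration stands for the real package name

import com.tangosol.net.partition.PartitionAssignmentStrategy;
import com.tangosol.net.partition.SimpleAssignmentStrategy;

public class MyPAStrategyFactory {
    // static factory method named by the <method-name> element
    public static PartitionAssignmentStrategy getStrategy() {
        // for illustration, return the standard simple strategy;
        // a real factory would build and configure the custom strategy
        return new SimpleAssignmentStrategy();
    }
}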
Any initialization parameters that are required for an implementation can be specified using the <init-params> element. The following example sets the iMaxTime parameter to 2000.
<distributed-scheme>
   ...
   <partition-assignment-strategy>
      <instance>
         <class-name>package.MyPAStrategy</class-name>
         <init-params>
            <init-param>
               <param-name>iMaxTime</param-name>
               <param-value>2000</param-value>
            </init-param>
         </init-params>
      </instance>
   </partition-assignment-strategy>
   ...
</distributed-scheme>
Parent topic: Changing the Partition Distribution Strategy