10 Using the Engine Cache

This chapter describes how to enable the Oracle Communications WebRTC Session Controller Signaling Engine cache for improved performance with SIP-aware load balancers.

Overview of Engine Caching

A WebRTC Session Controller Signaling Engine cluster manages call-state data in several partitions distributed across the memory of the engine servers. Each call-state entry resides in one such partition on a specific engine server in the cluster, so in many cases the engine server requesting a call-state entry is not the engine server where that entry is stored. Engine servers fetch and write data in the SIP call-state store as necessary. Each call-state data partition can have one or more backup copies on other servers to provide automatic failover if a SIP call-state store server fails or shuts down.

WebRTC Session Controller also provides the option for engine servers to cache a portion of the call-state data locally. When a local cache is used, an engine server first checks its local cache. If the cache contains the required data, and the local copy is up-to-date compared to the SIP call-state store copy, the engine locks the call state in the SIP call-state store but reads directly from its cache. This improves response time, because the engine does not have to retrieve the call-state data from the SIP call-state store.

The engine cache stores only the call-state data most recently used by engine servers. Call-state data is moved into an engine's local cache as needed to respond to client requests or to refresh out-of-date data. If the cache is full when a new call state must be written, the least-recently accessed entry is removed first. The size of the engine cache is not configurable.
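
The following Java sketch is only an illustrative model of this behavior, not the product's implementation: a fixed-size, least-recently-used map of call states whose entries are checked against the authoritative SIP call-state store copy before being returned. The CallState and CallStateStore types and their methods are hypothetical stand-ins.

import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative model only: a 250-entry, least-recently-used cache of call
// states that is validated against the call-state store on each read.
public class EngineCallStateCache {

    static final int MAX_ENTRIES = 250;          // fixed cache size

    // Hypothetical stand-ins for the real call-state objects and store API.
    static class CallState { long version; /* SIP dialog data */ }
    interface CallStateStore {
        long latestVersion(String callId);       // version recorded in the store
        CallState lockAndFetch(String callId);   // lock and read the store copy
    }

    private final CallStateStore store;

    // accessOrder=true keeps entries in least-recently-used order;
    // removeEldestEntry evicts the oldest entry once the cache is full.
    private final Map<String, CallState> cache =
            new LinkedHashMap<String, CallState>(MAX_ENTRIES, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, CallState> eldest) {
                    return size() > MAX_ENTRIES;
                }
            };

    EngineCallStateCache(CallStateStore store) {
        this.store = store;
    }

    CallState get(String callId) {
        CallState local = cache.get(callId);     // also marks the entry as recently used
        if (local != null && local.version == store.latestVersion(callId)) {
            // Up-to-date local copy: read from the cache (a "valid hit").
            return local;
        }
        // Cache miss or stale copy: fetch the current call state from the
        // store and cache it, evicting the least-recently-used entry if full.
        CallState fresh = store.lockAndFetch(callId);
        cache.put(callId, fresh);
        return fresh;
    }
}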

Using a local cache is most beneficial when a SIP-aware load balancer manages requests to the engine cluster. With a SIP-aware load balancer, all of the requests for an established call are directed to the same engine server, which improves the effectiveness of the cache. If you do not use a SIP-aware load balancer, the effectiveness of the cache is limited, because subsequent requests for the same call may be distributed to different engine servers (having different cache contents).

Configuring Engine Caching

By default, engine caching is enabled. To disable partial caching of call-state data in the engine, set the engine-call-state-cache-enabled element to false in sipserver.xml:

<engine-call-state-cache-enabled>false</engine-call-state-cache-enabled>

When caching is enabled, the cache holds a maximum of 250 call states. The size of the engine cache is not configurable.

Monitoring and Tuning Cache Performance

The SipPerformanceRuntime MBean monitors the behavior of the engine cache. Table 10-1 describes the MBean attributes.

Table 10-1 SipPerformanceRuntime Attribute Summary

Attribute: cacheRequests
Description: Tracks the total number of requests for session data items.

Attribute: cacheHits
Description: The server increments this attribute each time a request for session data results in a version of that data being found in the engine server's local cache. The counter is incremented even if the cached data is out-of-date and requires updating with data from the SIP call-state store.

Attribute: cacheValidHits
Description: The server increments this attribute each time a request for session data is fully satisfied by a cached version of the data.


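You can read these counters over JMX to gauge how well the cache is performing. The following sketch assumes you have already obtained an MBeanServerConnection to the engine server's runtime MBean server and the ObjectName of its SipPerformanceRuntime MBean; both of those, and the helper name printCacheStats, are deployment-specific assumptions rather than part of the product documentation.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class EngineCacheStats {

    // Reads the engine cache counters from a SipPerformanceRuntime MBean and
    // prints the overall and "valid" hit ratios. The JMX connection and the
    // MBean ObjectName are supplied by the caller and are deployment-specific.
    public static void printCacheStats(MBeanServerConnection connection,
                                       ObjectName sipPerformanceRuntime) throws Exception {
        long requests  = ((Number) connection.getAttribute(sipPerformanceRuntime, "cacheRequests")).longValue();
        long hits      = ((Number) connection.getAttribute(sipPerformanceRuntime, "cacheHits")).longValue();
        long validHits = ((Number) connection.getAttribute(sipPerformanceRuntime, "cacheValidHits")).longValue();

        // cacheHits counts every lookup that found a local copy, even a stale
        // one; cacheValidHits counts only lookups fully satisfied by the cache.
        double hitRatio      = requests == 0 ? 0.0 : (double) hits / requests;
        double validHitRatio = requests == 0 ? 0.0 : (double) validHits / requests;

        System.out.printf("cacheRequests=%d cacheHits=%d cacheValidHits=%d%n",
                requests, hits, validHits);
        System.out.printf("hit ratio=%.1f%%, valid hit ratio=%.1f%%%n",
                hitRatio * 100, validHitRatio * 100);
    }
}

A valid-hit ratio that is consistently much lower than the overall hit ratio suggests that requests for the same call are landing on engines holding stale copies, which is the pattern you would expect without a SIP-aware load balancer.
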
Because the cache consumes memory on each engine server (it can hold up to 250 call states), you may need to modify the JVM settings used to run engine servers to meet your performance goals. Cached call states are maintained in the tenured generation of the garbage collector. Try reducing the fixed NewSize value when the cache is enabled (for example, -XX:MaxNewSize=32m -XX:NewSize=32m). The appropriate values depend on the size of the call states used by your applications and on the size of the applications themselves.