Oracle® Communication and Mobility Server Administrator's Guide
10g Release 3 (10.1.3)

Part Number E12656-01

7 Configuring Presence and Presence Web Services

This chapter describes how to configure Presence and Presence Web Services in Oracle Communication and Mobility Server (OCMS).

Overview of Presence

Presence represents the end user's willingness and ability to receive calls. Client presence is often represented as a contact management list, which displays user availability as icons. These icons, which represent not only a user's availability but also a user's location, means of contact, or current activity, enable efficient communication between users.

The Presence application enables a service provider to extend presence service to end users. The application also enables service providers to base other services on presence information. The MBeans registered to the Presence application enable you to configure the presence service, which accepts, stores, and distributes presence information. See also "Presence Server" in Chapter 1, "An Overview of Oracle Communication and Mobility Server".

The Presence application MBeans enable you to manage the following:

Presence Status Publication

A presentity can publish a PIDF (Presence Information Data Format) document containing presence state to the Presence Server.
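
The structure of a PIDF document is defined in RFC 3863. The following minimal Java sketch parses an illustrative PIDF document and extracts the basic open/closed availability status. The entity URI, tuple ID, and contact value are made-up examples, not values produced by OCMS.

import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class PidfExample {
    // An illustrative PIDF document; the entity and tuple id are made-up values.
    static final String PIDF =
        "<?xml version='1.0' encoding='UTF-8'?>" +
        "<presence xmlns='urn:ietf:params:xml:ns:pidf' entity='sip:alice@example.com'>" +
        "  <tuple id='t1'>" +
        "    <status><basic>open</basic></status>" +
        "    <contact>sip:alice@example.com</contact>" +
        "  </tuple>" +
        "</presence>";

    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        Document doc = f.newDocumentBuilder()
                .parse(new ByteArrayInputStream(PIDF.getBytes("UTF-8")));
        // The <basic> element carries the open/closed availability state.
        String basic = doc.getElementsByTagNameNS(
                "urn:ietf:params:xml:ns:pidf", "basic").item(0).getTextContent();
        System.out.println("alice is " + basic); // prints "alice is open"
    }
}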

Presence Status Subscriptions

The Presence Server supports subscriptions to a user's status. When a watcher (subscriber) is authorized to view a user's status, the Presence Server notifies the watcher. The Presence Server also notifies all of the active, authorized watchers when a new presence document is published.

Watcher-Info Support

The Presence Server enables the user who is publishing presence information to subscribe to watcher-info events to receive information on all watchers currently subscribing to the user's presence information. The Presence Server also notifies users of changes in the watcher subscriptions, such as new or terminated subscriptions.

Presence XDMS Authorization of Subscriptions

Whenever a watcher subscribes to a user's presence, the Presence Server checks the authorization policy that the publisher has set to see if the subscriber has the required authorization.

If no matching rule can be found, the subscriber is put in a pending state and a watcher info notification is sent to the publisher. Usually, the publisher's client (User Agent) presents a pop-up box asking whether to accept or reject a new pending subscriber. The answer is added to the publisher's authorization policy document in the form of a rule for this subscriber. The document is then updated by the client on the XDMS using HTTP. When the document is updated, the Presence Server reads the new policy document and acts on the new rule, changing the subscription state accordingly.
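
The following Java sketch illustrates the rule lookup described above against a simplified pres-rules document (the namespaces follow the IETF common-policy and pres-rules schemas). The ruleset content, watcher URIs, and method names are illustrative; a missing match corresponds to the pending state.

import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;

public class PresRulesCheck {
    static final String CP = "urn:ietf:params:xml:ns:common-policy";
    static final String PR = "urn:ietf:params:xml:ns:pres-rules";

    // An illustrative pres-rules document: one rule that allows bob.
    static final String RULES =
        "<ruleset xmlns='" + CP + "' xmlns:pr='" + PR + "'>" +
        " <rule id='r1'>" +
        "  <conditions><identity><one id='sip:bob@example.com'/></identity></conditions>" +
        "  <actions><pr:sub-handling>allow</pr:sub-handling></actions>" +
        " </rule>" +
        "</ruleset>";

    // Returns the sub-handling action for a watcher, or null if no rule matches
    // (no match means the subscription is put in the pending state).
    static String subHandling(Document rules, String watcher) {
        NodeList ruleNodes = rules.getElementsByTagNameNS(CP, "rule");
        for (int i = 0; i < ruleNodes.getLength(); i++) {
            Element rule = (Element) ruleNodes.item(i);
            NodeList ones = rule.getElementsByTagNameNS(CP, "one");
            for (int j = 0; j < ones.getLength(); j++) {
                if (watcher.equals(((Element) ones.item(j)).getAttribute("id"))) {
                    NodeList actions = rule.getElementsByTagNameNS(PR, "sub-handling");
                    return actions.getLength() > 0 ? actions.item(0).getTextContent() : null;
                }
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        Document doc = f.newDocumentBuilder()
                .parse(new ByteArrayInputStream(RULES.getBytes("UTF-8")));
        System.out.println(subHandling(doc, "sip:bob@example.com")); // allow
        System.out.println(subHandling(doc, "sip:eve@example.com")); // null -> pending
    }
}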

Privacy Filtering

A user can create privacy filtering rules to allow or block a user.

Presence Hard State

The hard state feature enables a user to leave a document in the XDMS that notifies watchers when there are no other documents. In general, this feature is used for leaving an off-line note, such as "On Vacation".

Composition of Multiple Presence Sources

If a user has two or more clients (such as a PC and a mobile phone) both publishing presence documents, the Presence Server combines two or more documents into a unified document as dictated by a composition policy. The Presence server supports two different composition policies: a default policy and a policy that performs composition according to the OMA (Open Mobile Alliance) Presence enabler.

The default composition policy is a simple, but robust, algorithm. It adds <dm:timestamp> elements to the <dm:person> and <dm:device> elements if they are missing, and <pidf:timestamp> elements to the <pidf:tuple> elements if they are missing.

When the Presence Server creates the candidate document, it includes all <pidf:tuple> and <dm:device> elements from the source documents. It includes only one <dm:person> element in the candidate document, and uses the latest published element based on the <dm:timestamp> element. All other <dm:person> elements are ignored.
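
The following toy model sketches the default composition behavior described above: every tuple element from the source documents is carried over, while only the most recently timestamped person element survives. The class and field names are illustrative and are not OCMS APIs.

import java.time.Instant;
import java.util.*;

public class DefaultComposition {
    record Person(String publisher, Instant timestamp) {}
    record SourceDoc(List<String> tuples, Person person) {}

    static List<String> composeTuples(List<SourceDoc> sources) {
        List<String> all = new ArrayList<>();
        for (SourceDoc d : sources) all.addAll(d.tuples); // keep every tuple
        return all;
    }

    static Person composePerson(List<SourceDoc> sources) {
        // Pick the person element with the latest timestamp; ignore the rest.
        return sources.stream()
                .map(SourceDoc::person)
                .max(Comparator.comparing(Person::timestamp))
                .orElse(null);
    }

    public static void main(String[] args) {
        SourceDoc pc = new SourceDoc(List.of("pc-tuple"),
                new Person("pc", Instant.parse("2008-01-01T10:00:00Z")));
        SourceDoc phone = new SourceDoc(List.of("phone-tuple"),
                new Person("phone", Instant.parse("2008-01-01T11:00:00Z")));
        System.out.println(composeTuples(List.of(pc, phone))); // [pc-tuple, phone-tuple]
        System.out.println(composePerson(List.of(pc, phone)).publisher()); // phone
    }
}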

Configuring Presence

To enable Presence, configure the MBeans described in the following sections.

Configuring XDMS

The following MBeans enable you to configure the XDMS (XML Document Management Server):

Note:

If you change any attributes of the following MBeans, you must restart OCMS for these changes to take effect.
  • Presence

  • PresenceEventPackage

  • PresenceWInfoEventPackage

  • UAProfileEventPackage

  • XCAPConfig
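
Because changes to the MBeans listed above require a restart, they are typically made once during deployment, either through a management console or through a JMX client. The following minimal sketch changes one Presence MBean attribute over a remote JMX connection; the JMX service URL and ObjectName are assumptions and must be replaced with the values for your installation.

import javax.management.*;
import javax.management.remote.*;

public class SetPresenceAttribute {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi"); // assumed URL
        try (JMXConnector c = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = c.getMBeanServerConnection();
            ObjectName presence = new ObjectName(
                    "oracle:j2eeType=SIPApplication,name=Presence"); // hypothetical name
            mbs.setAttribute(presence, new Attribute("PrivacyFilteringEnabled", true));
            // Remember: changes to these MBeans take effect only after an OCMS restart.
        }
    }
}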

Bus

The Bus MBean supports presence by setting the thread pool, the high and low watermarks for the job queues, and the duration that a job may remain in the queue before a warning is logged. Table 7-1 describes the attributes of the Bus MBean.

Table 7-1 Attributes of the Bus MBean

Attribute Value Type Description

HighWatermark

int

The number of pending jobs at which the bus's exhausted threshold level is reached. The default value is 20.

KeepAlive

long

The number of seconds to keep an idle thread alive before dropping it (if the current number of threads exceeds the value specified for MinThreads). The default value is 60.

LogDuration

long

The duration, in seconds, that an event may remain in the queue. A warning is logged to the system log for events that remain in the queue for a period exceeding the specified duration before they are broadcast to the bus. This warning indicates that the server is about to be overloaded, since an old job has been sent to the bus. The default value is 60.

LowWatermark

int

Specifies the low threshold level for the number of pending jobs. When this threshold is reached from below, the Bus logs a warning that it is about to be choked. At this point, no more warnings are logged until the high watermark level is reached. The default value is 15.

MinThreads

int

The minimum number of threads held in the thread pool. If no threads are used, then the specified number of threads remains in an idle state, ready for upcoming jobs. The default value is 15.

MaxThreads

int

The maximum number of threads held in the thread pool. When the specified number of threads are occupied with jobs, subsequent jobs are placed in a queue and are dealt with as the threads become available. The default value is 10.


PackageManager

The PresenceEventPackage, PresenceWInfoEventPackage, and UA-ProfileEventPackage MBeans enable you to configure the event packages, which define the state information to be reported by a notifier to a watcher (subscriber). These packages form the core of the Presence Server, as most requests flow through them.

A notifier is a User Agent (UA) that generates NOTIFY requests that alert subscribers to the state of a resource (the entity about which watchers request state information). Notifiers typically accept SUBSCRIBE requests to create subscriptions. A watcher is another type of UA, one that receives the NOTIFY requests issued by a notifier. Such requests contain information about the state of a resource of interest to the watcher. Watchers typically also generate SUBSCRIBE requests and send them to notifiers to create subscriptions.

The PackageManager MBean sets the configuration for the PresenceEventPackage, PresenceWInfoEventPackage, and UA-ProfileEventPackage MBeans. Table 7-2 describes the attributes of the PackageManager MBean.

Table 7-2 Attributes of the PackageManager MBean

Attribute Description

CaseSensitiveUserPart

Setting this attribute to true enables case-sensitive handling of the user part of the SIP URI. If this attribute is set to false, then the user part of the URI is matched case-insensitively; for example, foo is considered the same as FoO. The domain part of the URI is always case-insensitive.

EventPackageNames

A comma-separated list of event package names. For example: presence,presence.winfo,ua-profile.

WaitingSubsCleanupInterval

The interval, in seconds, in which the subscription cleanup check runs. The thread sleeps for this period and then awakens to check for any waiting subscriptions with a timestamp older than the MaxWaitingSubsTimeHours parameter. All old subscriptions are then removed from the subscribed resource.

MaxWaitingSubsTimeHours

The maximum time, in hours, that a subscription can be in a waiting state before the server removes it. This parameter is used by the subscription cleanup check thread (see WaitingSubsCleanupInterval) to decide whether a waiting subscription is old enough to be removed from the subscribed resource.
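
The following sketch illustrates the cleanup behavior that these two attributes control: a scheduled task wakes every WaitingSubsCleanupInterval seconds and discards waiting subscriptions older than MaxWaitingSubsTimeHours. The data structures and names are illustrative, not the server's internals.

import java.util.*;
import java.util.concurrent.*;

public class WaitingSubsCleaner {
    record WaitingSub(String watcher, long createdMillis) {}

    private final Map<String, List<WaitingSub>> waitingByResource = new ConcurrentHashMap<>();

    void start(long cleanupIntervalSeconds, long maxWaitingHours) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        long maxAgeMillis = TimeUnit.HOURS.toMillis(maxWaitingHours);
        ses.scheduleWithFixedDelay(() -> {
            long cutoff = System.currentTimeMillis() - maxAgeMillis;
            // Drop every waiting subscription whose timestamp is older than the cutoff.
            waitingByResource.values().forEach(
                    subs -> subs.removeIf(s -> s.createdMillis() < cutoff));
        }, cleanupIntervalSeconds, cleanupIntervalSeconds, TimeUnit.SECONDS);
    }
}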


Presence

The Presence MBean controls how the Presence Server interacts with presentities, Publish User Agents (PUAs) that provide presence information to presence services. The attributes (described in Table 7-3) include those for setting the composition policy for creating a unified document when a user publishes presence documents from two or more clients, as well as setting the blocking, filtering, and presence hard state.

Table 7-3 Attributes of the Presence MBean

Attribute Description/Value

CompositionPolicyFilename

The filename of the composition policy document. Values include compose.xslt, for the OCMS composition policy, and compose_OMA.xslt, for the OMA composition policy.

DefaultSubHandling

The default subscription authorization decision that the server makes when no presence rule is found for an authenticated user. The defined values are:

  • block

  • confirm

  • polite-block

Unauthenticated users are always blocked if no rule is found. For more information, see Section 3.2.1, "Subscription Handling," in the IETF SIMPLE draft for presence rules (http://www.ietf.org/internet-drafts/draft-ietf-simple-presence-rules-04.txt).

DocumentStorageFactory

The name of the DocumentStorageFactory class. The default value is oracle.sdp.presenceeventpackage.document.XMLDocumentStorageFactoryImpl.

DocumentStorageRootUrl

The system identifier for the document storage. In the file storage case, this is the root file URL path where documents are stored. The content of this directory should be deleted when the server is restarted. The default value is file:/tmp/presencestorage/.

DocumentStorageType

The type of storage to be used for presence documents. If the number of users is large, Oracle recommends that you store the presence documents on file. Valid values:

  • file

  • memory

The default value is memory.

HttpAssertedIdentityHeader

The type of asserted identity header used in all HTTP requests from the Presence Server to the XDMS. Set the value of this attribute to one expected by the XDMS. Valid values:

  • X_3GPP_ASSERTED_IDENTITY

  • X_3GPP_INTENDED_IDENTITY

  • X_XCAP_ASSERTED_IDENTITY (The default value.)

PidfManipulationAuid

The ID of the application usage for PIDF (Presence Information Data Format) manipulation. The default value is pidf-manipulation.

PidfManipulationDocumentName

The document name for the PIDF manipulation application usage. For example: hardstate. The default value is hardstate.

PidfManipulationEnabled

Set to true (the default value) to enable PIDF manipulation.

PidfManipulationXcapUri

The SIP URI of the XDMS for the pidf manipulation application usage. The default value is: sip:127.0.0.1;transport=TCP;lr. The loose route (lr) parameter must be included in the SIP URI for the server to function properly.

PoliteBlockPendingSubscription

Set to true if pending subscriptions should be polite-blocked. Polite blocking hides the presentity from watchers with pending subscriptions by sending them fake presence documents. If set to false, the subscriptions remain pending.

PresRulesAuid

The ID of the application usage for presrules. The default is pres-rules.

PresRulesDocumentName

The document name for presrules application usage. The default value is presrules.

PresRulesXcapUri

The SIP URI of the XDMS for the presence rules application usage. The default value is: sip:127.0.0.1;transport=TCP;lr. The loose route (lr) parameter must be included in the SIP URI for the server to function properly.

PrivacyFilteringEnabled

Set to true to enable privacy filtering. Set to false to disable filtering. If privacy filtering is disabled, then all subscriptions that are allowed to see a user's presence will always see everything that has been published for the presentity.

TransformerFactory

The name of the TransformerFactory class. The default value is oracle.xml.jaxp.JXSAXTransformerFactory.


PresenceEventPackage

Table 7-4 describes the attributes of the PresenceEventPackage MBean. The presence event package has two subgroups: publish and subscribe. Each subgroup has a minexpires and a maxexpires parameter that bound the expiry interval of a publication or subscription accepted by the Presence Server. A client states when its publication or subscription expires. If a client requests an expiry time lower than the configured minexpires time, the server returns a 423 (Interval Too Brief) response. If a client requests an expiry time higher than the configured maxexpires time, the server returns the maxexpires time in the response. To keep a publication or subscription alive, the client must re-publish or re-subscribe within the expiry time, and must do so repeatedly throughout the lifetime of the publication or subscription.
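
The following sketch captures this negotiation logic; the method and constant names are illustrative rather than OCMS internals.

public class ExpiryNegotiation {
    static final int SC_INTERVAL_TOO_BRIEF = 423;

    // Returns the granted expiry in seconds, or -1 if the request must be
    // rejected with a 423 (Interval Too Brief) response.
    static int negotiate(int requested, int minExpires, int maxExpires) {
        if (requested < minExpires) {
            return -1; // respond 423; the client must ask for a longer interval
        }
        return Math.min(requested, maxExpires); // cap at the configured maximum
    }

    public static void main(String[] args) {
        System.out.println(negotiate(30, 60, 3600));   // -1 -> 423 response
        System.out.println(negotiate(7200, 60, 3600)); // 3600 -> capped at maxexpires
        System.out.println(negotiate(600, 60, 3600));  // 600 -> granted as requested
    }
}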

Table 7-4 Attributes of the PresenceEventPackage

Attribute Value/Description

Description

A description of the PresenceEventPackage. For example: The event package that enables presence.

DocumentFactory

The DocumentFactory class name. The default value is oracle.sdp.presenceeventpackage.document.PresenceDocumentFactoryImpl.

EscMaxDocumentSize

The maximum size, in bytes, for the contents of a publication. If a client attempts to publish a document that is larger than the specified size, the server sends a 413 (Request Entity Too Large) response. The default value is 10000.

ESCMaxExpires

The maximum time, in seconds, for a publication to expire. The default value is 3600.

ESCMaxPubPerRes

The maximum number of publications allowed per resource. If the maximum number has been reached for a resource when a new publish is received, the server sends the 503 Response (Service Unavailable).

ESCMinExpires

The minimum time, in seconds, for a publication to expire. The default is 60.

EventStateCompositor

The class name of the EventStateCompositor. The default value is oracle.sdp.presenceeventpackage.PublishControl.

Name

The name of this event package. The default value is Presence.

Notifier

The name of the Notifier class. The default value is oracle.sdp.presenceeventpackage.PresenceSubscriptionControl.

NotifierMaxDocumentSize

The maximum size for a SUBSCRIBE.

NotifierMaxExpires

The maximum time, in seconds, for a SUBSCRIBE to expire. The default is 3600.

NotifierMaxNoOfSubsPerRes

The maximum number of subscriptions allowed per resource. If the maximum number has been reached for a resource when a new presence SUBSCRIBE is received, the server sends the 503 Response (Service Unavailable).

NotifierMinExpires

The minimum time, in seconds, for a SUBSCRIBE to expire.

ResourceManagerClassName

The name of the ResourceManager class. The default is oracle.sdp.presenceeventpackage.PresentityManagerImpl.


PresenceWInfoEventPackage

As described in RFC 3857, a Watcher Information Event Package monitors the resources in another event package to ascertain the state of all of the subscriptions to those resources. This information is then sent to the subscribers of the Watcher Information Event Package. As a result, a subscriber learns of changes in the monitored resource's subscriptions.

The PresenceWInfoEventPackage MBean (described in Table 7-5) sets the subscription state information for the Watcher Information Event Package.

Table 7-5 Attributes of the WatcherinfoEventPackage

Attribute Description/Value

Description

A description of the PresenceWInfoEventPackage. For example: The event package that enables watcherinfo.

DocumentFactory

The name of the DocumentFactory class. The default is oracle.sdp.eventnotificationservice.DocumentFactoryImpl.

Name

The name of the event package. The default value is presence.winfo.

Notifier

The Notifier class name. The default value is oracle.sdp.presenceeventpackage.PresenceSubscriptionControl.

NotifierMaxDocumentSize

The maximum document size for SUBSCRIBE.

NotifierMaxExpires

The maximum time, in seconds, for a SUBSCRIBE to expire. The default is 3600.

NotifierMaxNoSubsPerRes

The maximum number of subscriptions allowed per resource. If the maximum number has been reached for a resource when a new presence subscribe is received, the server will send a 503 Response (Service Unavailable). The default value is 100.

NotifierMinExpires

The minimum time, in seconds, for a SUBSCRIBE to expire.

ResourceManagerClassName

The name of the ResourceManager class. The default is oracle.sdp.winfoeventpackage.WatcherinfoResourceManager.


UA-ProfileEventPackage

Table 7-6 describes the attributes of the UA-ProfileEventPackage MBean.

Table 7-6 Attributes of the UA-Profile Event Package

Attributes Description/Value

Description

A description of the UA-ProfileEventPackage. The default value is The event package that enables the ua-profile.

DocumentFactory

The Document Factory class name. The default value is:

oracle.sdp.eventnotificationservice.DocumentFactoryImpl

Name

The name of the event package. The default value is ua-profile.

Notifier

The name of the Notifier class. The default value is:

oracle.sdp.presenceeventpackage.PresenceSubscriptionControl

NotifierMaxDocumentSize

The maximum document size for a SUBSCRIBE.

NotifierMaxExpires

The maximum time, in seconds, for a SUBSCRIBE to expire. The default is 6000.

NotifierMaxNoOfSubsPerRes

The maximum number of subscriptions allowed per resource. If the maximum number has been reached for a resource when a new presence subscribe is received, the server will send a 503 Response (Service Unavailable). The default value is 100.

NotifierMinExpires

The minimum time, in seconds, for a SUBSCRIBE to expire. The default value is 60.

ResourceManager

The name of the Resource Manager class. The default value is:

oracle.sdp.winfoeventpackage.WatcherinfoResourceManager


UserAgentFactoryService

The UserAgentFactoryService MBean configures the user agent factory service. The Presence Server uses the user agent factory to subscribe to changes in XML documents stored in the XDMS for presence. Table 7-7 describes the attributes of the UserAgentFactoryService MBean.

Table 7-7 Attributes of the UserAgentFactoryService MBean

Attribute Name Description/Value

DNSNames

A comma-separated list of DNS (Domain Name System) IP addresses used by the user agent.

IpAddress

The IP address for the user agent client; use the empty string (the default setting) for the default network interface on the current system.

PreferredTransport

The preferred transport protocol that enables communication between the Presence Server and the XDMS. The default value is TCP. Valid values are TCP and UDP.

Port

The IP port for the user agent client. The default value is 5070.


Command Service (XDMS Provisioning)

The Command Service MBean enables user provisioning to the XDMS. For more information, see "CommandService".

XCapConfig

The XCapConfig MBean controls the configuration of the XDMS, the repository of the XCAP (Extensible Markup Language Configuration Access Protocol) documents containing user presence rules (pres-rules) and hard state information. The XCapConfig MBean settings can be ignored if the XDMS is external to OCMS.

Table 7-8 Attributes of the XCapConfig MBean

Attribute Name Description/Value

CreateNonExistingUserstore

Set to true to create a user store if one does not exist when storing a document; otherwise, set to false. If the parameter is set to false and a client tries to store a document for a user that does not exist, then the store fails. If the parameter is set to true, then the user will first be created in the XDMS and then the document will be stored. The default value is true.

MaxContentLength

The maximum size, in bytes, for an XDMS document. Although Oracle recommends a default maximum size per XDMS document of 1 MB (1000 contacts at about 1 KB each), you can increase or decrease the size. If you increase the document size, then ensure that there is sufficient disk space to accommodate the maximum XDMS document size * the number of users * the number of applications. For example, 1 MB documents for 100,000 users across two applications require roughly 200 GB. If you set a smaller per-document size, then this calculation is reduced to the sum over applications of (max_doc_size_n * number of users), where max_doc_size_n is the maximum document size for application n.

The default size for the resource-lists document is also 1 MB.

PersistenceRootUrl

The persistent storage location. Use the default value jpa:oc4j if you are running a single node instance. This provides for default caching.

Use the value jpa:multinode if you are running a multinode presence topology that includes a presence server running on a single instance.

PidfManipulationAuid

The ID of the application usage for PIDF (Presence Information Data Format) manipulation. The default value is pidf-manipulation.

PidfManipulationDocname

The document name for the PIDF manipulation application usage. For example: hardstate. The default value is hardstate.

PresRulesAU

The name of the pres-rules application usage. The default value is pres-rules.

PresRulesDocName

The name of the pres-rules document. The default value is presrules.

PublicContentServerRootUrl

The URL to the public content server root. The URL must be set to the public URL of the content server (that is, the URL of the authentication HTTP proxy server).

PublicXCAPRootUrl

The URL to the public XDMS root, entered as http://<your.xdms.domain.com>/services/. For example, enter http://127.0.0.1:8080/services. The URL defined in this parameter gives clients the location of the content server (which can be on a separate server from the XDMS). The XDMS places this URL in the Content-Type header of its outgoing NOTIFY messages. For example, the Content-Type header in the following NOTIFY message from the XDMS to the Presence Server notes that the body of the pres-rules document is stored externally and also includes instructions within the URL for retrieving the document.

CSeq: 1 NOTIFY 
From: <sip:bob_0@144.22.3.45>;tag=66910936-0e31-41b2-abac-10d7616d04ef 
To: <sip:bob_0@144.22.3.45>;tag=ffa3e97bd77f91e6ca727fbf48a5678b 
Content-Type: message/external-body;URL="http://127.0.0.1:8888/contentserver/pres-rules/users/bob_0@144.22.3.45/presrules";access-type="URL" 
... 
Event: ua-profile;document="pres-rules/users/sip:bob_0@144.22.3.45/presrules";profile-type=application;auid="pres-rules"

RequireAssertedIdentity

Set to true if all HTTP/XDMS requests require an asserted identity header; otherwise, set this parameter to false. Setting this attribute to true requires all XCAP traffic to be authenticated by the Aggregation Proxy. If this attribute is set to true, then any incoming XCAP request that lacks an asserted identity is denied access.


Configuring Presence Web Services

OCMS enables Web Service clients to access presence services through its support of the Parlay X Presence Web Service, as defined in Open Service Access, Parlay X Web Services, Part 14, Presence (ETSI ES 202 391-14). A Parlay X Web Service enables an HTTP Web Service client to access such presence services as publishing and subscribing to presence information. The Parlay X Presence Web Service does not require developers to be familiar with the SIP protocol to build such a Web-based client; instead, Parlay X enables Web developers to build this client using their knowledge of Web Services.

The Presence Web Services application, which is deployed as a child application of the Presence application, contains the MBeans described in the following sections, which enable you to configure a Web Services deployment server.

The Presence Web Services application also includes the PresenceSupplierWebService and PresenceConsumerWebService MBeans, which contain attributes for managing presence publication and watcher subscriptions enabled through the OCMS implementation of Presence Consumer and Presence Supplier interfaces.

PresenceWebServiceDeployer

Starts the JMX framework for the Presence Web Services application and deploys all of its Model MBeans. The operations of the PresenceWebServiceDeployer MBean enable you to retrieve information about the objects exposed by the Presence Web Service.

Table 7-9 Operations of the PresenceWebServiceDeployer MBean

Operation Description

getManagedObjectNames

Returns a String array containing the object names of the deployed application.

getMBeanInfo

Returns the meta-data for the deployed MBean.

getMBeanInfo (locale)

Returns the localized meta-data for the deployed MBean.


PresenceSupplierWebService

The PresenceSupplierWebService MBean (described in Table 7-10) enables you to manage the presence data published to watchers.

Table 7-10 Attributes of the PresenceSupplierWebService MBean

Attributes Description

Expires

The default expiry time, in seconds, for the PUBLISH of a presence status. The value entered for this attribute should be optimized to match that entered for the SessionTimeout attribute.

PIDFManipulationAU

The name of the application usage for PIDF (Presence Information Data Format) manipulation. The default value is pidf-manipulation.

PidfManipulationDocname

The document name for the PIDF manipulation application usage. For example: hardstate. The default value is hardstate.

PresRulesAU

The name of the pres-rules application usage. The default value is pres-rules.

PresRulesDocname

The name of the pres-rules document. The default value is presrules.

PublicXCAPRootUrl

The URL to the public XDMS root, entered as http://<your.xdms.domain.com>/services/. For example, enter http://127.0.0.1:8080/services.

SessionTimeout

The timeout of the HTTP session, in seconds. The value entered for this attribute should be optimized to match the value entered for the Expires attribute. This timeout takes effect for new sessions only.

SIPOutboundProxy

The IP address of the outbound proxy server where all requests are sent on the first hop. Enter this address in the following format:

sip:<IP address>;lr;transport=TCP

You can also enter the default port (5060) in this address. For example, enter sip:127.0.0.1:5060;lr;transport=TCP. The shortest format for entering this address is sip:127.0.0.1;lr.

If you do not define this attribute, then no outbound proxy will be used.


PresenceConsumerWebService

The PresenceConsumerWebService MBean (described in Table 7-11) enables you to set the duration of watcher subscriptions.

Table 7-11 Attributes of the PresenceConsumerWebService MBean

Attribute Value

Expires

The default expiry time, in seconds, for watcher subscriptions. The value entered for this attribute should be optimized to match the value entered for the SessionTimeout attribute.

SessionTimeout

The timeout of the HTTP session, in seconds. The value entered for this attribute should be optimized to match the value entered for the Expires attribute. This timeout takes effect for new sessions only.

SIPOutboundProxy

The IP address of the outbound proxy server where all requests are sent on the first hop. Enter this address in the following format:

sip:<IP address>;lr;transport=TCP

You can also enter the default port (5060) in this address. For example, enter sip:127.0.0.1:5060;lr;transport=TCP. The shortest format for entering this address is sip:127.0.0.1;lr.

If you do not define this attribute, then no outbound proxy will be used.


Aggregation Proxy

The Aggregation Proxy is a server-side entry point for OMA clients that authenticates any XCAP traffic and Web Service calls (which are conducted through HTTP, not SIP) by providing identity assertion. This component acts as the gatekeeper for the trusted domain that houses the Presence Server and the XDMS.

The Parlay X Web Service operates within a trusted domain where the Aggregation Proxy authorizes the user of the Web Service. It authenticates XCAP traffic and Web Service calls emanating from a Parlay X client by inserting identity headers that identify the user of the Web Services. The Aggregation Proxy then proxies this traffic (which is sent over HTTP) to the Parlay X Web Service and XDMS.

The attributes of the Aggregation Proxy MBean (Table 7-12) enable you to set the type of identity assertion that is appropriate to the XDMS. In addition, you set the host and port of the Web Server and XDMS that receive the proxied traffic from the Aggregation Proxy.

Table 7-12 Attributes of the Aggregation Proxy

Attribute Description

AssertedIdentityType

Enter the number corresponding to the identity header inserted into proxied HTTP requests that is appropriate to the XDMS:

  1. X_3GPP_ASSERTED_IDENTITY (the default)

  2. X_3GPP_INTENDED_IDENTITY

  3. X_XCAP_ASSERTED_IDENTITY

ContentHost

Hostname of the Content Server where the Aggregation Proxy sends proxied requests.

ContentPort

The port number of the Content Server where the Aggregation Proxy sends proxied requests.

ContentRoot

The root URL of the Content Server.

IgnoreUserpartCase

Set to true if case-sensitive handling of the user name is not required.

JAASLoginContext

The name for the JAAS (Java Authentication and Authorization Service) javax.security.auth.login.LoginContext.

JAASRoles

A comma-separated list of JAAS roles for authentication. If the value is "*", it will allow all JAAS roles.

PresenceConsumerEndpoint

Note: this attribute is deprecated and is only here for backward compatibility.

The path to the endpoint of the Presence Consumer Web Service. The methods of the Presence Consumer interface enable watchers to obtain presence data.

PresenceSupplierEndpoint

Note: this attribute is deprecated and is only here for backward compatibility.

The path to the endpoint of the Presence Supplier Web Service. The methods of the Presence Supplier interface enable presentities to supply presence data and manage the data accessed by watchers.

TrustedHosts

A comma-separated list of IP addresses of trusted hosts. Asserted identity headers are removed from requests with addresses that are not included in this list.

WebServiceHost

Note: this attribute is deprecated and is only here for backward compatibility.

The host name of the Web Services deployment server to which the Aggregation Proxy proxies requests.

WebServicePort

Note: this attribute is deprecated and is only here for backward compatibility.

The port of the Web Services deployment server to which the Aggregation Proxy proxies requests.

XCAPHost

The host name of the XDMS to which the Aggregation Proxy proxies requests.

XCAPPort

The port of the XDMS to which the Aggregation Proxy proxies requests.

XCAPRoot

The root URL of the XDMS.


Configuring the Aggregation Proxy to Work with Realms

You can configure the Aggregation Proxy to work with one or more realms.

Perform the following steps:

  1. Select aggregationproxy > Administration > Security Provider > OCMSLoginModule > Edit.

    Five attributes are displayed, the most important of which is the realm.

  2. Configure the realm or realms as a comma-separated list in the following format:

    <domain>=<realm>,<domain>=<realm>,...

    For example: us.example.com=usrealm,uk.example.com=ukrealm (these domain and realm names are illustrative).
    

Securing the XDMS with the Aggregation Proxy

Secure the XDMS by deploying it behind the Aggregation Proxy. Access to the XDMS should be restricted to the Aggregation Proxy and the Presence Server. In addition, securing the XDMS requires that you configure the Presence Server application's XCapConfig MBean, the Aggregation Proxy, and the Oracle Communicator as follows:

  • Deny access to any incoming XCAP request that lacks an asserted identity header by setting the value of the RequireAssertedIdentity attribute of the Presence Server's XCapConfig MBean to true. Setting this attribute to true requires all XCAP traffic to be authenticated by the Aggregation Proxy.

  • Set the appropriate XDMS-related values for the XCAPHost, XCAPPort, XCAPRoot, ContentHost, ContentPort and ContentRoot attributes of the Aggregation Proxy MBean.

  • Configure the Oracle Communicator's XDMS settings in customize.xml to point to the Aggregation Proxy (not to the XDMS) by defining the <RootContext> element as aggregationproxy (the context root of the Aggregation Proxy), and by setting the <host> and <port> elements to the host of the Aggregation Proxy and the HTTPS port on that host, such as 443.

The Aggregation Proxy must be deployed as a child application of Subscriber Data Services. You can bind to the default-web-site for HTTP. To enable HTTP over SSL, you must configure the OC4J Container on which the Aggregation Proxy executes to provide HTTPS. Refer to the Oracle Containers for J2EE Security Guide for instructions on configuring HTTPS. To enable access to the Aggregation Proxy over HTTPS, bind the Aggregation Proxy with the secure-web-site. Ensure that the Presence Server binds with the default-web-site if it resides on the same server as the Aggregation Proxy. Because the Presence Server resides in the presence.ear file, all of the HTTP servlets in that EAR file must bind to the default-web-site.

Configuring Scalable Presence Deployments with the User Dispatcher

In non-distributed environments, stateful applications function properly because they receive requests from a single node. In distributed environments, where applications must be scaled over multiple nodes to accommodate traffic, stateful applications may fail because any node can serve a request, not just the one running the application that maintains the session state for that request. The User Dispatcher guarantees that SIP and HTTP user requests are dispatched to the node that maintains the session state needed to successfully process the request; once user requests are directed to the User Dispatcher, they are consistently sent to the same destination.
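
The following minimal sketch shows the core invariant, assuming a simple hash-based pool: the same user URI always resolves to the same node. Actual User Dispatcher pool implementations (see "Failover Actions") are more elaborate, and all names here are illustrative.

import java.util.List;

public class SimpleDispatcher {
    private final List<String> nodes;

    SimpleDispatcher(List<String> nodes) { this.nodes = nodes; }

    // Hash the case-normalized user URI and index into the node list.
    String resolve(String userUri) {
        int h = userUri.toLowerCase().hashCode();
        return nodes.get(Math.floorMod(h, nodes.size()));
    }

    public static void main(String[] args) {
        SimpleDispatcher d = new SimpleDispatcher(
                List.of("presence1:5060", "presence2:5060", "presence3:5060"));
        // The same user always lands on the same node.
        System.out.println(d.resolve("sip:alice@example.com"));
        System.out.println(d.resolve("sip:alice@example.com"));
    }
}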

Failover

Fail-over is a technique that the User Dispatcher can use to provide a higher level of availability of the Presence Server. Because the Presence Server does not replicate any state (such as established subscriptions), the state has to be recreated on the new server node by the clients setting up new subscriptions. Also, because a subscription is a SIP dialog and the User Dispatcher does not record-route, the User Dispatcher cannot fail over a subscription from one node to another: all subsequent requests follow the route set and end up on the old node.

This is not a problem when failing over from a failing server, since that node is not processing the traffic anyway; any request within a dialog will eventually get a failure response or time out, and the dialog will be terminated. However, migrating a user back from the backup node to the original node once it has been repaired, which has to be done to maintain an even distribution after the failure, is a problem that can lead to broken presence functionality. The only way to migrate a subscription from one running server to another is to restart either the client or the server.

However, the server that holds the subscription can actively terminate it by sending out a terminating NOTIFY and discarding the subscription state. This forces the client to issue a new initial SUBSCRIBE to establish a new dialog. For a subscription to migrate from one live node to another, the User Dispatcher must fail over the traffic (which affects only initial requests) and instruct the current server to terminate the subscriptions.

Presentity Migration

Presentities must be migrated when the set of nodes has changed. This involves having the Presence application terminate some or all subscriptions to make the migration happen.

Stateless User Dispatcher and Even Distribution

The most basic approach is to contact the Presence application on all nodes and terminate all of its subscriptions. The problem with this approach is that it generates a burst of traffic, even if that burst is spread out over a period of time, and it leaves presence states temporarily incorrect: the longer the termination period, the longer it takes until all users see a correct presence state.

To optimize this, you could terminate only those subscriptions that actually need to be terminated (the ones that have been migrated). The problem is that the User Dispatcher does not know which users these are (since it does stateless distribution based on an algorithm), and neither does the Presence application (since it only knows which users it has). However, if the Presence application could iterate over all of its subscriptions and, for each of them, ask the User Dispatcher whether that user would be dispatched to this Presence node, then the Presence server could terminate only the subscriptions that will not come back to itself. This may be a heavy operation, but under the constraint that each Presence server is collocated with a User Dispatcher, each such callback would be within the same JVM.
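
The following sketch outlines that callback loop under the stated constraint that the Presence server and User Dispatcher are collocated; the Dispatcher and Subscription types are hypothetical stand-ins, not OCMS interfaces.

import java.util.List;

interface Dispatcher { String resolve(String userUri); }
interface Subscription { String presentityUri(); void terminate(); }

class SelectiveTermination {
    static void migrate(List<Subscription> subs, Dispatcher dispatcher, String selfNode) {
        for (Subscription s : subs) {
            // Keep the subscription only if the dispatcher would still send
            // this user's requests to this node after the node-set change.
            if (!selfNode.equals(dispatcher.resolve(s.presentityUri()))) {
                s.terminate(); // forces the client to re-SUBSCRIBE via the dispatcher
            }
        }
    }
}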

Presence Application Broadcast

Another solution is to have the Presence servers guarantee that a user only exists on one Presence node at any given time. This can be done by having the Presence application broadcast a message to all its neighbors when it receives a PUBLISH or SUBSCRIBE for a new presentity (a presentity that it does not already have a state for). If any other Presence node that receives this broadcast message already has active subscriptions for this presentity, that server must terminate that subscription so that the client can establish a new subscription with the new server.

With this functionality in the Presence application, the User Dispatcher would not have to perform additional steps to migrate a user from one live node to another.

Standby Server Pool

Another approach is to keep a standby pool of idle servers ready to take over traffic from a failing node. When an active node fails, the User Dispatcher redistributes all of its traffic to one server from the standby pool. This node then becomes active, and when the failing node is eventually repaired, it is added to the standby pool. This eliminates the need to migrate users back from a live node when a failed node resumes.

This approach requires more hardware and the utilization of hardware resources will not be optimal.

Failure Types

There are several types of failures that can occur in a Presence server and different types of failures may require different actions from the User Dispatcher.

Fatal Failures

If the failure is fatal, all state information is lost and established sessions fail. However, depending on the failure response, subscriptions (presence subscribe sessions) can survive using a new SIP dialog. If the response code is 481, the presence client must, according to RFC 3265, establish a new SUBSCRIBE dialog, and this is not considered a failure from a presence perspective. All other failure responses may (depending on the client implementation) be handled as errors by the client and should therefore be considered failures.

After a fatal failure, the server has no dialog state from before the failure, which means that all subsequent requests that arrive from this point on receive a 481 response. During the failure period itself, all transactions (both initial and subsequent) are terminated with a non-481 error code, most likely a 500, or an internal 503 or 408 (depending on whether there is a proxy in the route path and on the nature of the failure).

Typically a fatal failure will result in the server process or the entire machine being restarted.

Temporary Failures

A temporary failure is one in which little or no data is lost, so session state remains on the server after the failure. This means that a subsequent request arriving after the server has recovered is processed with the same result as it would have been before the failure.

All requests that arrive during the failure period receive a non-481 failure response, such as 503.

In general, a temporary failure has a shorter duration; a typical example is an overload situation, in which case the server responds with 503 to some or all requests.

Failover Actions

The User Dispatcher can take several actions when it has detected a failure in a Presence server node. The goal of the action is to minimize the impact of the failure in terms of the number of failed subscriptions and publications and the time it takes to recover. In addition, the User Dispatcher needs to keep the distribution as even as possible over the active servers.

The fail-over action used in this version of the User Dispatcher is to disable the node in the pool. This approach is better than removing the node when the ResizableBucketServerPool is used, because the add and remove operations are not deterministic: the result of adding a node depends on the sequence of earlier add and delete operations, whereas the disable operation always results in the same change in distribution given the set of active and disabled nodes.
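
The following sketch shows why disabling is deterministic in a bucket-based pool, under assumed (illustrative) semantics: users hash to fixed buckets, a disabled server's buckets fall through to the remaining active servers in a fixed order, and re-enabling the server restores the original mapping exactly. The names are not the ResizableBucketServerPool API.

import java.util.*;

public class BucketPool {
    private final String[] bucketOwner;            // bucket -> preferred server
    private final Set<String> disabled = new HashSet<>();
    private final List<String> servers;

    BucketPool(List<String> servers, int buckets) {
        this.servers = servers;
        this.bucketOwner = new String[buckets];
        for (int b = 0; b < buckets; b++)
            bucketOwner[b] = servers.get(b % servers.size());
    }

    void disable(String server) { disabled.add(server); }
    void enable(String server)  { disabled.remove(server); }

    String resolve(String userUri) {
        int b = Math.floorMod(userUri.toLowerCase().hashCode(), bucketOwner.length);
        String owner = bucketOwner[b];
        if (!disabled.contains(owner)) return owner;
        // Deterministic fallback: walk the server list from the owner's position.
        int start = servers.indexOf(owner);
        for (int i = 1; i <= servers.size(); i++) {
            String candidate = servers.get((start + i) % servers.size());
            if (!disabled.contains(candidate)) return candidate;
        }
        throw new IllegalStateException("all servers disabled");
    }
}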

Overload Policy

An activated overload policy can indicate several types of failures, but its main purpose is to protect the system from a traffic load that is too big to handle. If such a situation is detected as a failure, fail-over actions can bring down the whole cluster: if the distribution of traffic is fairly even, all the nodes will be in or near an overload situation, and if the dispatchers remove one node from the cluster and redistribute that node's traffic over the remaining nodes, those nodes will certainly enter an overload situation, causing a chain reaction.

Since it is difficult to distinguish this overload situation from a software failure that activates the overload policy even though the system is not under load, it might still be better to take the fail-over action unless the overload policy is disabled. If the system really is in an overload situation, it is probably under-dimensioned, and fail-over should then be disabled.

The User Dispatcher does not fail over when it detects a 503 response (which indicates that the overload policy has been activated). However, if a server is in the highest overload policy state, where it drops messages instead of responding with 503, the User Dispatcher monitor receives an internal 408, which cannot be distinguished from a dead server, and failover occurs.

Synchronization of Failover Events

Depending on the failure detection mechanism, there may be a need to synchronize the fail-over events (or the resulting state) between the different dispatcher instances. This is required if the detection mechanism is not guaranteed to be consistent across the cluster, such as with an error response. For instance, one server node may send a 503 response on one request but work fine afterwards (this can be due to a glitch in the overload policy). If only one 503 was sent, then only one dispatcher instance receives it, and if that event triggers a fail-over, that dispatcher instance will be out of sync with the rest of the cluster. Further, even if a grace period is implemented so that it takes several 503 responses over a period of time to trigger the fail-over, there is still a risk of a race condition if the failure duration is about the same as the grace period.

The following methods can be used to assure that the state after fail-over is synchronized across the cluster of dispatcher instances:

Broadcasting Fail-Over Events

In this approach, each dispatcher instance has to send a notification to all other instances (typically using JGroups or some other multicast technique) when it has decided to take a fail-over action and change the set of servers. This method can still lead to race conditions, since two instances may fail over and send notifications at the same time for two different server nodes.

Shared State

If all dispatcher nodes in the cluster share the same state from a single source of truth, then when the state is changed (due to a fail-over action) by any instance, all other instances will see the change.

Expanding the Cluster

Since the Presence application can generate an exponentially increasing load, because every user subscribes to multiple (and potentially a growing number of) other users, there is a need for a way to dynamically expand the cluster without too much disturbance. Compared to, for instance, a classic telecom application, where it may be acceptable to bring all servers down to upgrade the cluster during low-traffic hours, a Presence system may have higher availability requirements.

Expanding the cluster may involve both adding Presence nodes and User Dispatcher nodes.

When a new Presence server is added to a cluster, some presentities must be migrated from old nodes to the new node in order to keep a fairly even distribution. This migration needs to be minimized to avoid flooding the system with too much traffic when the cluster changes.

When a new User Dispatcher is added to the cluster, that User Dispatcher node must achieve the same dispatching state as the other dispatcher nodes. Depending on the pool implementation, this may require state to be synchronized with the other dispatcher nodes (for instance, when using the bucket pool implementation with persistence).

Updating the Node Set

Depending on the algorithm used to find the server node for a given presentity, a different number of presentities will be migrated to other nodes when a node is added or removed. An optimal pool implementation minimizes this number.

Migrating Presentities

When the node set has been updated some Presentities may have to be migrated to maintain an even distribution. The different ways to do this are described in "Presentity Migration".

Failover Use Cases

These use cases illustrate how the User Dispatcher reacts in different failure situations in one or several Presence server nodes.

One Presence Server Overloaded for 60 Seconds

The cluster consists of four Presence servers, each node consisting of one OCMS instance with a User Dispatcher and a Presence application deployed. 100,000 users are distributed evenly over the four servers (25,000 on each node). Due to an abnormally long GC pause on one of the servers, the processing of messages is blocked by the Garbage Collector, which leads to the SIP queues filling up and the overload policy being activated. Sixty seconds later, processing resumes and the server continues to process messages.

The User Dispatcher does not fail over in this case but keeps sending traffic to the failing node. No sessions are migrated to another node, since all PUBLISH and initial SUBSCRIBE requests continue to be sent to the failing node. Initial SUBSCRIBE requests that arrive during the failure period fail with a non-481 error (most likely 503). It is up to the client to try to set up a new subscription when the failing one expires, or to report a failure. All PUBLISH and initial SUBSCRIBE requests generate failures during this period.

When the failing node resumes normal operation, all traffic is processed again and no requests should fail. The time it takes until all presence states are correct again is minimal, since no sessions were failed over.

If the monitoring feature is implemented in a way that detects the node as down in this case, then some users are migrated to another node and, when this node comes back, they are migrated back again. This generates increased load for a period of time. If the overload policy was activated because of too high a traffic load, this migration is harmful, since the situation will most likely recur and the other servers will most likely also be close to overload. This could lead to a chain reaction resulting in the whole cluster going down and a complete loss of service.

One Presence Server Overloaded Multiple Times for Five Seconds

This use case describes a Presence server that is going in and out of overload over short time periods, such as 5 seconds. This is common if the system is under-dimensioned and can barely cope with the traffic load, but it could also be caused by some other disturbance on that particular node only. The User Dispatcher behaves exactly as in "One Presence Server Overloaded for 60 Seconds", and the result is the same, except that the number of failed sessions and failed-over sessions is smaller due to the shorter failure period.

Overload Policy Triggered by an OCMS Software Failure

A failure in the OCMS software, or in an application deployed on top of it, causes all threads to be locked (deadlock). This eventually causes the incoming queue to fill up, and the overload policy is activated even though the system is not actually overloaded. This is a permanent error that can only be solved by restarting the server.

Depending on whether and how the monitor function is implemented, the number of affected users can be minimized. However, this situation cannot be distinguished from a real overload situation, in which case a fail-over may not be the best action to take.

A Presence Server Hardware Failure

The cluster consists of four Presence servers, each node consisting of one OCMS instance with a User Dispatcher and a Presence application deployed. 100,000 users are distributed evenly over the four servers (25,000 on each node). One of the Presence servers crashes due to a hardware failure. A manual operation is required to replace the broken server with a new one, and only after two hours is the server up and running again. Depending on the type of failure, the response code sent back on transactions proxied to the failed node will be 408 or 503.

In this case, all sessions on this node fail, since the failure duration is (most likely) longer than the expiration time of the subscriptions. If a monitor server is implemented with fail-over, then the failure time is reduced to the detection time (seconds). The users are migrated by the migration feature, which creates increased load for a period of time.

Because the User Dispatcher was also running on the failed node, all of the persisted data for the User Dispatcher is lost when the server is replaced with a new machine.

Expanding the Cluster with One Presence Node

The cluster consists of three Presence servers, each node consisting of one OCMS instance with a User Dispatcher and a Presence application deployed. 100,000 users are distributed evenly over the three servers (roughly 33,000 on each node). A new node is installed and added to the cluster. The following sequence of operations is performed to add the new node:

  1. The User Dispatcher and the Presence application on the new node are configured with the same settings as the rest of the cluster. This includes synchronizing the distribution state to the new User Dispatcher in case of a pool implementation with persistence.

  2. The addServer JMX operation is invoked with the new node on the cluster User Dispatcher MBean. This will invoke the addServer operation on all User Dispatcher nodes (including the new node).

  3. The Load Balancer is reconfigured with the new node so that initial requests are sent to the new User Dispatcher node.

  4. Depending on the migration approach an additional JMX operation may be invoked on the Presence application (using the cluster MBean server).

The result is that the new distribution of users is 25,000 on each node, after roughly 8,000 users have been migrated from each of the original nodes. Depending on the migration method, this generates an increased load of traffic on the system over a period of time.
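
The following sketch shows how step 2 above might be performed from a JMX client; the service URL, ObjectName, and operation signature are assumptions to be checked against the actual User Dispatcher MBean.

import javax.management.*;
import javax.management.remote.*;

public class AddServerExample {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi"); // assumed URL
        try (JMXConnector c = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = c.getMBeanServerConnection();
            ObjectName dispatcher = new ObjectName(
                    "oracle:type=ClusterUserDispatcher"); // hypothetical name
            // Invoke addServer on the cluster MBean, which delegates to all nodes.
            mbs.invoke(dispatcher, "addServer",
                    new Object[] { "presence5:5060" },        // the new node (illustrative)
                    new String[] { String.class.getName() }); // assumed signature
        }
    }
}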

Removing a Node from the Cluster

The cluster consists of four Presence servers, each node consisting of one OCMS instance with a User Dispatcher and a Presence application deployed. 100,000 users are distributed evenly over the four servers (25,000 on each node). One Presence node is removed from the cluster. The following sequence of operations is performed to remove the node:

  1. The Load Balancer is reconfigured to exclude the node to be removed.

  2. The removeNode JMX operation is invoked to remove the node from all the User Dispatchers in the cluster. The cluster MBean is used to delegate the operation.

  3. Depending on the migration approach an additional JMX operation may be invoked on the node to be removed.

  4. When all users have been migrated from the node to be removed (the duration of this depends on the migration method) the node is finally stopped and removed from the cluster.

The result is that the new distribution of users is roughly 33,000 on each node, after the removed node's 25,000 users have been migrated (roughly 8,000 to each remaining node).

OPMN Restart After a Presence Server Crash

Consider a four-node cluster with a User Dispatcher and a Presence application deployed on each node. The Presence server JVM on one of the nodes crashes and OPMN restarts the process. The restart takes one minute.

503 Responses from an Application

Due to a software bug or misbehavior in the application, 503 responses are sent for all incoming traffic. The SIP server itself is not under a significant load and the Overload Policy has not been activated. This may or may not be a permanent error condition.