Oracle® Communication and Mobility Server Administrator's Guide 10g Release 3 (10.1.3), Part Number E12656-01
This chapter discusses OCMS deployment topologies.
OCMS supports single-node and clustered, multi-node deployment topologies.
A single-node deployment consists of a single SIP Application Server instance. This deployment, which typically hosts SIP applications along with a database server, is appropriate for running a testing environment or a very small deployment of OCMS.
A SIP Application Server cluster is defined as a set of SIP Application Server instances that share state related to the applications. A cluster consists of one or more application server nodes, with each node running one instance of OCMS.
A highly available OCMS cluster provides the following:
Replication of objects and values contained in a SIP Application Session (see the sketch following this list).
Database-backed location service data.
Load balancing of incoming requests across OCMS SIP application servers.
Overload protection, which prevents the server from malfunctioning during overload by rejecting traffic that cannot be handled properly.
Transparent failover across applications within the cluster. If an instance of an application fails or becomes unresponsive, the session can fail over to another instance of the application on another node in the cluster.
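For replication of SIP Application Session contents to work across nodes, the values an application stores in the session typically must be serializable. The following is a minimal sketch using the standard SIP Servlet API (javax.servlet.sip); the servlet, the attribute name, and the CallState class are illustrative assumptions, not OCMS internals.

```java
import java.io.IOException;
import java.io.Serializable;

import javax.servlet.sip.SipApplicationSession;
import javax.servlet.sip.SipServlet;
import javax.servlet.sip.SipServletRequest;

// Illustrative only: the attribute name and the CallState class are hypothetical.
public class ReplicationAwareServlet extends SipServlet {

    // Values placed in the SipApplicationSession should be Serializable
    // so that the container can replicate them to other nodes.
    static class CallState implements Serializable {
        String calleeContact;
        long inviteReceivedAt;
    }

    @Override
    protected void doInvite(SipServletRequest req) throws IOException {
        SipApplicationSession appSession = req.getApplicationSession();

        CallState state = new CallState();
        state.calleeContact = req.getRequestURI().toString();
        state.inviteReceivedAt = System.currentTimeMillis();

        // The container can replicate this attribute; after a failover the
        // surviving node sees the same value.
        appSession.setAttribute("call.state", state);

        req.createResponse(180).send();
    }
}
```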
Table 2-1 Additional Information
For more information on... | See: |
---|---|
OCMS installation | Oracle Communication and Mobility Server Installation Guide |
Operating systems supported by highly available OCMS clusters | Oracle Communication and Mobility Server Certification Guide |
Configuring a highly available clustered Oracle Application Server environment | |
Configuring highly available OCMS topologies | Chapter 5, "Configuring High Availability" in this guide |
Components of a highly available topology include the following:
A third-party load balancer balances the load of incoming traffic among the Edge Proxy nodes. It also detects failed Edge Proxy nodes and redirects traffic to the remaining Edge Proxy nodes.
The Edge Proxy nodes form sticky connections between clients and servers for the duration of the session, creating a path between a client and server and sending SIP traffic over that path. The Edge Proxy nodes balance the load of SIP traffic among the SIP Application Servers.
Both Edge Proxy Servers are configured with a virtual IP address. Each Edge Proxy node detects failed SIP Application Server nodes and fails over to the remaining SIP Application Server nodes.
The SIP Application Servers are all linked to each Edge Proxy node. The SIP Application Servers are linked to each other through OPMN. The OCMS SIP state on each computer is replicated to the other nodes in the cluster. If one SIP Application Server node fails, another node takes over, using the replicated state of the failed node.
For more information on replicating states among OAS instances and configuring clustering, see "Setting Up a Highly Available Cluster of OCMS Nodes".
The Aggregation Proxy authorizes Web Service calls and authenticates XCAP traffic. The Aggregation Proxy then proxies this traffic to the Parlay X Web Service and XDMS. This is an optional component.
The OCMS Proxy Registrar combines the functionality of a SIP Proxy Server and Registrar. Its main tasks include registering subscribers, looking up subscriber locations, and proxying requests onward. The Proxy Registrar stores user location and registration data in the Oracle database. This is an optional component.
For more information, see "Proxy Registrar".
The User Dispatcher enables the Presence and XDMS applications to scale. The User Dispatcher is a proxy that dispatches SIP and XCAP (over HTTP) requests to their appropriate destinations on a consistent basis.
Because the Presence application maintains the state for all users in the deployment, the User Dispatcher enables scaling (distribution) of the Presence application. The User Dispatcher supports request dispatching to the following Presence sub-applications, which use the SIP and XCAP (over HTTP) protocols (see the sketch after this list):
Presence server
Presence XDMS
Shared XDMS
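The key property of the User Dispatcher is that dispatching is deterministic per user: every SIP or XCAP request for the same user must reach the same server instance. The following sketch illustrates one simple way to achieve this with a hash over the user identity; it assumes a fixed, identical list of instance addresses on every dispatcher and is not the actual OCMS User Dispatcher implementation.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of user-keyed dispatching; the instance addresses are hypothetical.
public class UserKeyedDispatcher {

    private final List<String> instances;  // "host:port" of each Presence Server instance

    public UserKeyedDispatcher(List<String> instances) {
        this.instances = instances;
    }

    // Requests for the same user always map to the same instance, provided
    // every User Dispatcher is configured with the same instance list.
    public String selectInstance(String userId) {
        int bucket = Math.floorMod(userId.hashCode(), instances.size());
        return instances.get(bucket);
    }

    public static void main(String[] args) {
        UserKeyedDispatcher dispatcher = new UserKeyedDispatcher(
                Arrays.asList("ps1.example.com:5070", "ps2.example.com:5071"));
        // Both a SIP AOR and an XCAP user id can serve as the dispatch key.
        System.out.println(dispatcher.selectInstance("sip:alice@example.com"));
    }
}
```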
The following sections describe the supported topologies.
Note:
Only the Oracle Application Server installation mode supports high availability. For more information, refer to the Oracle Communication and Mobility Server Installation Guide.

When deployed as a highly available SIP network, OCMS can be used to implement a basic VoIP system, enabling voice and video calls. This topology (illustrated in Figure 2-1) includes the following:
A hardware load balancer
A cluster of Edge Proxy Servers
A cluster of two SIP Application Servers, each running a SIP Servlet Container
Replicated databases, including user data, authentication data, and user location information
Figure 2-1 OCMS as a Highly Available SIP Network
This topology provides a highly available SIP network capable of handling millions of users. Each SIP Application Server must run an Oracle database and Proxy Registrar application.
The SIP network topology includes hardware and software components described in Table 2-2.
Table 2-2 SIP Network Topology Hardware and Software Requirements
Hardware | Software | Installation Type (1) |
---|---|---|
Load balancer | N/A | N/A |
Two computers with at least 4 GB of RAM and a dual 2.8 GHz CPU | Edge Proxy | Custom installation |
Two computers with at least 4 GB of RAM and a dual 2.8 GHz CPU | | Typical installation |
Footnote 1 Refer to the OCMS Installation Guide for more information.
Load Balancer
A load balancer or DNS round-robin algorithm balances the load of incoming traffic among the Edge Proxy nodes. Using the DNS round-robin algorithm requires all clients to implement DNS lookup.
See also:
"Topology Components" for a description of the components used in this topology.OCMS can be deployed as a Presence Server. The Presence Server topology is deployed on two nodes: one running the Presence Server and the other running the Aggregation Proxy and XDMS. This topology (illustrated in Figure 2-2) can be implemented within an IMS network to provide Presence functionality.
The Presence Server topology includes the following:
One SIP Application Server node with the Presence Server
One SIP Application Server node with an XDMS and an Aggregation Proxy
The Presence Server topology includes hardware and software components described in Table 2-3.
Table 2-3 Presence Server Topology Hardware and Software Requirements
Hardware | Software | Installation Type (1) |
---|---|---|
One computer with at least 4 GB of RAM and a dual 2.8 GHz CPU | Presence Server | Typical installation |
One computer with at least 4 GB of RAM and a dual 2.8 GHz CPU | XDMS and Aggregation Proxy | Typical installation |
Footnote 1 Refer to the OCMS Installation Guide for more information.
XDMS
The XDMS requires manual post-installation configuration, in which Presence is configured as an XDMS. For more information, refer to the OCMS Installation Guide.
See also:
See "Topology Components" for a description of the components used in this topology.This section describes the recommended and supported deployment topology for a large scale Presence Solution requiring Presence, XDMS, and User Dispatcher. It illustrates the typical flows from a multi-node perspective.
To scale across multiple nodes, the User Dispatcher component dispatches all traffic targeting a particular Presentity to the same Presence Server instance.
A Presence Cluster is defined as a set of Presence Nodes connected behind one or more Load Balancers. The Presence Cluster is responsible for processing incoming SUBSCRIBE and PUBLISH requests for the presence event-package and for sending out NOTIFY messages whenever appropriate. The Presence Cluster also accepts and processes SUBSCRIBE requests for the presence.winfo event-package. The Presence Cluster interacts with the XDM Cluster to obtain the information needed to carry out these responsibilities; the information queried from the XDM Cluster consists of users' presence-rules and pidf-manipulation documents.
The Presence Cluster is layered into the following three distinct tiers:
The load-balancing layer, responsible for dispatching incoming traffic to the User Dispatchers. The load balancers are stateless and do not understand SIP as a protocol.
The user-dispatching layer, responsible for dispatching traffic based on user information. A user is assigned to a particular Presence Server instance, and all traffic destined for that user is dispatched to the same Presence Server instance. Even though each User Dispatcher is stateless and does not share state with the other User Dispatchers, all User Dispatchers must have the same view of the Presence Server tier.
The bottom layer is where the Presence Server instances reside. Each instance is separated from the others and does not share any state with other instances. The purpose of the Presence Server tier is to serve incoming SUBSCRIBE and PUBLISH requests destined for the presence event-package, as well as to service subscriptions to the presence.winfo event-package.
The Presence Cluster consists of the following physical nodes:
The Load Balancer, such as an F5.
The Presence Node, which consists of the following components:
User Dispatcher
Presence Server
The XDM Cluster is defined as a set of XDM Nodes connected behind one or more Load Balancers. The XDM Cluster processes all XDM-related traffic, that is, SIP SUBSCRIBE traffic for the ua-profile event-package and XCAP traffic; as such, it handles everything related to manipulating XML documents. The XDM Cluster uses a database for the actual storage of the XML documents, but note that the database, and potentially its cluster, is not part of the XDM Cluster.
The XDM cluster consists of the following layers:
The load-balancing tier, responsible for dispatching both SIP and XCAP traffic to the next layer. For XCAP traffic the next tier is the Aggregation Proxy, whereas SIP traffic goes directly to the User Dispatcher layer.
The Aggregation Proxy layer, which authenticates incoming traffic and, upon successful authentication, forwards the requests to the User Dispatcher layer. All external XCAP traffic goes through the Aggregation Proxy layer; internal traffic does not go through the Aggregation Proxy but rather directly to the User Dispatchers.
The User Dispatcher layer, which from a SIP perspective carries out exactly the same duties as in the Presence Cluster, since it is the same kind of traffic. The main difference in the XDM Cluster is that the User Dispatchers also handle XCAP traffic. XCAP traffic is treated in the same way as SIP, and the purpose of the User Dispatcher is the same for both protocols: to extract user information from the request and dispatch the request to the correct XDMS instance.
The XDM Server layer, which has the same function as the Presence Servers in the Presence Cluster. The XDMS instances serve incoming SUBSCRIBE requests for the ua-profile event-package and, whenever appropriate, send NOTIFY messages to all registered subscribers. Note that the XDMS does not accept PUBLISH requests; the state of the resources (which are XML documents) is updated through XCAP operations. An XDM Client manipulates the documents managed by an XDMS by issuing the appropriate XCAP operations. A successful XCAP operation may alter the content of a document, in which case the XDMS sends NOTIFY messages to the subscribers of that document to inform them of the change. Whenever the XDMS needs to retrieve an XML document, it queries the next layer, the database layer.
The database tier, which physically stores the XML documents managed by the XDMS. This tier provides high availability and scalability, so that if one of the nodes in the database layer fails, documents that resided on that node remain accessible to the XDMS without any loss of data or service.
The XDM Cluster consists of the following physical nodes:
The Load Balancer, such as an F5.
The XDM Node, which consists of the following components:
Aggregation Proxy
User Dispatcher
The XDM Server (XDMS)
The database.
The Presence Node is the main component in the Presence Cluster and is responsible for dispatching incoming traffic to the correct Presence Server instance and for servicing users with presence information. The User Dispatcher serves the same purpose in a single-node deployment as in a multi-node deployment: it dispatches incoming traffic to a particular Presence Server instance, and whether that instance is running on the same physical node is of no relevance to the User Dispatcher. The User Dispatcher identifies a particular node by its full address, that is, the IP address and port, and has no concept of local instances.
A Presence Node always has a User Dispatcher deployed that serves as the main entrance into the node itself. Typically, the User Dispatcher listens on port 5060 and the Presence Servers on that node listen on other ports. In this way, a single node appears to clients as one Presence Server but is in fact multiple instances running behind the User Dispatcher. Each of the components deployed on the Presence Node executes in its own separate Java Virtual Machine; that is, the User Dispatcher and the Presence Server instances execute in their own OC4J and SIP containers. This makes it possible to utilize all the available memory on the machine.
The XDM Node always has an Aggregation Proxy deployed, typically listening on port 80 for XCAP traffic. The Aggregation Proxy authenticates incoming traffic and, upon successful authentication, forwards the request to the User Dispatcher. As with the Presence Node, the XDM Node also has a User Dispatcher deployed (usually on port 5060), and for SIP traffic there is no difference between the XDM and Presence Nodes. The difference between the two types of nodes is that on the XDM Node the User Dispatcher also dispatches XCAP traffic. As it does with SIP, it extracts the user id from the request and, based on that, maps the request to a particular XDMS instance to which it forwards the request.
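As an illustration of how a user id can be derived from XCAP traffic: per the XCAP URI layout (RFC 4825), user documents live under a path of the form .../{auid}/users/{xui}/..., so the path segment following "users" identifies the user (XUI). The sketch below extracts that segment; the XCAP root in the example and the class name are hypothetical.

```java
// Illustrative sketch: derive the dispatch key for XCAP traffic from the request path.
// XCAP user documents live under ".../{auid}/users/{xui}/...", so the path segment
// after "users" identifies the user (XUI).
public class XcapUserExtractor {

    public static String extractXui(String requestPath) {
        String[] segments = requestPath.split("/");
        for (int i = 0; i < segments.length - 1; i++) {
            if ("users".equals(segments[i])) {
                return segments[i + 1];
            }
        }
        return null;  // global tree or an unrecognized layout
    }

    public static void main(String[] args) {
        // Hypothetical XCAP root and document path.
        String path = "/xcap-root/resource-lists/users/sip:alice@example.com/index";
        // The XUI is then mapped to an XDMS instance in the same way as a SIP user.
        System.out.println(extractXui(path));  // prints sip:alice@example.com
    }
}
```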
There will be a number of XDMS instances deployed to which the User Dispatcher dispatches both SIP and XCAP traffic. Just as in the case of the Presence Server instances on the Presence Node, each XDMS instance is not aware of the others and executes in isolation.
The Aggregation Proxy and User Dispatcher are deployed onto the same OC4J container and use the same Java Virtual Machine.
Figure 2-3 shows a complete Presence and XDM cluster with all necessary components. This figure also illustrates that the two clusters, Presence and XDM, are treated as two separate clusters: the way into either network for initial traffic is always through its respective Load Balancer. Even the Presence Servers go through the Load Balancer of the XDM Cluster when setting up subscriptions. However, once a subscription has been established, subsequent requests do not go through the Load Balancer but rather directly to the XDMS instance hosting the subscription. All nodes in the XDM Cluster are directly accessible from the Presence Cluster.
The OCMS Instant Messaging topology is a highly available topology that provides the server-side functionality behind messaging, including instant messaging client applications. Figure 2-4 illustrates a sample topology consisting of six nodes in three clusters. The IM topology comprises a four-node SIP network topology and a two-node Presence Server topology, with the addition of the Application Router. The Application Router routes SIP requests to either the Proxy Registrar or the Presence Server, enabling registration and retrieval of user contact and location data, as well as handling all aspects of Presence publication and notification. The Aggregation Proxy on either SIP Application Server node is used to authenticate subscriber access to presence documents stored on the XDMS.
Figure 2-4 Instant Messaging Service Topology
This topology includes hardware and software components described in Table 2-4.
Table 2-4 Topology Hardware and Software Requirements
Hardware | Software | Installation Type (1) |
---|---|---|
Load balancer | N/A | N/A |
Two computers with at least 4 GB of RAM and a dual 2.8 GHz CPU | Edge Proxy | Custom installation |
Two computers with at least 4 GB of RAM and a dual 2.8 GHz CPU | | Typical installation |
One computer with at least 4 GB of RAM and a dual 2.8 GHz CPU | | Typical installation |
One computer with at least 4 GB of RAM and a dual 2.8 GHz CPU | | Typical installation |
Footnote 1 Refer to the OCMS Installation Guide for more information.
For more information about scaling this hybrid topology, see "Deploying OCMS as a Highly Available SIP Network" and "Deploying OCMS as a Presence Server".
See also:
"Topology Components" for a description of the components used in this topology.An OCMS testing environment is deployed on a single SIP application server node. A single-node OCMS topology is appropriate for testing, demonstrations, and small enterprises.
Note:
Because the testing environment is deployed on a single node, it cannot provide high availability.

Figure 2-5 illustrates a single-node deployment which includes a Proxy Registrar.
Figure 2-6 illustrates a single-node deployment with Proxy Registrar, Presence, Aggregation Proxy, XDMS, and Application Router.
Figure 2-6 Single Node Deployment with Presence
The single node topology includes hardware and software components described in Table 2-5.
Table 2-5 Single Node Topology Hardware and Software Requirements
Hardware | Software | Installation Type (1) |
---|---|---|
One computer with at least 4 GB of RAM and a dual 2.8 GHz CPU | | Typical installation |
If deployment includes Presence: | | Typical installation |
Footnote 1 Refer to the OCMS Installation Guide for more information.
See also:
"Topology Components" for a description of the components used in this topology.The following is recommended for most topologies:
Use TCP as the transport protocol. SIP clients use UDP or TCP to transport SIP messages; if there is a concern about exceeding the UDP MTU size limit, use TCP. The preference for TCP can be enforced by adding NAPTR and SRV records to the DNS indicating that TCP is the preferred protocol. Make sure that clients connecting to OCMS fully support NAPTR and SRV records (a lookup sketch follows these recommendations).
Run the OCMS SIP Server in the default Record-Route mode. The best way for clients to handle NAT/firewall traversal in the network is to establish a TCP connection to the server upon registration and reuse this connection for all incoming and outgoing SIP traffic. This requires that the client have an outboundproxy setting pointing to the Proxy Registrar, and that the Proxy Registrar is configured to use record-route.
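As background on what record-route mode means: a record-routing proxy inserts itself into the dialog's route set, so that subsequent requests flow back through it and can reuse the client's existing connection. The following sketch shows the standard SIP Servlet way of enabling this behavior; it is a generic illustration, not the OCMS Proxy Registrar implementation, and the servlet class name is hypothetical.

```java
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.sip.Proxy;
import javax.servlet.sip.SipServlet;
import javax.servlet.sip.SipServletRequest;

// Illustrative sketch of a record-routing SIP Servlet proxy.
public class RecordRouteProxyServlet extends SipServlet {

    @Override
    protected void doRequest(SipServletRequest req)
            throws ServletException, IOException {
        if (!req.isInitial()) {
            return;  // handling of in-dialog requests is omitted in this sketch
        }
        Proxy proxy = req.getProxy();
        proxy.setRecordRoute(true);        // stay in the signaling path for the dialog
        proxy.proxyTo(req.getRequestURI());
    }
}
```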
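To verify that the NAPTR and SRV records described above are visible to clients, the records can be queried with the JDK's JNDI DNS provider, as in the sketch below. The domain example.com and the host names in the comments are placeholders; under RFC 3263, a NAPTR service field of "SIP+D2T" denotes SIP over TCP and "SIP+D2U" SIP over UDP, so giving the TCP entry the lower order value expresses the TCP preference.

```java
import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

// Illustrative check that the NAPTR and SRV records resolve; "example.com" is a placeholder.
public class SipDnsCheck {

    public static void main(String[] args) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
        env.put(Context.PROVIDER_URL, "dns:");  // use the platform's configured DNS servers
        DirContext dns = new InitialDirContext(env);

        // The NAPTR records for the SIP domain should list "SIP+D2T" (SIP over TCP)
        // with a lower order value than "SIP+D2U" (SIP over UDP) to prefer TCP.
        Attributes naptr = dns.getAttributes("example.com", new String[] { "NAPTR" });
        System.out.println("NAPTR: " + naptr.get("NAPTR"));

        // The SRV record named by the NAPTR replacement field points at the
        // Edge Proxy (or SIP Application Server) host and port, for example 5060.
        Attributes srv = dns.getAttributes("_sip._tcp.example.com", new String[] { "SRV" });
        System.out.println("SRV: " + srv.get("SRV"));
    }
}
```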