Oracle® Communication and Mobility Server Administrator Guide, Release 10.1.3. Part Number B31497-01
This chapter describes OCMS deployment topologies.

OCMS supports two main categories of deployment topologies: single node and clustered.
A single node deployment consists of a single SIP Application Server instance running on one computer. Such a deployment typically runs one or two SIP applications along with an in-memory database. A single node deployment is therefore appropriate for a testing environment or a very small deployment of OCMS.
A SIP Application Server cluster is defined as a set of SIP Application Server instances that share state related to the applications. A cluster includes one or more application server nodes, with each node running one instance.
A highly available OCMS cluster provides the following:
Replication of objects and values contained in a SIP Application Session.
Replication of location service data.
Load balancing of incoming requests across OCMS SIP application servers.
Overload protection, which keeps internal queues short by rejecting new transactions when necessary and by handling new transactions first when in overload mode.

Transparent failover across applications within the cluster, including application restart, which enables applications that fail mid-session to restart and continue execution on another instance.
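The queue-bounding part of the overload-protection behavior above can be sketched as a bounded admission queue. This is an illustrative model only, not OCMS's actual implementation; the class name and threshold are invented for the example.

```python
from collections import deque

class OverloadGuard:
    """Illustrative admission control: keep the internal queue short by
    rejecting new transactions once a high-water mark is reached."""

    def __init__(self, high_water_mark=100):
        self.high_water_mark = high_water_mark
        self.queue = deque()

    @property
    def overloaded(self):
        return len(self.queue) >= self.high_water_mark

    def admit(self, transaction):
        # Reject new work while in overload mode.
        if self.overloaded:
            return False  # the server would answer with an error response
        self.queue.append(transaction)
        return True

    def process_one(self):
        # Drain one queued transaction, if any.
        return self.queue.popleft() if self.queue else None

guard = OverloadGuard(high_water_mark=2)
assert guard.admit("INVITE-1")
assert guard.admit("INVITE-2")
assert not guard.admit("INVITE-3")  # rejected: queue at high-water mark
guard.process_one()
assert guard.admit("INVITE-3")      # admitted again once the queue drains
```

The point of the sketch is that rejection happens at admission time, before the transaction consumes queue capacity, which is what keeps internal queues short under load.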
Table 3-1 Additional Information

For more information on... | See |
---|---|
OCMS installation | Oracle Communication and Mobility Server Installation Guide |
Operating systems supported by highly available OCMS clusters | Oracle Communication and Mobility Server Certification Guide |
Configuring a highly available clustered Oracle Application Server environment | |
Configuring highly available OCMS topologies | Chapter 8, "Configuring High Availability" in this guide |
Components of a highly available topology include the following:
A third-party load balancer distributes incoming traffic among the Edge Proxy nodes. It also detects failed Edge Proxy nodes and redirects their traffic to the remaining Edge Proxy nodes.

A third-party load balancer is an optional component in most topologies; however, it is useful in an IMS topology.
The Edge Proxy nodes form sticky connections between clients and servers for the duration of the session, creating a path between a client and server and sending SIP traffic over that path. The Edge Proxy nodes balance the load of SIP traffic among the SIP Application Servers.
Both Edge Proxy Servers are configured with a virtual IP address. Each Edge Proxy node detects failed SIP Application Server nodes and fails over to the remaining SIP Application Server nodes.
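The sticky routing described above can be illustrated with a simple hash-based scheme: the same dialog identifier always maps to the same SIP Application Server, while distinct dialogs are spread across the cluster. This is a minimal sketch of the general technique, not the Edge Proxy's actual algorithm; the backend host names are hypothetical.

```python
import hashlib

# Hypothetical SIP Application Server backends behind the Edge Proxy.
BACKENDS = ["sipapp1.example.com", "sipapp2.example.com", "sipapp3.example.com"]

def route(call_id, backends=BACKENDS):
    """Map a SIP Call-ID to a backend so that every message in the same
    dialog takes the same path (sticky), while different dialogs are
    distributed across the cluster."""
    digest = hashlib.sha256(call_id.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(backends)
    return backends[index]

# The same Call-ID always routes to the same server.
assert route("a84b4c76e66710@client1") == route("a84b4c76e66710@client1")
```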
The SIP Application Servers are all linked to each Edge Proxy node. The SIP Application Servers are linked to each other through OPMN. The OCMS SIP state on each computer is replicated to the other two nodes. If one SIP Application Server node fails, another node takes over, using the replicated state of the failed node.
For more information on replicating states among OAS instances and configuring clustering, see "Setting Up a Highly Available Cluster of OCMS Nodes".
An Oracle TimesTen database runs on each SIP Application Server node in the relevant topologies. Each database is synchronously replicated to the other. If a SIP Application Server node fails, the other node takes over using the replicated database transactions of the failed node.
An Oracle TimesTen database is required for the following OCMS components:
Proxy Registrar
Presence Server
Any user-defined applications that use a database
For more information on configuring replication for Oracle TimesTen In-Memory database instances, see "Configuring the Proxy Registrar for High Availability" .
The Aggregation Proxy authorizes Web Service calls and authenticates XCAP traffic. The Aggregation Proxy then proxies this traffic to the Parlay X Web Service and XDMS Server. This is an optional component.
The OCMS Proxy Registrar combines the functionality of a SIP Proxy Server and Registrar. Its main tasks include registering subscribers, looking up subscriber locations, and proxying requests onward. The Proxy Registrar stores user location and registration data on the Oracle TimesTen database. This is an optional component.
For more information, see "Proxy Registrar".
Supported topologies include:
Note: Only the Oracle Application Server installation mode supports high availability. For more information, refer to the Oracle Communication and Mobility Server Installation Guide.

OCMS may be deployed as a highly available cluster of SIP Application Servers. This topology may be deployed as part of an IMS network, with OCMS providing highly available access to SIP-based services. This topology is useful to carriers wishing to host SIP Servlet applications, and may include the following components:
A hardware load balancer (optional)
Alternatively, the S-CSCF can balance the load in this topology using one of its supported methods.
Two Edge Proxy nodes
Three SIP Application Server nodes
Figure 3-1 OCMS as a SIP Application Server: Five Node SIP Application Server Topology
The SIP Application Server cluster topology does not include authentication or authorization mechanisms. When used in an IMS network, the S-CSCF node typically handles authentication for the SIP Application Server cluster. Applications deployed to this topology must follow certain guidelines in order to support high availability (see "Configuring High Availability" for details).
The OCMS as a SIP application network topology includes hardware and software components described in Table 3-2.
Table 3-2 OCMS as a SIP Application Network Topology Hardware and Software Requirements
Hardware | Software | Installation Type (Footnote 1) |
---|---|---|
Load balancer | | |
Two computers with at least 4 GB of RAM and a dual 2.8 GHz CPU | Edge Proxy | Custom installation |
Three computers with 4 GB of RAM and a dual 2 GHz CPU | OAS 10.1.3.2 | Typical installation |

Footnote 1: Refer to the OCMS Installation Guide for more information.
See also:
"Topology Components" for a description of the components used in this topology.

When deployed as a highly available SIP network, OCMS can be used to implement a basic VoIP system, enabling voice and video calls. This topology includes the following:
A hardware load balancer
A cluster of Edge Proxy Servers
A cluster of two SIP Application Servers, each running a SIP Servlet Container
Replicated in-memory databases, including user data, authentication data, and user location information
Figure 3-2 OCMS as a Highly Available SIP Network
This topology provides a highly available SIP network capable of handling up to one million users (Footnote 1 in the Footnote Legend). Each SIP Application Server must run an Oracle TimesTen database and the Proxy Registrar application.
The SIP network topology includes hardware and software components described in Table 3-3.
Table 3-3 SIP Network Topology Hardware and Software Requirements
Hardware | Software | Installation Type (Footnote 1) |
---|---|---|
Load balancer | | |
Two computers with at least 4 GB of RAM and a dual 2.8 GHz CPU | Edge Proxy | Custom installation |
Two computers with at least 4 GB of RAM and a dual 2.8 GHz CPU | | Typical installation |

Footnote 1: Refer to the OCMS Installation Guide for more information.
Load Balancer
A load balancer or DNS round-robin algorithm balances the load of incoming traffic among the Edge Proxy nodes. Using the DNS round-robin algorithm requires all clients to implement DNS lookup.
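DNS round-robin can be modeled as follows: the DNS server returns the Edge Proxy pool in rotated order, so successive client lookups land on different nodes. The sketch below models that rotation in a few lines; the host names are hypothetical, and in a real deployment the rotation is performed by the DNS server, not by client code.

```python
from itertools import cycle

# Hypothetical Edge Proxy nodes published under a single DNS name.
EDGE_PROXIES = ["edge1.example.com", "edge2.example.com"]

# Model of a round-robin resolver: each lookup yields the next node in the pool.
_rotation = cycle(EDGE_PROXIES)

def resolve(name):
    """Return one Edge Proxy address per lookup, rotating through the pool."""
    return next(_rotation)

first, second, third = (resolve("sip.example.com") for _ in range(3))
assert first != second  # successive lookups spread the load
assert third == first   # the rotation wraps around the pool
```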
See also:
"Topology Components" for a description of the components used in this topology.

OCMS can be deployed as a Presence Server. The Presence Server topology is deployed on two nodes: one running the Presence Server, the other running the Aggregation Proxy and XDMS Server. This topology can be implemented within an IMS network in order to provide Presence functionality.
The Presence Server topology includes the following:
One SIP Application Server node with the Presence application
One SIP Application Server node with an XDMS server and an Aggregation Proxy
The Presence Server topology includes hardware and software components described in Table 3-4.
Table 3-4 Presence Server Topology Hardware and Software Requirements
Hardware | Software | Installation Type (Footnote 1) |
---|---|---|
One computer with at least 4 GB of RAM and a dual 2.8 GHz CPU | | Typical installation |
One computer with at least 4 GB of RAM and a dual 2.8 GHz CPU | | Typical installation |

Footnote 1: Refer to the OCMS Installation Guide for more information.
XDMS Server
Manual post-installation configuration is required for the XDMS Server, involving configuring Presence as an XDMS Server. For more information, refer to the OCMS Installation Guide.
See also:
See "Topology Components" for a description of the components used in this topology.

The OCMS Instant Messaging topology is a highly available, six-node topology that enables instant messaging client applications. The IM topology comprises a four-node SIP network topology and a two-node Presence Server topology, with the addition of the Application Router. The Application Router routes SIP requests to either the Proxy Registrar or the Presence Server, enabling registration and retrieval of user contact and location data, as well as handling all aspects of Presence publication and notification. The Aggregation Proxy on either SIP Application Server node authenticates subscriber access to presence documents stored on the XDMS Server. The IM topology provides the server-side functionality behind instant messaging client applications.
The Instant Messaging topology includes hardware and software components described in Table 3-5.
Table 3-5 Instant Messaging Topology Hardware and Software Requirements
Hardware | Software | Installation Type (Footnote 1) |
---|---|---|
Load balancer | | |
Two computers with at least 4 GB of RAM and a dual 2.8 GHz CPU | Edge Proxy | Custom installation |
Two computers with at least 4 GB of RAM and a dual 2.8 GHz CPU | | Typical installation |
One computer with at least 4 GB of RAM and a dual 2.8 GHz CPU | | Typical installation |
One computer with at least 4 GB of RAM and a dual 2.8 GHz CPU | | Typical installation |

Footnote 1: Refer to the OCMS Installation Guide for more information.
For more information about scaling this hybrid topology, see "Deploying OCMS as a Highly Available SIP Network" and "Deploying OCMS as a Presence Server".
See also:
"Topology Components" for a description of the components used in this topology.

An OCMS testing environment is deployed on one node, and therefore does not provide high availability. A single node OCMS topology is appropriate for testing, demos, and small enterprises. Such an environment includes:
One SIP Application Server node
Figure 3-6 Single Node Deployment with Presence
The single node topology includes hardware and software components described in Table 3-6.
Table 3-6 Single Node Topology Hardware and Software Requirements
Hardware | Software | Installation Type (Footnote 1) |
---|---|---|
One computer with at least 4 GB of RAM and a dual 2.8 GHz CPU | | Typical installation |
If deployment includes Presence: | | Typical installation |

Footnote 1: Refer to the OCMS Installation Guide for more information.
See also:
"Topology Components" for a description of the components used in this topology.

The following is recommended for most topologies:
Use TCP as the transport protocol. SIP clients use UDP or TCP to transport SIP messages. If there is a concern about exceeding UDP MTU size limits, TCP should be used. The preference for TCP can be enforced by adding NAPTR and SRV records to the DNS indicating that TCP is the preferred protocol. Make sure that clients connecting to OCMS fully support NAPTR and SRV records.
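As an illustration, NAPTR and SRV records expressing a TCP preference might look like the following DNS zone fragment. The domain and host names are hypothetical; lower NAPTR order values are tried first, so the SIP-over-TCP service ("SIP+D2T") is preferred over UDP ("SIP+D2U"):

```
; Prefer SIP over TCP ahead of SIP over UDP
example.com.           IN NAPTR 10 50 "s" "SIP+D2T" "" _sip._tcp.example.com.
example.com.           IN NAPTR 20 50 "s" "SIP+D2U" "" _sip._udp.example.com.

; SRV records pointing at the Edge Proxy nodes
_sip._tcp.example.com. IN SRV 0 1 5060 edge1.example.com.
_sip._tcp.example.com. IN SRV 0 1 5060 edge2.example.com.
```

A client that fully supports NAPTR and SRV lookup will resolve the NAPTR set, select the TCP service, and then pick an Edge Proxy from the corresponding SRV records.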
Run the OCMS SIP Server in the default Record-Route mode. The best way for clients to handle NAT/firewall traversal is to establish a TCP connection to the server upon registration and reuse this connection for all incoming and outgoing SIP traffic. This requires that the client have an outbound proxy setting that points to the Proxy Registrar, and that the Proxy Registrar be configured to use record-route.
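To illustrate (with hypothetical hosts), a proxy running in record-route mode inserts a Record-Route header into dialog-forming requests, so that subsequent in-dialog messages flow back through the proxy over the same TCP connection:

```
INVITE sip:bob@example.com SIP/2.0
Via: SIP/2.0/TCP proxy.example.com;branch=z9hG4bKa7c6
Via: SIP/2.0/TCP alice-pc.example.com;branch=z9hG4bK7431
Record-Route: <sip:proxy.example.com;lr;transport=tcp>
From: Alice <sip:alice@example.com>;tag=1928301774
To: Bob <sip:bob@example.com>
Call-ID: a84b4c76e66710@alice-pc.example.com
CSeq: 314159 INVITE
```

Both endpoints copy the Record-Route value into the Route set for the dialog, keeping the proxy on the signaling path for the life of the session.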
Footnote Legend
Footnote 1: See Oracle "Reference Call Model" for more information.