Oracle® Communications Calendar Server System Administrator's Guide
Release 7.0.5

E54935-01

14 Configuring Calendar Server on a GlassFish Server Cluster

This chapter describes how to deploy and configure Oracle Communications Calendar Server on an Oracle GlassFish Server cluster. The GlassFish Server cluster feature enables you to create a collection of GlassFish Server instances that work together as one logical entity to provide Calendar Server with high availability through failure protection, scalability, and load balancing.

Note:

This is an example deployment that illustrates the basics of setting up a GlassFish Server cluster for Calendar Server.

Topics:

  • Prerequisites for Deploying Calendar Server on a GlassFish Cluster

  • Example GlassFish Server Cluster Deployment Architecture

  • Configuring a Calendar Server and GlassFish Server Cluster Deployment

  • Limitations of This Deployment

Prerequisites for Deploying Calendar Server on a GlassFish Cluster

This information assumes that you are familiar with the following tasks:

  • Installing and configuring Calendar Server. See Calendar Server Installation and Configuration Guide for information about installing and configuring Calendar Server.

  • Setting up clusters using GlassFish Server. See the following documents for more information:

    • Oracle GlassFish Server High Availability Administration Guide

    • Oracle GlassFish Server Quick Start Guide

The hardware and software requirements for configuring Calendar Server in a GlassFish Server cluster are the same as for a non-cluster Calendar Server deployment. See the topic on system requirements in Calendar Server Installation and Configuration Guide. You also need to install a load balancer for the cluster. This example uses Oracle iPlanet Web Server (formerly known as Sun Java System Web Server 7.0). If you use a different load balancer, modify the appropriate steps in the procedures that follow.

Example GlassFish Server Cluster Deployment Architecture

Figure 14-1 shows an example cluster architecture consisting of two GlassFish Server nodes.

Figure 14-1 Calendar Server Cluster Deployment Architecture Example

Description of Figure 14-1 follows
Description of "Figure 14-1 Calendar Server Cluster Deployment Architecture Example"

This figure shows that a "davcluster" is formed from Machines B and C, which are the GlassFish Server cluster nodes for the Calendar Server front-end process. The Calendar Server front ends access the back-end database and document store, as well as the Directory Server. Machine A is the GlassFish Server domain administration server (DAS), which provides the shared location for the Calendar Server configuration information and data. Machine D hosts the load balancer for the cluster.

Configuring a Calendar Server and GlassFish Server Cluster Deployment

Configuring Calendar Server in a GlassFish Server cluster involves the following two high-level steps:

  1. Creating a multi-instance GlassFish Server cluster

  2. Deploying Calendar Server into the GlassFish Server cluster

The following section describes how to create a two-instance GlassFish Server cluster.

To Create a Two-Instance GlassFish Server Cluster

Follow the instructions described in "Deploying an Application to a Two-Instance Cluster" in GlassFish Server Quick Start Guide to set up a GlassFish Server cluster and load balancer. Instead of deploying the sample application in this example, you deploy Calendar Server. That is, replace the example section "To Deploy the Application and Configure the Load Balancer" with the next task, "Deploying Calendar Server on a Two-Instance GlassFish Server Cluster."
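Assuming SSH access from the DAS (Machine A) to the cluster node machines, creating the two-instance cluster for this example might look like the following asadmin sketch. The node and instance names, and the host names, are illustrative; they are not prescribed by this guide.

```shell
# On the DAS (Machine A): create the cluster configuration.
asadmin create-cluster davcluster

# Register each cluster machine as an SSH node (host names are examples).
asadmin create-node-ssh --nodehost machineB.example.com nodeB
asadmin create-node-ssh --nodehost machineC.example.com nodeC

# Create one server instance per node in the cluster.
asadmin create-instance --cluster davcluster --node nodeB davinst1
asadmin create-instance --cluster davcluster --node nodeC davinst2

# Start all instances in the cluster.
asadmin start-cluster davcluster
```

These commands require a running DAS; see the GlassFish Server High Availability Administration Guide for the full option set.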

Notes:

  • If you plan to use a standalone domain administration server (DAS) and to install and configure Calendar Server on one of the cluster nodes, make sure that you install GlassFish Server on the DAS machine in a network-shared location accessible by the cluster nodes. Then, when you run the Calendar Server initial configuration script (init-config) on the cluster node and are prompted for the GlassFish Server installation path, use the network share path of the GlassFish Server 3 DAS installation.

  • If you plan to install and configure GlassFish Server as the root user, you must configure the network share so that no root squashing occurs, to avoid permission issues. For example, on Solaris OS, use the root= option, and on Linux, use the no_root_squash option when sharing over NFS. However, if you plan to install Calendar Server on the DAS machine itself, you do not need to install GlassFish Server on a network share location.
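For example, sharing a GlassFish Server installation directory without root squashing might look like the following. The export path and host names are illustrative.

```shell
# Linux: entry in /etc/exports; no_root_squash lets root on the cluster
# nodes keep root privileges on the share.
#   /export/glassfish3  machineB(rw,no_root_squash,sync) machineC(rw,no_root_squash,sync)

# Solaris OS: equivalent share command using the root= option.
share -F nfs -o rw,root=machineB:machineC /export/glassfish3
```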

Deploying Calendar Server on a Two-Instance GlassFish Server Cluster

To deploy Calendar Server on a two-instance GlassFish Server cluster:

  1. Following the "Example GlassFish Server Cluster Deployment Architecture" described previously, install Calendar Server on cluster node Machine B.

    You must install the Unified Communications Suite distribution on the machine, run the commpkg install command, then select Calendar Server. For more information, see Calendar Server Installation and Configuration Guide.

  2. Create an NFS share on DAS Machine A.

    For both cluster nodes to share the same Calendar Server configuration and data, create a directory on the DAS Machine A, for example, /ocucs, and share it over NFS.

  3. On Machine B and Machine C, mount Machine A's /ocucs share at /ocucs so that all machines access the same /ocucs path.

    Note:

    If you run GlassFish Server as root user, configure the network share so no root squashing occurs.
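    The share and mount for this step might look like the following sketch; the machineA host name and mount options are illustrative.

    ```shell
    # On DAS Machine A: export /ocucs (Linux /etc/exports entry shown).
    #   /ocucs  machineB(rw,sync) machineC(rw,sync)

    # On Machine B and Machine C: mount the share at the same path.
    mkdir -p /ocucs
    mount -t nfs machineA:/ocucs /ocucs

    # Optional /etc/fstab entry so the mount persists across reboots:
    #   machineA:/ocucs  /ocucs  nfs  defaults  0  0
    ```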
  4. On Machine B, configure the Calendar Server instance by running the Calendar Server init-config command.

    For more information, see Calendar Server Installation and Configuration Guide.

  5. When prompted by the init-config command, reply as follows:

    1. To the prompt "Location to store application data and config (datadir)," use the network shared location /ocucs/var/opt/sun/comms/davserver.

    2. To the prompts "Runtime user (cs.user)" and "Runtime group (cs.group)," specify the same user and group under which the GlassFish Server instance runs.

    3. To the prompt "Application server install directory (appsrv.dir.install)," answer according to whether you are configuring Calendar Server on the DAS machine or on a cluster node machine.

      In this example, because you are configuring from Machine B, use the network path of the GlassFish Server installation on DAS Machine A to which the current user has read and write access. That is, use /home/gfuser/glassfish3, assuming /home is the automounted network home directory for users.

    4. To the prompt "Application server target instance name (appsrv.target)," use the name of the cluster that was previously created in "To Create a Two-Instance GlassFish Server Cluster", for example, davcluster.

    5. To the prompt "Calendar server access host (appsrv.http.host)," use the load balancer host for the cluster, for example, Machine D.

    6. To the prompt "Calendar server access port (appsrv.http.port)," use the load balancer port for the cluster.

    7. To the prompt "Application server admin server host (appsrv.admin.host)," use the DAS machine host name, for example, Machine A.

  6. Answer the remaining init-config prompts.
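Put together, the cluster-specific answers in this example would resemble the following summary. The load balancer port, user, and group values shown here are illustrative; use the values from your own deployment.

```
datadir             /ocucs/var/opt/sun/comms/davserver
cs.user / cs.group  gfuser / gfgroup   (same user/group as the GlassFish instance)
appsrv.dir.install  /home/gfuser/glassfish3
appsrv.target       davcluster
appsrv.http.host    machineD.example.com   (load balancer host)
appsrv.http.port    80                     (load balancer port)
appsrv.admin.host   machineA.example.com   (DAS host)
```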

  7. Click Configure Now to configure Calendar Server.

  8. Configure the load balancer on Machine D by adding the following lines to the webserver7base/admin-server/config-store/machineD/default.acl file:

    acl "uri=/davserver";
    allow (http_propfind,http_proppatch,http_mkcol,http_head,http_delete,http_put,http_copy,http_move,http_lock,http_unlock,http_mkcalendar,http_report) user = "anyone";

  9. Add the following line to the webserver7base/admin-server/config-store/machineD/magnus.conf file:

    Init fn="register-http-method" methods="MKCALENDAR"

  10. Run the Web Server wadm deploy-config command to deploy the changed configuration.
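    For example, the deployment command might look like this; the administration user and the configuration name (taken from the config-store directory name) are illustrative.

    ```shell
    # Deploy the modified Web Server configuration (prompts for the
    # administration password).
    wadm deploy-config --user=admin machineD
    ```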

Note:

If you install and configure Calendar Server on the DAS machine, then when you run the davadmin command, you must use the -H cluster-node option for the command to succeed, where cluster-node can be any one of the nodes in the cluster.

Limitations of This Deployment

The GlassFish Server load balancer used in this deployment example supports only cookie-based sticky routing, not IP-based routing. As a consequence, the session-based WCAP protocol does not work well with this load balancer: requests from the same client can be routed to an active instance other than the one that holds the authenticated session. Therefore, Convergence does not work against this HA deployment. To work around this limitation, use an IP-based load balancer.