Sun Java Communications Suite 5 Deployment Planning Guide

Calendar Server Considerations

Calendar Server consists of five major services:

- HTTP Service (cshttpd)
- Administration Service (csadmind)
- Notification Service (csnotifyd)
- Event Notification Service (enpd)
- Distributed Database Service (csdwpd)

In a scalable Calendar Server deployment, you would deploy front-end systems in conjunction with a back-end server. The front-end systems would contain one instance of the cshttpd daemon per processor and a single Administration Service. A back-end server would contain an instance of the Notification Service, Event Notification Service, Distributed Database Service, and Administration Service.
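The front-end/back-end split described above can be summarized in a short sketch. This is only an illustration of which processes run where; the daemon names are the standard Calendar Server process names, and the CPU count is an example value, not a recommendation:

```python
# Illustrative sketch of the scalable deployment described above.
# Daemon names (cshttpd, csadmind, csnotifyd, enpd, csdwpd) are the
# standard Calendar Server process names; the CPU count is an example.

def front_end_processes(cpus):
    """One cshttpd instance per processor, plus a single csadmind."""
    return ["cshttpd"] * cpus + ["csadmind"]

def back_end_processes():
    """One instance each of the back-end services, plus csadmind."""
    return ["csnotifyd", "enpd", "csdwpd", "csadmind"]

print(front_end_processes(2))  # ['cshttpd', 'cshttpd', 'csadmind']
print(back_end_processes())
```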

Authentication and XML/XSLT transformation are two Calendar Service activities that generate heavy load. In a scalable environment, these activities take place on the front-end systems, so you can add CPUs to individual front-end systems, or add more front-end systems, to meet quality-of-service requirements.

Note –

The preceding paragraph does not apply if the Communications Express Calendar client is used for calendar access. Communications Express uses the WCAP protocol to access Calendar Server data, so the Calendar Server infrastructure does not perform the XML/XSLT translations. See Part V, Deploying Communications Express.

Calendar back-end services usually require half as many CPUs as are sized for the Calendar front-end services. To maintain quality of service at the Calendar front end, however, size the Calendar back-end system at around two-thirds of the front-end CPU count.
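As a rough worked example of this sizing rule of thumb (the CPU counts here are illustrative, not a recommendation):

```python
import math

def back_end_cpus(front_end_cpus, qos=False):
    """Rule of thumb from the text: the back end needs about half the
    front-end CPUs; for stricter quality of service, about two-thirds.
    Rounds up, since you cannot deploy a fractional CPU."""
    ratio = 2 / 3 if qos else 1 / 2
    return math.ceil(front_end_cpus * ratio)

print(back_end_cpus(8))            # 4
print(back_end_cpus(8, qos=True))  # 6
```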

Early in your deployment planning, consider separating the Calendar Service into front-end and back-end services.

The Calendar Server HTTP process (cshttpd), typically a component of the front-end services, is a dominant consumer of CPU time. Thus, account for peak calendar usage and choose sufficient front-end processing power to accommodate the expected peak HTTP sessions. Typically, you would make the Calendar Server front end more available through redundancy, that is, by deploying multiple front-end hosts. Because the front-end systems do not maintain any persistent calendar data, they are not good candidates for HA solutions such as Sun Cluster or Veritas. Moreover, the additional hardware and administrative overhead of such solutions makes deploying HA for Calendar Server front ends both expensive and time-consuming.

Note –

The only configuration for Calendar front ends that might warrant a true HA solution is where you have deployed the Calendar front end on the same host that contains a Messaging Server MTA router. Even in this configuration, however, the overhead of such a solution should be carefully weighed against the slight benefit.

A good hardware choice for the Calendar Server front ends is a single- or dual-processor server, with one instance of the Calendar Server cshttpd process deployed per processor. Such a deployment affords a cost-effective solution, enabling you to start with some level of initial client concurrency capability and add client session capacity as you discover peak usage levels on your existing configuration.

When you deploy multiple front ends, a load balancer (with sticky/persistent connections) is necessary to distribute the load across the front-end services.

Note –

Communications Express does not scale beyond two processors. The same hardware choices explained previously for Calendar Server apply to Communications Express deployments.

The Calendar Server back-end services are well balanced in resource consumption and show no evidence of bottleneck formation either in CPU or I/O (disk or network). Thus, a good choice of hardware for the back end would be a SPARC server with a single striped volume. Such a machine presents considerable capacity for large-peak calendar loads.

If your requirements include high availability, it makes sense to deploy the Calendar Server back end with Sun Cluster, as the back end does contain persistent data.

Note –

In a configuration with both front-end and back-end Calendar Server hosts, all hosts must be running the same version of Calendar Server, including any patch or hot fix releases.