This chapter describes planning information for Calendar Server services.
This chapter contains the following sections:
Calendar Server consists of six major services:
HTTP Service (cshttpd) listens for HTTP requests. It receives user requests and returns data to the caller.
Administration Service (csadmind) is required for each instance of Calendar Server. It provides a single point of authentication and administration for Calendar Server and provides most of the administration tools.
Notification Service (csnotify) sends notifications of events and to-dos using either email or the Event Notification Service.
Event Notification Service (enpd) acts as the broker for event alarms.
Distributed Database Service (csdwpd) links multiple database servers together within the same Calendar Server system to form a distributed calendar store.
Backup Service (csstored) implements automatic backups, both archival backups and hot backups. An archival backup is a snapshot with its log files; a hot backup is a snapshot with the log files applied. This service starts automatically when you run the start-cal command. However, it is not enabled at installation time, so you must configure it before it can function. If left unconfigured, Backup Service sends the administrator a message every 24 hours stating that the service is not configured.
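As a sketch of what enabling the Backup Service involves, the ics.conf fragment below turns on both backup types. The parameter names and default paths follow the Calendar Server 6 Administration Guide; verify them against your release before use.

```conf
! Enable archival backups (snapshot plus log files) and choose a target directory.
caldb.berkeleydb.archive.enable = "yes"
caldb.berkeleydb.archive.dir = "/var/opt/SUNWics5/csdb/archive"
! Enable hot backups (snapshot with log files applied) and choose a target directory.
caldb.berkeleydb.hotbackup.enable = "yes"
caldb.berkeleydb.hotbackup.dir = "/var/opt/SUNWics5/csdb/hotbackup"
```

With these settings in place, csstored stops sending the "not configured" notification and begins maintaining backups on its own schedule.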
In a scalable Calendar Server deployment, you would deploy front-end systems in conjunction with a back-end server. The front-end systems would contain one instance of the cshttpd daemon per processor and a single Administration Service. A back-end server would contain an instance of Notification Service, Event Notification Service, Distributed Database Service and Administration Service.
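A split deployment of this kind can be sketched with per-tier service switches in each host's ics.conf. The service.*.enable parameter names below follow the Calendar Server Administration Guide; confirm them against your release.

```conf
! Front-end host (hypothetical fragment): HTTP and admin services only.
service.http.enable = "yes"
service.admin.enable = "yes"
service.notify.enable = "no"
service.ens.enable = "no"
service.dwp.enable = "no"

! Back-end host (hypothetical fragment): store-side services.
service.http.enable = "no"
service.admin.enable = "yes"
service.notify.enable = "yes"
service.ens.enable = "yes"
service.dwp.enable = "yes"
```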
Authentication and XML/XSLT transformation are two Calendar Server activities that generate heavy load. In a scalable environment, these heavy-load activities take place on the front-end systems, so you can add CPUs to individual front-end systems, or add more front-end systems, to meet quality of service requirements.
The preceding paragraph is not applicable if the Communications Express Calendar client is used for calendar access. Communications Express uses the WCAP protocol to access Calendar Server data and therefore the Calendar Server infrastructure is not doing the XML/XSLT translations. See Part V, Deploying Communications Express for information on deploying Communications Express.
Calendar back-end services usually require at least half the number of CPUs sized for the Calendar front-end services. To sustain the quality of service delivered by the Calendar front-end system, however, plan on sizing the Calendar back-end system at around two-thirds of the front-end CPU count.
Early in a deployment, consider separating the Calendar Service into front-end and back-end services. Assign separate host names to the front-end services and the back-end services so that, when it comes time to split the functionality onto different hosts, the changes are essentially internal and do not require users to change how they work.
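One way to apply this advice is to reference only logical host names in ics.conf from the start, even while both tiers run on one machine. The fragment below is a sketch: calbe.example.com is a hypothetical back-end alias, and the caldb.dwp parameter names follow the Calendar Server Administration Guide.

```conf
! Front-end ics.conf fragment: address the back-end store by its
! DNS alias, not by a physical machine name. When the back end later
! moves to its own host, only the DNS record needs to change.
caldb.dwp.defaultserver = "calbe.example.com"
caldb.dwp.server.calbe.example.com.ip = "calbe.example.com"
```

Provisioned users would likewise carry the alias (for example, in the icsDWPHost attribute), so moving the back end does not require touching user entries.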
The Calendar Server HTTP process (cshttpd), typically a component of the front-end services, is a dominant consumer of CPU time. Account for peak calendar usage and provide enough front-end processing power to accommodate the expected peak of HTTP sessions. Typically, you make the Calendar Server front end more available through redundancy, that is, by deploying multiple front-end hosts. Because the front-end systems do not maintain any persistent calendar data, they are not good candidates for HA solutions such as Sun Cluster. Moreover, the additional hardware and administrative overhead of such solutions makes deploying HA for Calendar Server front ends both expensive and time-consuming.
The only configuration for Calendar front ends that might warrant a true HA solution is where you have deployed the Calendar front end on the same host that contains a Messaging Server MTA. Even in this configuration, however, the overhead of such a solution should be carefully weighed against the slight benefit.
A good choice of hardware for the Calendar Server front ends is a single- or dual-processor server, with one instance of the Calendar Server cshttpd daemon deployed per processor. Such a deployment is cost-effective: you can start with a modest level of client concurrency and add client session capacity as you discover peak usage levels on your existing configuration.
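Matching cshttpd processes to processors can be sketched as a single ics.conf setting. The service.http.numprocesses parameter name follows the Calendar Server Administration Guide; confirm it against your release.

```conf
! Hypothetical fragment for a dual-processor front end:
! run one cshttpd process per CPU.
service.http.numprocesses = "2"
```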
When you deploy multiple front ends, a load balancer (with sticky/persistent connections) is necessary to distribute the load across the front-end services.
The Calendar Server back-end services are well balanced in resource consumption and show no evidence of bottleneck formation either in CPU or I/O (disk or network). Thus, a good choice of hardware for the back end would be a SPARC server with a single striped volume. Such a machine presents considerable capacity for large-peak calendar loads.
If your requirements include high availability, it makes sense to deploy the Calendar Server back end with Sun Cluster, as the back end does contain persistent data.
In a configuration with both front-end and back-end Calendar Server hosts, all hosts must be running:
The same operating system environment and version; that is, you cannot have a mixture of systems running Solaris SPARC, Solaris x86, Linux Red Hat, and so forth.
The same releases of Calendar Server, including patch or hotfix releases.
The LDAP data cache option ensures that LDAP data is available immediately after it has been committed. In some configurations of the LDAP directory server, an update might need to be referred to a (remote) master server, from which the update is then replicated down to the local LDAP directory. These kinds of configurations can introduce a delay in the availability of committed data on the local LDAP server.
For example, if your site has deployed a master/slave LDAP configuration where Calendar Server accesses the master LDAP directory through a slave LDAP directory server, which in turn introduces a delay in the availability of committed LDAP data, the LDAP data cache can ensure that your Calendar Server clients have accurate LDAP data.
This section covers the following topics:
Use these guidelines to determine if your site should configure the LDAP data cache:
If Calendar Server at your site accesses your master (or root) LDAP directory server directly with no delays in the availability of committed LDAP data, you don’t need to configure the LDAP data cache. Make sure that the local.ldap.cache.enable parameter is set to “no” (which is the default).
If your site has deployed a Master/Slave LDAP Configuration where Calendar Server accesses the master LDAP directory through a slave LDAP directory server, which in turn introduces a delay in the availability of committed LDAP data, configure the LDAP data cache to ensure that your end users have the most recent data.
A Master/Slave LDAP configuration includes a master (root) directory server and one or more slave (consumer or replica) directory servers. Calendar Server can access the master LDAP directory server either directly or through a slave directory server:
If Calendar Server accesses the master LDAP directory server directly, the LDAP data should be accurate, and you do not need to configure the LDAP data cache.
If Calendar Server accesses the master LDAP directory server through a slave directory server, LDAP data changes are usually written transparently via an LDAP referral to the master directory server, which in turn replicates the data back to each slave directory server.
In this second type of configuration, problems with inaccurate LDAP data can occur because of the delay in the availability of committed LDAP data to the slave directory servers.
For example, Calendar Server commits an LDAP data change, but the new data is not available for a specific amount of time because of the delay in the master directory server updating each slave directory server. A subsequent Calendar Server client operation uses the old LDAP data and presents an out-of-date view.
If the delay in updating the slave directory servers is short (only a few seconds), clients might not experience a problem. However, if the delay is longer (minutes or hours), clients will display inaccurate LDAP data for the length of the delay.
The following table lists the LDAP attributes that are affected by a delay in a master/slave LDAP server configuration where Calendar Server accesses the master LDAP directory server through a slave LDAP directory server.
Table 19–1 Calendar Server LDAP Attributes Affected by Delays
Operation | LDAP Attributes Affected
---|---
Auto provisioning | icsCalendar, icsSubscribed, icsCalendarOwned, icsDWPHost
Calendar groups | icsSet
Calendar creation | icsCalendarOwned, icsSubscribed
Calendar subscription | icsSubscribed
User options | icsExtendedUserPrefs, icsFirstDay, icsTimeZone, icsFreeBusy
Calendar searches | icsCalendarOwned
To ensure that your end users have the most recent LDAP data, configure the LDAP data cache as described in the following section, Resolving the Master-Slave Delay Problem.
The LDAP data cache resolves the master/slave LDAP configuration problem by providing Calendar Server clients with the most recent LDAP data, even when the master directory server has not updated each slave directory server.
If the LDAP data cache is enabled, Calendar Server writes committed LDAP data to the cache database (ldapcache.db file). By default, the LDAP cache database is located in the /var/opt/SUNWics5/csdb/ldap_cache directory, but you can configure a different location if you prefer.
When a client makes a change to the LDAP data for a single user, Calendar Server writes the revised data to the LDAP cache database (as well as to the slave directory server). A subsequent client operation retrieves the LDAP data from the cache database. This data retrieval applies to the following operations for a single user:
User’s attributes at login
User’s options (such as color scheme or time zone)
User’s calendar groups
User’s subscribed list of calendars
Thus, the LDAP data cache database provides for:
Data consistency across processes on a single system—The database is available to all Calendar Server processes on a multiprocessor system.
Data persistence across user sessions—The database is permanent and does not require refreshing. You can configure the time to live (TTL) for an LDAP data cache entry and the interval between each database cleanup.
The LDAP data cache does not provide for:
Reading the cache for searches where a list of entries is expected, for example, searching for attendees for a meeting. This type of search is subject to any LDAP delay. For instance, a newly created calendar does not appear in a calendar search if the LDAP search option is active and the search is performed within the delay period following the calendar's creation.
Reading and writing of the cache across multiple front-end servers. Each front-end server has its own cache, which is not aware of data in other caches.
The capability to handle a user who doesn’t always log into the same server. Such a user will generate different LDAP data in the cache on each server.
Configure the LDAP data cache by setting the appropriate parameters in the ics.conf file. See the Sun Java System Calendar Server 6 2005Q4 Administration Guide for more information.
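A minimal sketch of such an ics.conf configuration appears below. The local.ldap.cache.enable parameter and default cache location are stated earlier in this chapter; the homedir, TTL, and cleanup-interval parameter names are taken from the Administration Guide and should be confirmed against your release.

```conf
! Enable the LDAP data cache.
local.ldap.cache.enable = "yes"
! Location of the cache database (the documented default shown here).
local.ldap.cache.homedir.path = "/var/opt/SUNWics5/csdb/ldap_cache"
! Time to live for a cache entry, in seconds.
local.ldap.cache.entryttl = "3600"
! Interval between database cleanups, in seconds.
local.ldap.cache.cleanup.interval = "1800"
```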
If Calendar Server or the server where Calendar Server is running is not properly shut down, manually delete all files in the ldap_cache directory to avoid any database corruption that might cause problems during a subsequent restart.
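A recovery sequence along these lines might look like the following sketch. The path is the default cache location given earlier in this chapter; the guard makes the removal a no-op if the directory is absent, and Calendar Server should be stopped before and restarted after.

```shell
# Default cache location from this chapter; override LDAP_CACHE_DIR for
# a custom layout.
LDAP_CACHE_DIR="${LDAP_CACHE_DIR:-/var/opt/SUNWics5/csdb/ldap_cache}"

# Stop Calendar Server first (stop-cal), then remove every cache file so
# the LDAP data cache is rebuilt cleanly on restart (start-cal).
if [ -d "$LDAP_CACHE_DIR" ]; then
    rm -f "$LDAP_CACHE_DIR"/*
fi
```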