This chapter explains how to install and configure high availability for Calendar Server 6.3 software using Sun Cluster 3.0 or 3.1.
Configuring Calendar Server for high availability (HA) provides monitoring of, and recovery from, software and hardware failures. The Calendar Server HA feature is implemented as a failover service. This chapter describes two Calendar Server HA configurations using Sun Cluster software: one asymmetric and one symmetric.
This chapter includes the following topics to describe how to install and configure HA for Calendar Server:
6.1 Overview of High Availability Choices for Calendar Server Version 6.3
6.2 Prerequisites for an HA Environment for Your Calendar Server Version 6.3 Deployment
6.7 Configuring a Symmetric High Availability Calendar Server System
6.11 Example Output from the Calendar Configuration Program (Condensed)
You can find a set of worksheets to help you plan a Calendar Server HA configuration in Appendix C, Calendar Server Configuration Worksheet.
High availability can be configured many ways. This section contains an overview of three high availability choices, and information to help you choose which is right for your needs.
This section covers the following topics:
6.1.1 Understanding Asymmetric High Availability for Calendar Server Version 6.3
6.1.2 Understanding Symmetric High Availability for Calendar Server Version 6.3
6.1.4 Choosing a High Availability Model for Your Calendar Server Version 6.3 Deployment
6.1.5 System Down Time Calculations for High Availability in Your Calendar Server 6.3 Deployment
A simple asymmetric high availability system has two physical nodes. The primary node is usually active, with the other node acting as a backup node, ready to take over if the primary node fails. To accomplish a failover, the shared disk array is switched so that it is mastered by the backup node. The Calendar Server processes are stopped on the failing primary node and started on the backup node.
There are several advantages of this type of high availability system. One advantage is that the backup node is dedicated and completely reserved for the primary node. This means there is no resource contention on the backup node when a failover occurs. Another advantage is the ability to perform a rolling upgrade; that is, you can upgrade one node while continuing to run Calendar Server software on the other node. Changes you make to the ics.conf file while upgrading the first node will not interfere with the other instance of Calendar Server software running on the secondary node because the configuration file is read only once, at startup. You must stop and restart the calendar processes before the new configuration takes effect. When you want to upgrade the other node, you perform a failover to the upgraded primary node and proceed with the upgrade on the secondary node.
You can, of course, choose to upgrade the secondary node first, and then the primary node.
The asymmetric high availability model also has some disadvantages. One disadvantage is that the backup node stays idle most of the time, making this resource underutilized. Another possible disadvantage is the single storage array. In the event of a disk array failure with a simple asymmetric high availability system, no backup is available.
A simple symmetric high availability system has two active physical nodes, each with its own disk array with two storage volumes, one volume for the local calendar store, and the other a mirror image of the other node's calendar store. Each node acts as the backup node for the other. When one node fails over to its backup, two instances of Calendar Server run concurrently on the backup node, each running from its own installation directory and accessing its own calendar store. The only thing shared is the computing power of the backup node.
The advantage of this type of high availability system is that both nodes are active simultaneously, thus fully utilizing machine resources. However, during a failure, the backup node will have more resource contention as it runs services for Calendar Server from both nodes.
Symmetric high availability also provides a backup storage array. In the event of a disk array failure, its redundant image can be picked up by the service on its backup node.
To configure a symmetric high availability system, you install the Calendar Server binaries on your shared disk. Doing so might prevent you from performing rolling upgrades, a feature planned for future releases of Calendar Server that enables you to update your system with a Calendar Server patch release with minimal or no down time.
In addition to the two types of highly available systems described in this chapter, a third, hybrid type is also possible: a multi-node (N+1) asymmetric high availability system. In this type, each of the “N” active nodes has its own disk array, and all “N” nodes use the same backup node, which is held in reserve and is normally not active. This backup node is capable of running Calendar Server for any of the “N” nodes and shares each node's disk array, as shown in the preceding graphic. If multiple nodes fail at the same time, the backup node must be capable of running up to “N” instances of Calendar Server concurrently.
The advantages of the N+1 model are that Calendar Server load can be distributed to multiple nodes, and that only one backup node is necessary to sustain all the possible node failures.
The disadvantage of this type of high availability system is the same as for any asymmetric system: the backup node is idle most of the time. In addition, the N+1 high availability system backup node must have excess capacity in the event it must host multiple instances of Calendar Server. This means a higher cost machine is sitting idle. However, the machine idle ratio is 1:N, as opposed to 1:1 in a single asymmetric system.
To configure this type of system, use the instructions for the asymmetric high availability system for each of the “N” nodes and the backup. Use the same backup node each time, but with a different primary node.
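The repetition across the “N” primaries can be scripted. The following sketch only generates the resource-group creation commands; the node names (node1 through node3, backup) and group names (CAL-RG-*) are hypothetical, and the scrgadm syntax is the one used in the asymmetric procedure later in this chapter:

```shell
# Sketch: generate one failover resource-group creation command per
# primary node in an N+1 setup, always pairing it with the same backup.
# Node and group names here are illustrative placeholders.
BACKUP=backup
for PRIMARY in node1 node2 node3; do
    echo "./scrgadm -a -g CAL-RG-${PRIMARY} -h ${PRIMARY},${BACKUP}"
done
```

Each generated line creates a resource group whose primary is one of the “N” nodes and whose failover node is the shared backup.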
The following table summarizes the advantages and disadvantages of each high availability model. Use this information to help you determine which model is right for your deployment.
Table 6–1 Advantages and Disadvantages of Each High Availability Model
The following table illustrates the probability that on any given day the calendar service will be unavailable due to system failure. These calculations assume that, on average, each server goes down for one day every three months due to either a system crash or server hang, and that each storage device goes down for one day every 12 months. These calculations also ignore the small probability of both nodes being down simultaneously.
Table 6–2 System Down Time Calculations
Model | Server Down Time Probability
---|---
Single server (no high availability) | Pr(down) = (4 days of system down + 1 day of storage down)/365 = 1.37%
Asymmetric | Pr(down) = (0 days of system down + 1 day of storage down)/365 = 0.27%
Symmetric | Pr(down) = (0 days of system down + 0 days of storage down)/365 = (near 0)
N + 1 Asymmetric | Pr(down) = (5 hours of system down + 1 day of storage down)/(365xN) = 0.27%/N
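The percentages above follow directly from the stated assumptions. The following quick sketch reproduces two of them; the inputs are the assumed downtime figures from the paragraph above, not measurements:

```shell
# Reproduce the downtime probabilities from the stated assumptions:
# each server is down 1 day per 3 months (4 days/year), and each
# storage device is down 1 day per year.
awk 'BEGIN {
    days = 365
    printf "Single server: %.2f%%\n", (4 + 1) / days * 100
    printf "Asymmetric:    %.2f%%\n", (0 + 1) / days * 100
}'
```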
This section lists the prerequisites for installing Calendar Server in an HA environment.
The following prerequisites apply:
Either the Solaris 9 or the Solaris 10 operating system must be installed on all nodes of the cluster, with required patches
Sun Cluster 3.0 or 3.1 must be installed on all nodes of the cluster
Calendar Server HA Agents package (SUNWscics) must be installed on all nodes of the cluster using the Java Enterprise System installer
Specify local file systems as HAStoragePlus Failover File Systems (FFS), or global file systems as HAStorage Cluster File Systems (CFS)
If you have a version of Sun Cluster 3.0 dated December 2001 or earlier, you must use the global file system, specified as a HAStorage Cluster File System (CFS).
If logical volumes are being created, which is true for the symmetric high availability system, use either Solstice DiskSuite or Veritas Volume Manager.
Use the HAStoragePlus resource type to make locally mounted file systems highly available within a Sun Cluster environment. Any file system resident on a Sun Cluster global device group can be used with HAStoragePlus. An HAStoragePlus file system is available on only one cluster node at any given time. These locally mounted file systems can only be used in failover mode and in failover resource groups. HAStoragePlus offers Failover File System (FFS), in addition to supporting the older Global File System (GFS), or Cluster File System (CFS).
HAStoragePlus has a number of benefits over its predecessor, HAStorage:
HAStoragePlus bypasses the global file service layer completely. For data services that perform intensive disk access, this leads to a significant performance increase.
HAStoragePlus can work with any file system (like UFS, VxFS, and so forth), even those that might not work with the global file service layer. If a file system is supported by the Solaris operating system, it will work with HAStoragePlus.
Use HAStoragePlus resources in a data service resource group with Sun Cluster 3.0 Release May 2002 and later.
For more information on HAStoragePlus, see Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
The following is a list of the tasks necessary to install and configure Calendar Server for Asymmetric High Availability:
Prepare the nodes.
Install the Solaris Operating System software on all nodes of the cluster.
Install Sun Cluster software on all nodes of the cluster.
Install the Calendar Server HA Agents package, SUNWscics, on all nodes of the cluster using the Java Enterprise System installer.
Create a file system on the shared disk.
Install Calendar Server on the Primary and Secondary nodes of the cluster, using the Communications Suite 5 installer.
Run the Directory Preparation Script, comm_dssetup.pl on the machine where the Directory Server LDAP directory resides.
Install and configure the first (primary) node.
Using the Sun Cluster command-line interface, set up HA on the primary node.
Run the Calendar Server configuration program, csconfigurator.sh, on the primary node.
Using the Sun Cluster command-line interface, switch to the secondary node.
Create a symbolic link from the Calendar Server config directory on the primary node to the shared disk config directory.
Install and configure the second (secondary) node.
Run the Calendar Server configuration program on the secondary node by reusing the state file created when you configured the primary node.
Edit the Configuration File, ics.conf.
Using the Sun Cluster command-line interface, configure and enable a resource group for Calendar Server.
To test that the resource group was created successfully, use the Sun Cluster command-line interface to perform a failover to the primary node.
For step-by-step instructions, see 6.6 Installing and Configuring Calendar Server 6.3 Software in an Asymmetric High Availability Environment.
The following is a list of the tasks necessary to install and configure Calendar Server for Symmetric High Availability:
Prepare the nodes.
Install the Solaris Operating System software on all nodes of the cluster.
Install Sun Cluster software on all nodes of the cluster.
Create six file systems, either Cluster File Systems (Global File systems) or Fail Over File Systems (Local File systems).
Create the necessary directories.
Install the Calendar Server HA Agents package, SUNWscics, on all nodes of the cluster using the Java Enterprise System installer.
Install and Configure the first node.
Using the Communications Suite 5 installer, install Calendar Server on the first node of the cluster.
Run the Directory Preparation Script, comm_dssetup.pl, on the machine where the Directory Server LDAP database resides.
If the instances of Calendar Server on the two nodes share the same LDAP server, it is not necessary to repeat this step after installing Calendar Server software on the second node.
Using the Sun Cluster command-line interface, configure HA on the first node.
Run the Calendar Server configuration program, csconfigurator.sh, on the first node.
Using the Sun Cluster command-line interface, fail over to the second node.
Edit the Configuration File, ics.conf, on the first node.
Using the Sun Cluster command-line interface, configure and enable a resource group for Calendar Server on the first node.
Using the Sun Cluster command-line interface, create and enable a resource group for the first node.
To test that the resource group was created successfully, use the Sun Cluster command-line interface to perform a failover to the first node.
Install and configure the second node.
Using the Communications Suite 5 installer, install Calendar Server on the second node of the cluster.
Using the Sun Cluster command-line interface, configure HA on the second node.
Run the Calendar Server configuration program, csconfigurator.sh, on the second node by reusing the state file created when you configured the first node.
Using the Sun Cluster command-line interface, fail over to the first node.
Edit the Configuration File, ics.conf, on the second node.
Using the Sun Cluster command-line interface, create and enable a resource group for Calendar Server on the second node.
To test that the resource group was created successfully, use the Sun Cluster command-line interface to perform a failover to the second node.
For step-by-step instructions, see 6.7 Configuring a Symmetric High Availability Calendar Server System.
Print out this section and record the values you use as you go through the HA installation and configuration process.
This section contains six tables showing the variable names used in all examples:
Table 6–3 Directory Name Variables Used in Asymmetric Examples
Table 6–4 Directory Name Variables Used in Symmetric Examples
Table 6–5 Resource Name Variables for Asymmetric Examples
Table 6–6 Resource Name Variables for Symmetric Examples
Table 6–7 Variable Name for IP Address in Asymmetric Examples
Table 6–8 Variable Name for IP Address in Symmetric Examples
Table 6–3 Directory Name Variables Used in Asymmetric Examples

Example Name | Directory | Description
---|---|---
install-root | /opt | The directory in which Calendar Server is installed.
cal-svr-base | /opt/SUNWics5/cal | The directory in which all Calendar Server files are located.
var-cal-dir | /var/opt/SUNWics5 | The /var directory.
share-disk-dir | /cal | A global directory; that is, a directory shared between nodes in an asymmetric high availability system.
Table 6–4 Directory Name Variables Used in Symmetric Examples

Example Name | Directory | Description
---|---|---
install-rootCS1, install-rootCS2 | /opt/Node1, /opt/Node2 | The directory in which an instance of Calendar Server is installed.
cal-svr-baseCS1, cal-svr-baseCS2 | /opt/Node1/SUNWics5/cal, /opt/Node2/SUNWics5/cal | The directory in which all Calendar Server files are located for the node.
var-cal-dirCS1, var-cal-dirCS2 | /var/opt/Node1/SUNWics5, /var/opt/Node2/SUNWics5 | The /var directories for each node.
share-disk-dirCS1, share-disk-dirCS2 | /cal/Node1, /cal/Node2 | The global (shared) directories each instance of Calendar Server shares with its failover node, used in a symmetric high availability system.
Table 6–5 Resource Name Variables for Asymmetric Examples

Variable Name | Description
---|---
CAL-RG | A calendar resource group.
LOG-HOST-RS | A logical hostname resource.
LOG-HOST-RS-Domain.com | The fully qualified logical hostname resource.
CAL-HASP-RS | An HAStoragePlus resource.
CAL-SVR-RS | A Calendar Server resource.
Table 6–6 Resource Name Variables for Symmetric Examples

Variable Name | Description
---|---
CAL-CS1-RG | A calendar resource group for the first instance of Calendar Server.
CAL-CS2-RG | A calendar resource group for the second instance of Calendar Server.
LOG-HOST-CS1-RS | A logical hostname resource for the first instance of Calendar Server.
LOG-HOST-CS1-RS-Domain.com | The fully qualified logical hostname resource for the first instance of Calendar Server.
LOG-HOST-CS2-RS | A logical hostname resource for the second instance of Calendar Server.
LOG-HOST-CS2-RS-Domain.com | The fully qualified logical hostname resource for the second instance of Calendar Server.
CAL-HASP-CS1-RS | An HAStoragePlus resource for the first instance of Calendar Server.
CAL-HASP-CS2-RS | An HAStoragePlus resource for the second instance of Calendar Server.
CAL-SVR-CS1-RS | A Calendar Server resource for the first instance of Calendar Server.
CAL-SVR-CS2-RS | A Calendar Server resource for the second instance of Calendar Server.
Table 6–7 Variable Name for IP Address in Asymmetric Examples

Logical IP Address | Description
---|---
IPAddress | The IP address on which the cshttpd daemon listens. Use standard dotted-decimal format, for example: 123.45.67.89
Table 6–8 Variable Name for IP Address in Symmetric Examples

Logical IP Address | Description
---|---
IPAddressCS1 | The IP address on which the cshttpd daemon for the first instance of Calendar Server listens. Use standard dotted-decimal format, for example: 123.45.67.89
IPAddressCS2 | The IP address on which the cshttpd daemon for the second instance of Calendar Server listens. Use standard dotted-decimal format, for example: 123.45.67.89
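Because the examples above expect a dotted-decimal address, a rough pre-check of the value you plan to use can save a failed daemon start. The address shown is illustrative:

```shell
# Basic sanity check for a dotted-decimal IPv4 listen address: four
# numeric fields, each between 0 and 255. This is a rough check, not a
# full validator.
ip="192.0.2.45"    # illustrative value for IPAddress
if echo "$ip" | awk -F'.' '
        NF == 4 && $1 ~ /^[0-9]+$/ && $2 ~ /^[0-9]+$/ &&
        $3 ~ /^[0-9]+$/ && $4 ~ /^[0-9]+$/ &&
        $1 <= 255 && $2 <= 255 && $3 <= 255 && $4 <= 255 { ok = 1 }
        END { exit ok ? 0 : 1 }'; then
    echo "listen address looks valid"
else
    echo "listen address is malformed" >&2
fi
```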
This section contains instructions for configuring an asymmetric high availability Calendar Server cluster.
This section contains the following topics:
6.6.1 Creating the File Systems for Your Calendar Server 6.3 HA Deployment
6.6.3 Installing and Configuring High Availability for Calendar Server 6.3 Software
Create a file system on the shared disk. The /etc/vfstab file should be identical on all nodes of the cluster.
For CFS, it should look similar to the following example.
## Cluster File System/Global File System ##
/dev/md/penguin/dsk/d400 /dev/md/penguin/rdsk/d400 /cal ufs 2 yes global,logging
For FFS, it should look similar to the following example.
## Fail Over File System/Local File System ##
/dev/md/penguin/dsk/d400 /dev/md/penguin/rdsk/d400 /cal ufs 2 no logging
The fields in these entries are separated by tabs, not spaces.
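Because a space-separated vfstab entry typically fails only at mount time, it can help to verify the separators beforehand. The following small sketch counts tab-separated fields in an illustrative entry:

```shell
# Count tab-separated fields in a vfstab-style entry; a correct entry
# has 7 fields. The sample line is illustrative.
entry=$(printf '/dev/md/penguin/dsk/d400\t/dev/md/penguin/rdsk/d400\t/cal\tufs\t2\tyes\tglobal,logging')
nfields=$(printf '%s\n' "$entry" | awk -F'\t' '{ print NF }')
if [ "$nfields" -eq 7 ]; then
    echo "entry has 7 tab-separated fields"
else
    echo "entry malformed ($nfields tab-separated fields)" >&2
fi
```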
On all nodes of the cluster, create a directory, /cal, on the shared disk where configuration and data are held. For example, run the following command for every shared disk:

mkdir -p /cal
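Note that the flag that creates missing parent directories is lowercase -p. A sandbox illustration (the paths are temporary stand-ins for the shared-disk directory):

```shell
# mkdir -p creates the directory and any missing parents in one call,
# and does not fail if the directory already exists.
tmp=$(mktemp -d)
mkdir -p "$tmp/cal/config"
mkdir -p "$tmp/cal/config"    # second call is harmless
ls -d "$tmp/cal/config"
```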
This section contains instructions for the tasks involved in installing and configuring high availability for Calendar Server.
Perform each of the following tasks in turn to complete the configuration:
Install Calendar Server on the Primary and Secondary nodes of the cluster, using the Communications Suite 5 installer.
Be sure to specify the same installation root on all nodes.
At the Specify Installation Directories panel, answer with the installation root for both nodes.
This installs the Calendar Server binaries in the following directory: /install-root/SUNWics5/cal. This directory is called the Calendar Server base (cal-svr-base).
Choose the Configure Later option.
After the installation is complete, verify that the files are installed.
# pwd
/cal-svr-base
# ls -rlt
total 16
drwxr-xr-x   4 root     bin          512 Dec 14 12:52 share
drwxr-xr-x   3 root     bin          512 Dec 14 12:52 tools
drwxr-xr-x   4 root     bin         2048 Dec 14 12:52 lib
drwxr-xr-x   2 root     bin         1024 Dec 14 12:52 sbin
drwxr-xr-x   8 root     bin          512 Dec 14 12:52 csapi
drwxr-xr-x  11 root     bin         2048 Dec 14 12:52 html
Run the Directory Preparation Script (comm_dssetup.pl) against your existing Directory Server LDAP.
This prepares your Directory Server by setting up new LDAP schema, index, and configuration data.
For instructions and further information about running comm_dssetup.pl, see Chapter 8, Directory Preparation Tool (comm_dssetup.pl), in Sun Java Communications Suite 5 Installation Guide.
Use the Sun Cluster command line interface as indicated to set up HA on the first node.
Refer to 6.5 Naming Conventions for All Examples in this Deployment Example for Configuring High Availability in Calendar Server Version 6.3 as a key for directory names and Sun Cluster resource names in the examples.
Register the Calendar Server and HAStoragePlus resource types.
./scrgadm -a -t SUNW.HAStoragePlus
./scrgadm -a -t SUNW.scics
Create a failover Calendar Server resource group.
For example, the following instruction creates the calendar resource group CAL-RG with the primary node as Node1 and the secondary, or failover, node as Node2.
./scrgadm -a -g CAL-RG -h node1,node2
Create a logical hostname resource in the Calendar Server resource group and bring the resource group online.
For example, the following instructions create the logical hostname resource LOG-HOST-RS and then bring the resource group CAL-RG online.
./scrgadm -a -L -g CAL-RG -l LOG-HOST-RS
./scrgadm -c -j LOG-HOST-RS -y \
R_description="LogicalHostname resource for LOG-HOST-RS"
./scswitch -Z -g CAL-RG
Create and enable the HAStoragePlus resource.
For example, the following instructions create and enable the HAStoragePlus resource CAL-HASP-RS.
scrgadm -a -j CAL-HASP-RS -g CAL-RG -t SUNW.HAStoragePlus:4 \
-x FilesystemMountPoints=/cal
scrgadm -c -j CAL-HASP-RS \
-y R_description="Failover data service resource for SUNW.HAStoragePlus:4"
scswitch -e -j CAL-HASP-RS
Run the configuration program.
For example, from the /cal-svr-base/sbin directory:
# pwd
/cal-svr-base/sbin
# ./csconfigurator.sh
For further information about running the configuration script, see Chapter 2, Initial Runtime Configuration Program for Calendar Server 6.3 software (csconfigurator.sh), also in this guide.
At the Run Time Configuration panel, deselect both Calendar Server startup options.
At the Directories panel, configure all directories on a shared disk. Use the following locations:
/share-disk-dir/config
/share-disk-dir/csdb
/share-disk-dir/store
/share-disk-dir/logs
/share-disk-dir/tmp
Once you have finished specifying the directories, choose Create Directory.
At the Archive and Hot Backup panel, specify the following choices:
/share-disk-dir/csdb/archive
/share-disk-dir/csdb/hotbackup
When you have finished specifying the directories, choose the Create Directory option.
Verify that the configuration is successful.
Look at the end of the configuration output to make sure it says: “All Tasks Passed.” The following example shows the last part of the configuration output.
...
All Tasks Passed.
Please check install log
/var/sadm/install/logs/Sun_Java_System_Calendar_Server_install.B12141351
for further details.
For a larger sample of the output, see 6.11 Example Output from the Calendar Configuration Program (Condensed).
Click Next to finish configuration.
Switch to the secondary node.
Using the Sun Cluster command line interface, switch to the secondary node. For example, the following command switches the resource group to the secondary (failover) node, Node2:
scswitch -z -g CAL-RG -h Node2
Create a symbolic link from the Calendar Server config directory to the config directory of the Shared File System.
For example, perform the following commands:
# pwd
/cal-svr-base
# ln -s /share-disk-dir/config .
Do not forget the dot (.) at the end of the ln command.
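The effect of the trailing dot can be seen in a scratch directory. The paths below are temporary stand-ins for cal-svr-base and share-disk-dir:

```shell
# With a trailing dot, ln -s creates a link named "config" (the last
# component of the target) inside the current directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/shared/config" "$tmp/calbase"
cd "$tmp/calbase"
ln -s "$tmp/shared/config" .
ls -l config    # shows: config -> .../shared/config
```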
Configure Calendar Server on the secondary node using the state file from the primary node configuration.
Share the configuration of the primary node by running the state file created when you ran the configuration program.
For example, run the following command:
# /cal-svr-base/sbin/csconfigurator.sh -nodisplay -noconsole -novalidate
Check that all the tasks passed as with the first time you ran the configuration program.
Edit the Configuration File (ics.conf)
Edit the ics.conf file by adding the following parameters to the end of the file. The logical hostname of the calendar resource is LOG-HOST-RS.
Back up your ics.conf file before performing this step.
! The following are the changes for making Calendar Server
! Highly Available
!
local.server.ha.enabled="yes"
local.server.ha.agent="SUNWscics"
service.http.listenaddr="IPAddress"
local.hostname="LOG-HOST-RS"
local.servername="LOG-HOST-RS"
service.ens.host="LOG-HOST-RS"
service.http.calendarhostname="LOG-HOST-RS-Domain.com"
local.autorestart="yes"
service.listenaddr="IPAddress"
Create the Calendar Server resource group and enable it.
For this example, the resource group name is CAL-SVR-RS. You will also be required to supply the logical host resource name and the HAStoragePlus resource name.
./scrgadm -a -j CAL-SVR-RS -g CAL-RG -t SUNW.scics \
-x ICS_serverroot=/cal-svr-base \
-y Resource_dependencies=CAL-HASP-RS,LOG-HOST-RS
./scswitch -e -j CAL-SVR-RS
Test the successful creation of the calendar resource group by performing a failover.
./scswitch -z -g CAL-RG -h Node1
When you have finished this step, you have completed the creation and configuration of the asymmetric high availability system for Calendar Server. The section that follows explains how to set up logging on Sun Cluster for debugging purposes.
This section contains instructions for configuring a symmetric high availability Calendar Server system.
To configure a symmetric high availability Calendar Server system, follow the instructions in the following sections:
6.7.2 Installing and Configuring the First Instance of Calendar Server
6.7.3 Installing and Configuring the Second Instance of Calendar Server
There are two preparatory tasks that must be completed before installing Calendar Server on the nodes.
The preparatory tasks are as follows:
In various places in the examples, you need to provide the installation directory (cal-svr-base) for each node. For a symmetric HA system, cal-svr-base differs from that of the asymmetric HA system. For symmetric HA systems, cal-svr-base has the following format: /opt/node/SUNWics5/cal, where /opt/node is the name of the root directory in which Calendar Server is installed (install-root).
For the purposes of the examples, and to differentiate the installation directories of the two Calendar Server instances, they are designated as cal-svr-baseCS1 and cal-svr-baseCS2.
To differentiate the installation roots for the two Calendar Server instances in this example, they are designated as install-rootCS1 and install-rootCS2:
Create six file systems, using either Cluster File Systems (Global File systems) or Fail Over File Systems (Local File systems).
This example is for Global File Systems. The contents of the /etc/vfstab file should look like the following: (Note that the fields are all tab separated.)
## Cluster File System/Global File System ##
/dev/md/penguin/dsk/d500 /dev/md/penguin/rdsk/d500 /cal-svr-baseCS1 ufs 2 yes logging,global
/dev/md/penguin/dsk/d400 /dev/md/penguin/rdsk/d400 /share-disk-dirCS1 ufs 2 yes logging,global
/dev/md/polarbear/dsk/d200 /dev/md/polarbear/rdsk/d200 /cal-svr-baseCS2 ufs 2 yes logging,global
/dev/md/polarbear/dsk/d300 /dev/md/polarbear/rdsk/d300 /share-disk-dirCS2 ufs 2 yes logging,global
/dev/md/polarbear/dsk/d600 /dev/md/polarbear/rdsk/d600 /var-cal-dirCS1 ufs 2 yes logging,global
/dev/md/polarbear/dsk/d700 /dev/md/polarbear/rdsk/d700 /var-cal-dirCS2 ufs 2 yes logging,global
This example is for the Failover File Systems. The contents of the /etc/vfstab file should look like the following: (Note that the fields are all tab separated.)
## Failover File System/Local File System ##
/dev/md/penguin/dsk/d500 /dev/md/penguin/rdsk/d500 /cal-svr-baseCS1 ufs 2 yes logging
/dev/md/penguin/dsk/d400 /dev/md/penguin/rdsk/d400 /share-disk-dirCS1 ufs 2 yes logging
/dev/md/polarbear/dsk/d200 /dev/md/polarbear/rdsk/d200 /cal-svr-baseCS2 ufs 2 yes logging
/dev/md/polarbear/dsk/d300 /dev/md/polarbear/rdsk/d300 /share-disk-dirCS2 ufs 2 yes logging
/dev/md/polarbear/dsk/d600 /dev/md/polarbear/rdsk/d600 /var-cal-dirCS1 ufs 2 yes logging
/dev/md/polarbear/dsk/d700 /dev/md/polarbear/rdsk/d700 /var-cal-dirCS2 ufs 2 yes logging
Create the following required directories on all nodes of the cluster.
# mkdir -p /install-rootCS1 /share-disk-dirCS1 /install-rootCS2 \
/share-disk-dirCS2 /var-cal-dirCS1 /var-cal-dirCS2
Install the Calendar Server HA package, SUNWscics, on all nodes of the cluster.
This must be done from the Java Enterprise System installer.
For more information about the Java Enterprise System installer, refer to the Sun Java Enterprise System 5 Installation and Configuration Guide.
Follow the instructions in this section to install and configure the first instance of Calendar Server. This section covers the following topics:
Verify the files are mounted.
On the primary node (Node1), enter the following command:
df -k
The following is an example of the output you should see:
/dev/md/penguin/dsk/d500   35020572 34738 34635629 1% /install-rootCS1
/dev/md/penguin/dsk/d400   35020572 34738 34635629 1% /share-disk-dirCS1
/dev/md/polarbear/dsk/d300 35020572 34738 34635629 1% /share-disk-dirCS2
/dev/md/polarbear/dsk/d200 35020572 34738 34635629 1% /install-rootCS2
/dev/md/polarbear/dsk/d600 35020572 34738 34635629 1% /var-cal-dirCS1
/dev/md/polarbear/dsk/d700 35020572 34738 34635629 1% /var-cal-dirCS2
Using the Communications Suite 5 installer, install Calendar Server on the Primary node.
At the Specify Installation Directories panel, specify the installation root (install-rootCS1):
For example, if your Primary node is named red and the root directory is dawn, the installation root would be /dawn/red. This is the directory where you are installing Calendar Server on the first node.
Choose Configure Later.
Run the Directory Preparation Tool script on the machine with the Directory Server.
Using the Sun Cluster command-line interface, configure Sun Cluster on the first node by performing the following steps:
Register the following resource types:
./scrgadm -a -t SUNW.HAStoragePlus
./scrgadm -a -t SUNW.scics
Create a failover resource group.

In the following example, the resource group is CAL-CS1-RG, and the two nodes are named Node1 as the primary node and Node2 as the failover node.
./scrgadm -a -g CAL-CS1-RG -h Node1,Node2
Create a logical hostname resource for this node.
The calendar client listens on this logical hostname. The following example uses LOG-HOST-CS1-RS; substitute your actual hostname.
./scrgadm -a -L -g CAL-CS1-RG -l LOG-HOST-CS1-RS
./scrgadm -c -j LOG-HOST-CS1-RS -y \
R_description="LogicalHostname resource for LOG-HOST-CS1-RS"
Bring the resource group online.
scswitch -Z -g CAL-CS1-RG
Create an HAStoragePlus resource and add it to the failover resource group.
In this example, the resource is called CAL-HASP-CS1-RS; substitute your own resource name. The backslashes in the example indicate line continuations.
./scrgadm -a -j CAL-HASP-CS1-RS -g CAL-CS1-RG -t SUNW.HAStoragePlus:4 \
-x FilesystemMountPoints=/install-rootCS1,/share-disk-dirCS1,/cal-svr-baseCS1
./scrgadm -c -j CAL-HASP-CS1-RS \
-y R_description="Failover data service resource for SUNW.HAStoragePlus:4"
Enable the HAStoragePlus resource.
./scswitch -e -j CAL-HASP-CS1-RS
Run the configuration program on the primary node.
# cd /cal-svr-baseCS1/sbin
# ./csconfigurator.sh
For further information about running the configuration script, see the Sun Java System Calendar Server 6.3 Administration Guide.
On the Runtime Configuration panel, deselect both of the Calendar Server startup options.
On the Directories to Store Configuration and Data Files panel, provide the shared disk directories as shown in the following list:
/share-disk-dirCS1/config
/share-disk-dirCS1/csdb
/share-disk-dirCS1/store
/share-disk-dirCS1/logs
/share-disk-dirCS1/tmp
When you have finished specifying the directories, choose Create Directory.
On the Archive and Hot Backup panel, provide the shared disk directory names as shown in the following list:
/share-disk-dirCS1/csdb/archive
/share-disk-dirCS1/csdb/hotbackup
After specifying these directories, choose Create Directory.
Verify that the configuration was successful.
The configuration program displays a series of messages. If all of the messages start with PASSED, the configuration was successful. For an example of the output you might see, see 6.11 Example Output from the Calendar Configuration Program (Condensed).
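If you capture the configuration program's output to a file, the PASSED lines can also be scanned mechanically. The following is a minimal sketch; the file path and sample contents are hypothetical stand-ins, not part of the actual procedure.

```shell
# Hypothetical captured output; in practice, redirect the output of
# csconfigurator.sh to a file and point this check at that file.
cat > /tmp/csconfig.out <<'EOF'
PASSED: /usr/sbin/groupadd icsgroup : status = 9
PASSED: /usr/sbin/useradd -g icsgroup -d / icsuser : status = 9
All Tasks Passed.
EOF

# A successful run ends with "All Tasks Passed" and contains no FAILED lines.
if grep -q 'All Tasks Passed' /tmp/csconfig.out && ! grep -q '^FAILED' /tmp/csconfig.out; then
    echo "configuration OK"
else
    echo "configuration FAILED"
fi
```

This check only confirms the summary lines; still inspect the install log named at the end of the output if anything looks wrong.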
Using the Sun Cluster command-line interface, perform a failover to the second node.
For example:
# /usr/cluster/bin/scswitch -z -g CAL-CS1-RG -h Node2
Edit the configuration file, ics.conf, by adding the parameters shown in the example that follows.
Back up the ics.conf file before starting this step.
! The following changes were made to configure Calendar Server
! Highly Available
!
local.server.ha.enabled="yes"
local.server.ha.agent="SUNWscics"
service.http.listenaddr="IPAddressCS1"
local.hostname="LOG-HOST-CS1-RS"
local.servername="LOG-HOST-CS1-RS"
service.ens.host="LOG-HOST-CS1-RS"
service.http.calendarhostname="LOG-HOST-CS1-RS-Domain.com"
local.autorestart="yes"
service.listenaddr="IPAddressCS1"
The expected value for service.http.calendarhostname is a fully qualified hostname.
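Because a missing HA parameter typically only surfaces at the next failover, it can help to verify the edited file mechanically. The following sketch greps a sample fragment for the required keys; the file path and values are placeholders standing in for your real ics.conf.

```shell
# Sample fragment standing in for the real ics.conf; substitute the
# actual file path on the shared disk.
cat > /tmp/ics.conf.sample <<'EOF'
local.server.ha.enabled="yes"
local.server.ha.agent="SUNWscics"
local.hostname="LOG-HOST-CS1-RS"
local.servername="LOG-HOST-CS1-RS"
service.ens.host="LOG-HOST-CS1-RS"
local.autorestart="yes"
EOF

# Report any required HA parameter that is absent from the file.
for key in local.server.ha.enabled local.server.ha.agent local.autorestart; do
    grep -q "^${key}=" /tmp/ics.conf.sample || echo "missing: ${key}"
done
echo "check complete"
```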
Using the Sun Cluster command-line interface, create the Calendar Server resource.
Create the Calendar Server resource and enable it.
For example:
./scrgadm -a -j CAL-SVR-CS1-RS -g CAL-CS1-RG -t SUNW.scics -x ICS_serverroot=/cal-svr-baseCS1 -y Resource_dependencies=CAL-HASP-CS1-RS,LOG-HOST-CS1-RS
./scswitch -e -j CAL-SVR-CS1-RS
To verify that the Calendar Server resource was created successfully, use the Sun Cluster command-line interface to perform a failover to the first node, which is the primary node.
For example:
./scswitch -z -g CAL-CS1-RG -h Node1
The primary node for the second Calendar Server instance is the second node (Node2).
Verify that the file systems are mounted.
On the primary node (Node2), enter the following command:
df -k
The following is an example of the output you should see:
/dev/md/penguin/dsk/d500    35020572  34738  34635629   1%  /install-rootCS1
/dev/md/penguin/dsk/d400    35020572  34738  34635629   1%  /share-disk-dirCS1
/dev/md/polarbear/dsk/d300  35020572  34738  34635629   1%  /share-disk-dirCS2
/dev/md/polarbear/dsk/d200  35020572  34738  34635629   1%  /install-rootCS2
/dev/md/polarbear/dsk/d600  35020572  34738  34635629   1%  /var-cal-dirCS1
/dev/md/polarbear/dsk/d700  35020572  34738  34635629   1%  /var-cal-dirCS2
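The mount check can also be scripted by matching expected mount points against the last field of the df -k output. In this sketch the df output is hard-coded from the example above for illustration; in practice you would capture the output of a live df -k instead.

```shell
# Sample df -k lines (taken from the example above); substitute the
# output of a live `df -k` on the node being checked.
df_out='/dev/md/penguin/dsk/d500 35020572 34738 34635629 1% /install-rootCS1
/dev/md/penguin/dsk/d400 35020572 34738 34635629 1% /share-disk-dirCS1
/dev/md/polarbear/dsk/d300 35020572 34738 34635629 1% /share-disk-dirCS2'

# The mount point is the last field of each df line; check each
# expected mount point in turn.
for mp in /install-rootCS1 /share-disk-dirCS1 /share-disk-dirCS2; do
    if echo "$df_out" | awk -v mp="$mp" '$NF == mp { found = 1 } END { exit !found }'; then
        echo "mounted: $mp"
    else
        echo "MISSING: $mp"
    fi
done
```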
Using the Sun Java System Communications Suite installer, install Calendar Server on the new primary node (the second node).
Using the Sun Cluster command-line interface, configure the second instance of Calendar Server as described in the following steps:
Create a failover resource group.
In the following example, the resource group is CAL-CS2-RG, with Node2 as the primary node and Node1 as the failover node.
./scrgadm -a -g CAL-CS2-RG -h Node2,Node1
Create a logical hostname resource.
The calendar client listens on this logical hostname. The following example uses LOG-HOST-CS2-RS; substitute your actual logical hostname.
./scrgadm -a -L -g CAL-CS2-RG -l LOG-HOST-CS2-RS
./scrgadm -c -j LOG-HOST-CS2-RS -y R_description="LogicalHostname resource for LOG-HOST-CS2-RS"
Bring the resource group online.
scswitch -Z -g CAL-CS2-RG
Create an HAStoragePlus resource and add it to the failover resource group.
In this example, the resource is called CAL-HASP-CS2-RS; substitute your own resource name.
./scrgadm -a -j CAL-HASP-CS2-RS -g CAL-CS2-RG -t SUNW.HAStoragePlus:4 -x FilesystemMountPoints=/install-rootCS2,/share-disk-dirCS2,/var-cal-dirCS2
./scrgadm -c -j CAL-HASP-CS2-RS -y R_description="Failover data service resource for SUNW.HAStoragePlus:4"
Enable the HAStoragePlus resource.
./scswitch -e -j CAL-HASP-CS2-RS
Run the configuration program again on the secondary node.
# cd /cal-svr-baseCS2/sbin/
# ./csconfigurator.sh
For further information about running the configuration script, see the Sun Java System Calendar Server 6.3 Administration Guide.
On the Runtime Configuration panel, deselect both of the Calendar Server startup options.
On the Directories to Store Configuration and Data Files panel, provide the proper directories as shown in the following list:
/share-disk-dirCS2/config
/share-disk-dirCS2/csdb
/share-disk-dirCS2/store
/share-disk-dirCS2/logs
/share-disk-dirCS2/tmp
When you have finished specifying the directories, choose Create Directory.
On the Archive and Hot Backup panel, provide the appropriate directory names as shown in the following list:
/share-disk-dirCS2/csdb/archive
/share-disk-dirCS2/csdb/hotbackup
After specifying these directories, choose Create Directory.
Verify that the configuration was successful.
The configuration program displays a series of messages. If all of the messages start with PASSED, the configuration was successful. For an example of the output you might see, see 6.11 Example Output from the Calendar Configuration Program (Condensed).
Using the Sun Cluster command-line interface, perform a failover to the first node.
For example:
# /usr/cluster/bin/scswitch -z -g CAL-CS2-RG -h Node1
Edit the configuration file, ics.conf, by adding the parameters shown in the example that follows.
The values shown are examples only. You must substitute your own information for the values in the example.
Back up the ics.conf file before starting this step.
! The following changes were made to configure Calendar Server
! Highly Available
!
local.server.ha.enabled="yes"
local.server.ha.agent="SUNWscics"
service.http.listenaddr="IPAddressCS2"
local.hostname="LOG-HOST-CS2-RS"
local.servername="LOG-HOST-CS2-RS"
service.ens.host="LOG-HOST-CS2-RS"
service.http.calendarhostname="LOG-HOST-CS2-RS-Domain.com"
local.autorestart="yes"
service.listenaddr="IPAddressCS2"
The value for service.http.calendarhostname must be a fully qualified hostname.
Using the Sun Cluster command-line interface, create a Calendar Server resource.
Create the Calendar Server resource and enable it.
For example:
./scrgadm -a -j CAL-SVR-CS2-RS -g CAL-CS2-RG -t SUNW.scics -x ICS_serverroot=/cal-svr-baseCS2 -y Resource_dependencies=CAL-HASP-CS2-RS,LOG-HOST-CS2-RS
./scswitch -e -j CAL-SVR-CS2-RS
To verify that the calendar resource was created successfully, use the Sun Cluster command-line interface to perform a failover to the second node, which is the primary node for this Calendar Server instance.
For example:
./scswitch -z -g CAL-CS2-RG -h Node2
You have now finished installing and configuring a symmetric HA Calendar Server.
Use the following commands to start, fail over, disable, remove, and restart the Calendar Server HA service:
Start (enable) the Calendar Server HA service:
# scswitch -e -j CAL-SVR-RS
Fail over the Calendar Server resource group to another node (Node2 in this example):
# scswitch -z -g CAL-RG -h Node2
Disable the Calendar Server HA service:
# scswitch -n -j CAL-SVR-RS
Remove the Calendar Server resource:
# scrgadm -r -j CAL-SVR-RS
Restart the Calendar Server resource:
# scrgadm -R -j CAL-SVR-RS
This section describes how to undo the HA configuration for Sun Cluster. This section assumes the simple asymmetric example configuration described in this chapter. You must adapt this scenario to fit your own installation.
Become a superuser.
All of the following Sun Cluster commands require that you be running as a superuser.
Bring the resource group offline.
Use the following command to shut down all of the resources in the resource group (for example, the Calendar Server and the HA logical hostname):
# scswitch -F -g CAL-RG
Disable the individual resources.
Disable the resources one by one using the following commands:
# scswitch -n -j CAL-SVR-RS
# scswitch -n -j CAL-HASP-RS
# scswitch -n -j LOG-HOST-RS
Remove the resource group itself using the command:
# scrgadm -r -g CAL-RG
Remove the resource types (optional). If you want to remove the resource types from the cluster, use the following commands:
# scrgadm -r -t SUNW.scics
# scrgadm -r -t SUNW.HAStorage
The Calendar Server Sun Cluster agents use two different APIs to log messages:
scds_syslog_debug() — Used by Calendar Server agents. Messages are logged at daemon.debug level.
scds_syslog() — Used by Calendar Server agents and Sun Cluster data services. Messages are logged at the daemon.notice, daemon.info, and daemon.error levels.
The following task must be performed on each HA node because the /var/adm directory cannot be shared; it resides on the root partition of each individual node.
Create a logging directory for Calendar Server agents.
mkdir -p /var/cluster/rgm/rt/SUNW.scics
Set the debug level to 9.
echo 9 >/var/cluster/rgm/rt/SUNW.scics/loglevel
The following example shows log messages you might see in the directory. Note that, in the last line, ICS_serverroot gives the cal-svr-base, or installation directory.
Dec 11 18:24:46 mars SC[SUNW.scics,CAL-RG,cal-rs,ics_svc_start]: [ID 831728 daemon.debug] Groupname icsgroup exists.
Dec 11 18:24:46 mars SC[SUNW.scics,CAL-RG,LOG-HOST-RS,ics_svc_start]: [ID 383726 daemon.debug] Username icsuser icsgroup
Dec 11 18:24:46 mars SC[SUNW.scics,CAL-RG,LOG-HOST-RS,ics_svc_start]: [ID 244341 daemon.debug] ICS_serverroot = /cal-svr-base
Enable Sun Cluster Data Services Logging.
Edit the syslog.conf file by adding the following line (use a tab between the selector and the file name):
daemon.debug	/var/adm/clusterlog
This causes all daemon.debug messages to be logged to the /var/adm/clusterlog file.
Restart the syslogd daemon.
pkill -HUP syslogd
All syslog debug messages are prefixed with the following:
SC[resourceTypeName, resourceGroupName, resourceName, methodName]
The following example messages have been split and carried over to multiple lines for display purposes.
Dec 11 15:55:52 Node1 SC[SUNW.scics,CAL-RG,CalendarResource,ics_svc_validate]: [ID 855581 daemon.error] Failed to get the configuration info
Dec 11 18:24:46 Node1 SC[SUNW.scics,CAL-RG,LOG-HOST-RS,ics_svc_start]: [ID 833212 daemon.info] Attempting to start the data service under process monitor facility.
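When scanning a busy cluster log, the bracketed prefix fields can be pulled out mechanically. The following is a small sketch using one of the sample messages above; it is an illustration, not part of Sun Cluster itself.

```shell
# One of the sample Sun Cluster log messages shown above.
line='Dec 11 18:24:46 Node1 SC[SUNW.scics,CAL-RG,LOG-HOST-RS,ics_svc_start]: [ID 833212 daemon.info] Attempting to start the data service under process monitor facility.'

# Extract resourceTypeName,resourceGroupName,resourceName,methodName
# from the SC[...] prefix.
prefix=$(echo "$line" | sed -n 's/.*SC\[\([^]]*\)\].*/\1/p')
echo "$prefix"

# Print the four fields one per line.
echo "$prefix" | tr ',' '\n'
```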
This section contains a partial listing of the output from the configuration program. Your output will be much longer. At the end, it should say: “All Tasks Passed.” Inspect the log files. The location of the files is given at the end of the printout.
# ./csconfigurator.sh -nodisplay -noconsole -novalidate
/usr/jdk/entsys-j2se/bin/java -cp /opt/Node2/SUNWics5/cal/share/lib:/opt/Node2/SUNWics5/cal/share -Djava.library.path=/opt/Node2/SUNWics5/cal/lib configure -nodisplay -noconsole -novalidate
Java Accessibility Bridge for GNOME loaded.
Loading Default Properties...
Checking disk space...
Starting Task Sequence
===== Mon Dec 18 15:33:29 PST 2006 =====
Running /bin/rm -f /opt/Node2/SUNWics5/cal/config /opt/Node2/SUNWics5/cal/data
===== Mon Dec 18 15:33:29 PST 2006 =====
Running /usr/sbin/groupadd icsgroup
===== Mon Dec 18 15:33:29 PST 2006 =====
Running /usr/sbin/useradd -g icsgroup -d / icsuser
===== Mon Dec 18 15:33:30 PST 2006 =====
Running /usr/sbin/usermod -G icsgroup icsuser
===== Mon Dec 18 15:33:30 PST 2006 =====
Running /bin/sh -c /usr/bin/crle
===== Mon Dec 18 15:33:32 PST 2006 =====
Running /bin/chown icsuser:icsgroup /etc/opt/Node2/SUNWics5/config/watcher.cnf
...
Sequence Completed
PASSED: /bin/rm -f /opt/Node2/SUNWics5/cal/config /opt/Node2/SUNWics5/cal/data : status = 0
PASSED: /usr/sbin/groupadd icsgroup : status = 9
PASSED: /usr/sbin/useradd -g icsgroup -d / icsuser : status = 9
...
All Tasks Passed. Please check install log /var/sadm/install/logs/Sun_Java_System_Calendar_Server_install.B12181533 for further details.
For more information about Sun Cluster, see the many documents available at docs.sun.com.
The following is a partial list of documentation titles:
Sun Cluster Concepts Guide for Solaris OS provides a general background about Sun Cluster software, data services, and terminology, including resource types, resources, and resource groups.
Sun Cluster Data Services Planning and Administration Guide for Solaris OS provides general information on planning and administration of data services.
Sun Cluster System Administration Guide for Solaris OS provides the software procedures for administering a Sun Cluster configuration.
Sun Cluster Reference Manual for Solaris OS describes the commands and utilities available with the Sun Cluster software, including commands found only in the SUNWscman and SUNWccon packages.