Sun Java System Calendar Server 6.3 Administration Guide

Chapter 6 Configuring Calendar Server 6.3 Software for High Availability (Failover Service)

This chapter explains how to install and configure high availability for Calendar Server 6.3 software using Sun Cluster 3.0 or 3.1.

Configuring Calendar Server for high availability (HA) provides monitoring of, and recovery from, software and hardware failures. The Calendar Server HA feature is implemented as a failover service. This chapter describes two Calendar Server HA configurations using Sun Cluster software, one asymmetric and one symmetric.

This chapter includes the following topics to describe how to install and configure HA for Calendar Server:

You can find a set of worksheets to help you plan a Calendar Server HA configuration in Appendix C, Calendar Server Configuration Worksheet.

6.1 Overview of High Availability Choices for Calendar Server Version 6.3

High availability can be configured in many ways. This section contains an overview of three high availability choices, along with information to help you choose the one that is right for your needs.

This section covers the following topics:

6.1.1 Understanding Asymmetric High Availability for Calendar Server Version 6.3

This figure shows a simple asymmetric HA Calendar Server installation.

A simple asymmetric high availability system has two physical nodes. The primary node is usually active, with the other node acting as a backup, ready to take over if the primary node fails. To accomplish a failover, the shared disk array is switched so that it is mastered by the backup node. The Calendar Server processes are stopped on the failing primary node and started on the backup node.
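For example, with the resource group and node names used later in this chapter, a manual switchover of the calendar resource group to the backup node looks like the following (see 6.6 Installing and Configuring Calendar Server 6.3 Software in an Asymmetric High Availability Environment for the full setup):

# scswitch -z -g CAL-RG -h Node2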

There are several advantages of this type of high availability system. One advantage is that the backup node is dedicated and completely reserved for the primary node. This means there is no resource contention on the backup node when a failover occurs. Another advantage is the ability to perform a rolling upgrade; that is, you can upgrade one node while continuing to run Calendar Server software on the other node. Changes you make to the ics.conf file while upgrading the first node will not interfere with the other instance of Calendar Server software running on the secondary node because the configuration file is read only once, at startup. You must stop and restart the calendar processes before the new configuration takes effect. When you want to upgrade the other node, you perform a failover to the upgraded primary node and proceed with the upgrade on the secondary node.
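For example, a minimal sketch of the restart that makes ics.conf changes take effect, assuming the default cal-svr-base of /opt/SUNWics5/cal:

# /opt/SUNWics5/cal/sbin/stop-cal
# /opt/SUNWics5/cal/sbin/start-cal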


Note –

You can, of course, choose to upgrade the secondary node first, and then the primary node.


The asymmetric high availability model also has some disadvantages. One disadvantage is that the backup node stays idle most of the time, leaving that resource underutilized. Another possible disadvantage is the single storage array. In the event of a disk array failure with a simple asymmetric high availability system, no backup is available.

6.1.2 Understanding Symmetric High Availability for Calendar Server Version 6.3

This figure shows a simple symmetric HA system for Calendar Server. Both nodes contain active instances of Calendar Server.

A simple symmetric high availability system has two active physical nodes, each with its own disk array containing two storage volumes: one volume for the local calendar store, and the other a mirror image of the other node's calendar store. Each node acts as the backup node for the other. When one node fails over to its backup, two instances of Calendar Server run concurrently on the backup node, each running from its own installation directory and accessing its own calendar store. The only thing shared is the computing power of the backup node.
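In Sun Cluster terms, this arrangement maps to one failover resource group per Calendar Server instance, with the node lists reversed so that each node is the other's backup, as in the symmetric examples later in this chapter:

./scrgadm -a -g CAL-CS1-RG -h Node1,Node2
./scrgadm -a -g CAL-CS2-RG -h Node2,Node1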

The advantage of this type of high availability system is that both nodes are active simultaneously, thus fully utilizing machine resources. However, during a failure, the backup node will have more resource contention as it runs services for Calendar Server from both nodes.

Symmetric high availability also provides a backup storage array. In the event of a disk array failure, its redundant image can be picked up by the service on its backup node.


Note –

To configure a symmetric high availability system, you install the Calendar Server binaries on your shared disk. Doing so might prevent you from performing rolling upgrades, a feature planned for future releases of Calendar Server that enables you to update your system with a Calendar Server patch release with minimal or no down time.


6.1.3 Understanding N+1 (N Over 1): Multiple Asymmetric High Availability for Calendar Server Version 6.3

This configuration is a series of asymmetric HA Calendar Servers, each failing over to the same standby node.

In addition to the two types of highly available systems described in this chapter, a third type, a hybrid of the two, is also possible: a multi-node asymmetric high availability system. In this type, “N” nodes, each with its own disk array, all use the same backup node, which is held in reserve and is not normally active. This backup node is capable of running Calendar Server for any of the “N” nodes and has access to each node's disk array, as shown in the preceding graphic. If multiple nodes fail at the same time, the backup node must be capable of running up to “N” instances of Calendar Server concurrently.

The advantages of the N+1 model are that Calendar Server load can be distributed to multiple nodes, and that only one backup node is necessary to sustain all the possible node failures.

The disadvantage of this type of high availability system is the same as that of any asymmetric system: the backup node is idle most of the time. In addition, the N+1 high availability system backup node must have excess capacity in the event it must host multiple instances of Calendar Server. This means a higher cost machine is sitting idle. However, the machine idle ratio is 1:N, as opposed to 1:1 in a single asymmetric system.

To configure this type of system, use the instructions for the asymmetric high availability system for each of the “N” nodes and the backup. Use the same backup node each time, but with a different primary node.
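For example, each resource group names a different primary node but the same standby. A sketch with hypothetical resource group names CAL-RG1 through CAL-RG3, node names Node1 through Node3, and a standby node named Backup:

./scrgadm -a -g CAL-RG1 -h Node1,Backup
./scrgadm -a -g CAL-RG2 -h Node2,Backup
./scrgadm -a -g CAL-RG3 -h Node3,Backup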

6.1.4 Choosing a High Availability Model for Your Calendar Server Version 6.3 Deployment

The following table summarizes the advantages and disadvantages of each high availability model. Use this information to help you determine which model is right for your deployment.

Table 6–1 Advantages and Disadvantages of Each High Availability Model

Model: Asymmetric

  Advantages:

  • Simple configuration

  • Backup node is 100 percent reserved

  • Rolling upgrade, with zero downtime

  Disadvantages: Machine resources are not fully utilized.

  Recommended Users: A small service provider with plans to expand in the future

Model: Symmetric

  Advantages:

  • Better use of system resources

  • Higher availability

  Disadvantages: Resource contention on the backup node. HA requires fully redundant disks.

  Recommended Users: A small corporate deployment that can accept performance penalties in the event of a single server failure

Model: N+1

  Advantages:

  • Load distribution

  • Easy expansion

  Disadvantages: Management and configuration complexity.

  Recommended Users: A large service provider who requires distribution with no resource constraints

6.1.5 System Down Time Calculations for High Availability in Your Calendar Server 6.3 Deployment

The following table illustrates the probability that on any given day the calendar service will be unavailable due to system failure. These calculations assume that, on average, each server goes down for one day every three months due to either a system crash or server hang, and that each storage device goes down for one day every 12 months. The calculations also ignore the small probability of both nodes being down simultaneously.

Table 6–2 System Down Time Calculations

Single server (no high availability): Pr(down) = (4 days of system down + 1 day of storage down)/365 = 1.37%

Asymmetric: Pr(down) = (0 days of system down + 1 day of storage down)/365 = 0.27%

Symmetric: Pr(down) = (0 days of system down + 0 days of storage down)/365 = (near 0)

N+1 Asymmetric: Pr(down) = (5 hours of system down + 1 day of storage down)/(365xN) = 0.27%/N
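For example, the single-server figure can be reproduced with a quick bc calculation (4 days of server down time plus 1 day of storage down time, as a percentage of 365 days):

$ echo "scale=6; (4 + 1) / 365 * 100" | bc
1.369800

This rounds to the 1.37% shown in the table.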

6.2 Prerequisites for an HA Environment for Your Calendar Server Version 6.3 Deployment

This section lists the prerequisites for installing Calendar Server in an HA environment.

The following prerequisites apply:

6.2.1 About HAStoragePlus for a Calendar Server 6.3 HA Deployment

Use the HAStoragePlus resource type to make locally mounted file systems highly available within a Sun Cluster environment. Any file system resident on a Sun Cluster global device group can be used with HAStoragePlus. An HAStoragePlus file system is available on only one cluster node at any given time. These locally mounted file systems can be used only in failover mode and in failover resource groups. HAStoragePlus offers the Failover File System (FFS), in addition to supporting the older Global File System (GFS), also known as the Cluster File System (CFS).

HAStoragePlus has a number of benefits over its predecessor, HAStorage:


Note –

Use HAStoragePlus resources in a data service resource group with Sun Cluster 3.0 Release May 2002 and later.

For more information on HAStoragePlus, see Sun Cluster Data Services Planning and Administration Guide for Solaris OS.


6.3 High-Level Task List for an Asymmetric High Availability Deployment with Calendar Server 6.3 Software

The following is a list of the tasks necessary to install and configure Calendar Server for Asymmetric High Availability:

  1. Prepare the nodes.

    1. Install the Solaris Operating System software on all nodes of the cluster.

    2. Install Sun Cluster software on all nodes of the cluster.

    3. Install the Calendar Server HA Agents package, SUNWscics, on all nodes of the cluster using the Java Enterprise System installer.

    4. Create a file system on the shared disk.

    5. Install Calendar Server on the Primary and Secondary nodes of the cluster, using the Communications Suite 5 installer.

  2. Run the Directory Preparation Script, comm_dssetup.pl, on the machine where the Directory Server LDAP directory resides.

  3. Install and configure the first (primary) node.

    1. Using the Sun Cluster command-line interface, set up HA on the primary node.

    2. Run the Calendar Server configuration program, csconfigurator.sh, on the primary node.

    3. Using the Sun Cluster command-line interface, switch to the secondary node.

  4. Create a symbolic link from the Calendar Server config directory to the config directory on the shared disk.

  5. Install and configure the second (secondary) node.

    1. Run the Calendar Server configuration program on the secondary node by reusing the state file created when you configured the primary node.

    2. Edit the Configuration File, ics.conf.

    3. Using the Sun Cluster command-line interface, create and enable the Calendar Server resource.

    4. Using the Sun Cluster command-line interface, perform a failover to the primary node to test the successful creation of the resource group.

For step-by-step instructions, see 6.6 Installing and Configuring Calendar Server 6.3 Software in an Asymmetric High Availability Environment.

6.4 High-Level Task List for a Symmetric High Availability Deployment with Calendar Server 6.3 Software

The following is a list of the tasks necessary to install and configure Calendar Server for Symmetric High Availability:

  1. Prepare the nodes.

    1. Install the Solaris Operating System software on all nodes of the cluster.

    2. Install Sun Cluster software on all nodes of the cluster.

    3. Create six file systems, either Cluster File Systems (Global File Systems) or Failover File Systems (Local File Systems).

    4. Create the necessary directories.

    5. Install the Calendar Server HA Agents package, SUNWscics, on all nodes of the cluster using the Java Enterprise System installer.

  2. Install and configure the first node.

    1. Using the Communications Suite 5 installer, install Calendar Server on the first node of the cluster.

    2. Run the Directory Preparation Script, comm_dssetup.pl, on the machine where the Directory Server LDAP database resides.


      Note –

      If the instances of Calendar Server on the two nodes share the same LDAP server, it is not necessary to repeat this step after installing Calendar Server software on the second node.


    3. Using the Sun Cluster command-line interface, configure HA on the first node.

    4. Run the Calendar Server configuration program, csconfigurator.sh, on the first node.

    5. Using the Sun Cluster command-line interface, fail over to the second node.

    6. Edit the Configuration File, ics.conf, on the first node.

    7. Using the Sun Cluster command-line interface, create and enable the Calendar Server resource for the first node.

    8. Using the Sun Cluster command-line interface, perform a failover to the first node to test the successful creation of the resource group.

  3. Install and configure the second node.

    1. Using the Communications Suite 5 installer, install Calendar Server on the second node of the cluster.

    2. Using the Sun Cluster command-line interface, configure HA on the second node.

    3. Run the Calendar Server configuration program, csconfigurator.sh, on the second node by reusing the state file created when you configured the first node.

    4. Using the Sun Cluster command-line interface, fail over to the first node.

    5. Edit the Configuration File, ics.conf, on the second node.

    6. Using the Sun Cluster command-line interface, create and enable the Calendar Server resource on the second node.

    7. Using the Sun Cluster command-line interface, perform a failover to the second node to test the successful creation of the resource group.

For step-by-step instructions, see 6.7 Configuring a Symmetric High Availability Calendar Server System.

6.5 Naming Conventions for All Examples in this Deployment Example for Configuring High Availability in Calendar Server Version 6.3


Tip –

Print out this section and record the values you use as you go through the HA installation and configuration process.


This section contains six tables showing the variable names used in all examples:

Table 6–3 Directory Name Variables Used in Asymmetric Examples

Example Name 

Directory 

Description 

install-root

/opt

The directory in which Calendar Server is installed.  

cal-svr-base

/opt/SUNWics5/cal

The directory in which all Calendar Server files are located.  

var-cal-dir

/var/opt/SUNWics5

The Calendar Server /var directory.

share-disk-dir

/cal

A global directory; that is, a directory shared between nodes in an asymmetric high availability system. 

Table 6–4 Directory Name Variables Used in Symmetric Examples

Example Name 

Directory 

Description 

install-rootCS1

install-rootCS2

/opt/Node1

/opt/Node2

The directory in which an instance of Calendar Server is installed. 

cal-svr-baseCS1

cal-svr-baseCS2

/opt/Node1/SUNWics5/cal

/opt/Node2/SUNWics5/cal

The directory in which all Calendar Server files are located for the node.  

var-cal-dirCS1

var-cal-dirCS2

/var/opt/Node1/SUNWics5

/var/opt/Node2/SUNWics5

The /var directories for each node.

share-disk-dirCS1

share-disk-dirCS2

/cal/Node1

/cal/Node2

The global (shared) directories each instance of Calendar Server shares with its failover node. These are used in a symmetric high availability system. 

Table 6–5 Resource Name Variables for Asymmetric Examples

Variable Name 

Description 

CAL-RG

A calendar resource group. 

LOG-HOST-RS

A logical hostname resource. 

LOG-HOST-RS-Domain.com

The fully qualified logical hostname resource. 

CAL-HASP-RS

An HAStoragePlus resource. 

CAL-SVR-RS

A Calendar Server resource. 

Table 6–6 Resource Name Variables for Symmetric Examples

Variable Name 

Description 

CAL-CS1-RG

A calendar resource group for the first instance of Calendar Server. 

CAL-CS2-RG

A calendar resource group for the second instance of Calendar Server. 

LOG-HOST-CS1-RS

A logical hostname resource for the first instance of Calendar Server. 

LOG-HOST-CS1-RS-Domain.com

The fully qualified logical hostname resource for the first instance of Calendar Server. 

LOG-HOST-CS2-RS

A logical hostname resource for the second instance of Calendar Server. 

LOG-HOST-CS2-RS-Domain.com

The fully qualified logical hostname resource for the second instance of Calendar Server. 

CAL-HASP-CS1-RS

An HAStoragePlus resource for the first instance of Calendar Server. 

CAL-HASP-CS2-RS

An HAStoragePlus resource for the second instance of Calendar Server. 

CAL-SVR-CS1-RS

A Calendar Server resource for the first instance of Calendar Server. 

CAL-SVR-CS2-RS

A Calendar Server resource for the second instance of Calendar Server. 

Table 6–7 Variable Name for IP Address in Asymmetric Examples

Logical IP Address 

Description 

IPAddress

The IP address of the port on which the cshttpd daemon will listen. It should be in standard IP format, for example: "123.45.67.89"

Table 6–8 Variable Name for IP Address in Symmetric Examples

Logical IP Address 

Description 

IPAddressCS1

The IP address of the port on which the cshttpd daemon for the first instance of Calendar Server will listen. It should be in standard IP format, for example: "123.45.67.89"

IPAddressCS2

The IP address of the port on which the cshttpd daemon for the second instance of Calendar Server will listen. It should be in standard IP format, for example: "123.45.67.89"

6.6 Installing and Configuring Calendar Server 6.3 Software in an Asymmetric High Availability Environment

This section contains instructions for configuring an asymmetric high availability Calendar Server cluster.

This section contains the following topics:

6.6.1 Creating the File Systems for Your Calendar Server 6.3 HA Deployment

Create a file system on the shared disk. The /etc/vfstab file should be identical on all nodes of the cluster.

For CFS, it should look similar to the following example.

## Cluster File System/Global File System ##
/dev/md/penguin/dsk/d400 /dev/md/penguin/rdsk/d400 /cal ufs 2 yes global,logging

For example, for FFS:

## Fail Over File System/Local File System ##
/dev/md/penguin/dsk/d400 /dev/md/penguin/rdsk/d400 /cal ufs 2 no logging

Note –

The fields in these entries are separated by tabs, not just spaces.


6.6.2 Creating the Calendar Directory on All Shared Disks of the Cluster in Your Calendar Server 6.3 HA Deployment

For all nodes of the cluster, create a directory, /cal, on the shared disk where configuration and data are held. For example, run the following command for every shared disk:

mkdir -p /cal
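To confirm that the directory is in place on the node that currently masters the shared disk, list it:

ls -ld /cal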

6.6.3 Installing and Configuring High Availability for Calendar Server 6.3 Software

This section contains instructions for the tasks involved in installing and configuring high availability for Calendar Server.

Perform each of the following tasks in turn to complete the configuration:

Procedure: To Prepare Each Node of the Cluster

  1. Install Calendar Server on the Primary and Secondary nodes of the cluster, using the Communications Suite 5 installer.


    Note –

    Be sure to specify the same installation root on all nodes.


    1. At the Specify Installation Directories panel, enter the installation root; it must be the same for both nodes.

      This installs the Calendar Server binaries in the following directory: /install-root/SUNWics5/cal. This directory is called the Calendar Server base (cal-svr-base).

    2. Choose the Configure Later option.

    3. After the installation is complete, verify that the files are installed.

      # pwd
      /cal-svr-base
      
      # ls -rlt
      
      total 16
      drwxr-xr-x   4 root     bin          512 Dec 14 12:52 share
      drwxr-xr-x   3 root     bin          512 Dec 14 12:52 tools
      drwxr-xr-x   4 root     bin         2048 Dec 14 12:52 lib
      drwxr-xr-x   2 root     bin         1024 Dec 14 12:52 sbin
      drwxr-xr-x   8 root     bin          512 Dec 14 12:52 csapi
      drwxr-xr-x  11 root     bin         2048 Dec 14 12:52 html
  2. Run the Directory Preparation Script (comm_dssetup.pl) against your existing Directory Server LDAP.

    This prepares your Directory Server by setting up new LDAP schema, index, and configuration data.

    For instructions and further information about running comm_dssetup.pl, see Chapter 8, Directory Preparation Tool (comm_dssetup.pl), in Sun Java Communications Suite 5 Installation Guide.
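    A sketch of a typical interactive invocation, assuming the default Communications Suite 5 location of the script on the Directory Server host:

    # cd /opt/SUNWcomds/sbin
    # perl comm_dssetup.pl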

Procedure: To Set Up the Primary Node

Use the Sun Cluster command line interface as indicated to set up HA on the first node.


Note –

Refer to 6.5 Naming Conventions for All Examples in this Deployment Example for Configuring High Availability in Calendar Server Version 6.3 as a key for directory names and Sun Cluster resource names in the examples.


  1. Register the Calendar Server and HAStoragePlus resource types.

    ./scrgadm -a -t SUNW.HAStoragePlus
    ./scrgadm -a -t SUNW.scics
  2. Create a failover Calendar Server resource group.

    For example, the following command creates the calendar resource group CAL-RG, with Node1 as the primary node and Node2 as the secondary, or failover, node.

    ./scrgadm -a -g CAL-RG -h Node1,Node2
  3. Create a logical hostname resource in the Calendar Server resource group and bring the resource group online.

    For example, the following instructions create the logical hostname resource LOG-HOST-RS, and then bring the resource group CAL-RG online.

    ./scrgadm -a -L -g CAL-RG -l LOG-HOST-RS
    ./scrgadm -c -j LOG-HOST-RS -y    \
          R_description="LogicalHostname resource for LOG-HOST-RS"
    ./scswitch -Z -g CAL-RG
  4. Create and enable the HAStoragePlus resource.

    For example, the following instructions create and enable the HAStoragePlus resource CAL-HASP-RS.

    scrgadm -a -j CAL-HASP-RS -g CAL-RG -t 
         SUNW.HAStoragePlus:4 -x FilesystemMountPoints=/cal
    scrgadm -c -j CAL-HASP-RS -y 
         R_description="Failover data service resource for SUNW.HAStoragePlus:4"
    scswitch -e -j CAL-HASP-RS

Procedure: To Run the Configuration Utility (csconfigurator.sh) on the Primary Node

  1. Run the configuration program.

    For example, from the /cal-svr-base/sbin directory:

    # pwd
         /cal-svr-base/sbin
    
    # ./csconfigurator.sh

    For further information about running the configuration script, see Chapter 2, Initial Runtime Configuration Program for Calendar Server 6.3 software (csconfigurator.sh), also in this guide.

  2. At the Run Time Configuration panel, deselect both Calendar Server startup options.

  3. At the Directories panel, configure all directories on a shared disk. Use the following locations:

    Config Directory

    /share-disk-dir/config

    Database Directory

    /share-disk-dir/csdb

    Attachment Store Directory

    /share-disk-dir/store

    Logs Directory

    /share-disk-dir/logs

    Temporary Files Directory

    /share-disk-dir/tmp

    Once you have finished specifying the directories, choose Create Directory.

  4. At the Archive and Hot Backup panel, specify the following choices:

    Archive Directory

    /share-disk-dir/csdb/archive

    Hot Backup Directory

    /share-disk-dir/csdb/hotbackup

    When you have finished specifying the directories, choose the Create Directory option.

  5. Verify that the configuration is successful.

    Look at the end of the configuration output to make sure it says: “All Tasks Passed.” The following example shows the last part of the configuration output.

    ...
    All Tasks Passed. Please check install log 
    /var/sadm/install/logs/Sun_Java_System_Calendar_Server_install.B12141351
     for further details.

    For a larger sample of the output, see 6.11 Example Output from the Calendar Configuration Program (Condensed).

  6. Click Next to finish configuration.

Procedure: To Configure the Secondary Node

  1. Switch to the secondary node.

    Using the Sun Cluster command line interface, switch to the secondary node. For example, the following command switches the resource group to the secondary (failover) node, Node2:

    scswitch -z -g CAL-RG -h Node2
  2. Create a symbolic link from the Calendar Server config directory to the config directory of the Shared File System.

    For example, perform the following commands:

    # pwd
    /cal-svr-base
    
    # ln -s /share-disk-dir/config .  

    Note –

    Do not forget the dot (.) at the end of the ln command.


  3. Configure Calendar Server on the secondary node using the state file from the primary node configuration.

    Share the configuration of the primary node by running the state file created when you ran the configuration program.

    For example, run the following command:

    # /cal-svr-base/sbin/csconfigurator.sh -nodisplay -noconsole -novalidate

    Check that all the tasks passed, as with the first run of the configuration program.

  4. Edit the Configuration File (ics.conf)

    Edit the ics.conf file by adding the following parameters to the end of the file. The logical hostname of the calendar resource is LOG-HOST-RS.


    Note –

    Back up your ics.conf file before performing this step.


    ! The following are the changes for making Calendar Server
    ! Highly Available
    !
    local.server.ha.enabled="yes"
    local.server.ha.agent="SUNWscics"
    service.http.listenaddr="IPAddress"
    local.hostname="LOG-HOST-RS"
    local.servername="LOG-HOST-RS"
    service.ens.host="LOG-HOST-RS"
    service.http.calendarhostname="LOG-HOST-RS-Domain.com"
    local.autorestart="yes"
    service.listenaddr="IPAddress"
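    To spot-check the edits, you can grep the copy of ics.conf on the shared disk (a sketch; the path follows the shared-disk layout chosen during configuration):

    # grep "^local.server.ha" /share-disk-dir/config/ics.conf
    local.server.ha.enabled="yes"
    local.server.ha.agent="SUNWscics"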
  5. Create the Calendar Server resource and enable it.

    For this example, the resource name is CAL-SVR-RS. You must also supply the logical hostname resource name and the HAStoragePlus resource name.

    ./scrgadm -a -j CAL-SVR-RS -g CAL-RG 
         -t SUNW.scics -x ICS_serverroot=/cal-svr-base 
         -y Resource_dependencies=CAL-HASP-RS,LOG-HOST-RS
    
    ./scrgadm -e -j CAL-SVR-RS
  6. Test the successful creation of the calendar resource group by performing a fail over.

    ./scswitch -z -g CAL-RG -h Node1

    When you have finished this step, you have completed the creation and configuration of the asymmetric high availability system for Calendar Server. For information about setting up logging on Sun Cluster for debugging purposes, see 6.10 Debugging on Sun Cluster.

6.7 Configuring a Symmetric High Availability Calendar Server System

This section contains instructions for configuring a symmetric high availability Calendar Server system.

To configure a symmetric high availability Calendar Server system, follow the instructions in the following sections:

6.7.1 Initial Tasks

There are two preparatory tasks that must be completed before installing Calendar Server on the nodes.

The preparatory tasks are as follows:


Note –

In various places in the examples, you need to provide the installation directory (cal-svr-base) for each node. For a symmetric HA system, cal-svr-base differs from that of an asymmetric HA system. For symmetric HA systems, cal-svr-base has the following format: /opt/node/SUNWics5/cal, where /opt/node is the root directory in which Calendar Server is installed (install-root).

For the purposes of the examples, and to differentiate the installation directories of the two Calendar Server instances, they are designated as cal-svr-baseCS1 and cal-svr-baseCS2.

To differentiate the installation roots for the two Calendar Server instances in this example, they are designated as install-rootCS1 and install-rootCS2:


Procedure: Creating the File Systems

  1. Create six file systems, using either Cluster File Systems (Global File Systems) or Failover File Systems (Local File Systems).

    This example is for Global File Systems. The contents of the /etc/vfstab file should look like the following: (Note that the fields are all tab separated.)

    ## Cluster File System/Global File System ##
    /dev/md/penguin/dsk/d500  /dev/md/penguin/rdsk/d500  
        /cal-svr-baseCS1  ufs  2  yes  logging,global
    /dev/md/penguin/dsk/d400  /dev/md/penguin/rdsk/d400  
        /share-disk-dirCS1  ufs  2  yes  logging,global
    /dev/md/polarbear/dsk/d200  /dev/md/polarbear/rdsk/d200  
        /cal-svr-baseCS2  ufs  2  yes  logging,global
    /dev/md/polarbear/dsk/d300  /dev/md/polarbear/rdsk/d300
        /share-disk-dirCS2  ufs  2  yes logging,global
    /dev/md/polarbear/dsk/d600  /dev/md/polarbear/rdsk/d600 
        /var-cal-dirCS1  ufs  2  yes  logging,global
    /dev/md/polarbear/dsk/d700  /dev/md/polarbear/rdsk/d700  
        /var-cal-dirCS2  ufs  2  yes  logging,global

    This example is for the Failover File Systems. The contents of the /etc/vfstab file should look like the following: (Note that the fields are all tab separated.)

    ## Failover File System/Local File System ##
    /dev/md/penguin/dsk/d500  /dev/md/penguin/rdsk/d500  
        /cal-svr-baseCS1  ufs  2  no  logging
    /dev/md/penguin/dsk/d400  /dev/md/penguin/rdsk/d400  
        /share-disk-dirCS1  ufs  2  no  logging
    /dev/md/polarbear/dsk/d200  /dev/md/polarbear/rdsk/d200 
        /cal-svr-baseCS2  ufs  2  no  logging
    /dev/md/polarbear/dsk/d300  /dev/md/polarbear/rdsk/d300 
        /share-disk-dirCS2  ufs  2  no  logging
    /dev/md/polarbear/dsk/d600  /dev/md/polarbear/rdsk/d600 
        /var-cal-dirCS1  ufs  2  no  logging
    /dev/md/polarbear/dsk/d700  /dev/md/polarbear/rdsk/d700 
        /var-cal-dirCS2  ufs  2  no  logging
  2. Create the following required directories on all nodes of the cluster.

    # mkdir -p /install-rootCS1 /share-disk-dirCS1 
         /install-rootCS2 /share-disk-dirCS2 /var-cal-dirCS1 
         /var-cal-dirCS2

6.7.1.1 Installing the Calendar Server HA Package

Install the Calendar Server HA package, SUNWscics, on all nodes of the cluster.

This must be done from the Java Enterprise System installer.

For more information about the Java Enterprise System installer, refer to the Sun Java Enterprise System 5 Installation and Configuration Guide.

6.7.2 Installing and Configuring the First Instance of Calendar Server

Follow the instructions in this section to install and configure the first instance of Calendar Server. This section covers the following topics:

Procedure: To Install Calendar Server

  1. Verify the file systems are mounted.

    On the primary node (Node1), enter the following command:

    df -k

    The following is an example of the output you should see:

    /dev/md/penguin/dsk/d500     35020572   
         34738 34635629   1%   /install-rootCS1
    /dev/md/penguin/dsk/d400     35020572   
         34738 34635629   1%   /share-disk-dirCS1
    /dev/md/polarbear/dsk/d300   35020572   
         34738 34635629   1%   /share-disk-dirCS2
    /dev/md/polarbear/dsk/d200   35020572   
         34738 34635629   1%   /install-rootCS2
    /dev/md/polarbear/dsk/d600   35020572   
         34738 34635629   1%   /var-cal-dirCS1
    /dev/md/polarbear/dsk/d700   35020572   
         34738 34635629   1%   /var-cal-dirCS2
  2. Using the Sun Java System Communications Suite installer, install Calendar Server on the primary node.

    1. At the Specify Installation Directories panel, specify the installation root (install-rootCS1):

      For example, if your Primary node is named red and the root directory is dawn, the installation root would be /dawn/red. This is the directory where you are installing Calendar Server on the first node.

    2. Choose Configure Later.

  3. Run the Directory Preparation Tool script on the machine with the Directory Server.

Procedure: To Configure Sun Cluster on the First Node

Using the Sun Cluster command-line interface, configure Sun Cluster on the first node by performing the following steps:

  1. Register the following resource types:

    ./scrgadm -a -t SUNW.HAStoragePlus
    ./scrgadm -a -t SUNW.scics
  2. Create a failover resource group.

    In the following example, the resource group is CAL-CS1-RG, with Node1 as the primary node and Node2 as the failover node.

    ./scrgadm -a -g CAL-CS1-RG -h Node1,Node2
  3. Create a logical hostname resource for this node.

    The calendar client listens on this logical hostname. The example that follows uses LOG-HOST-CS1-RS; substitute your actual logical hostname.

    ./scrgadm -a -L -g CAL-CS1-RG -l LOG-HOST-CS1-RS
    ./scrgadm -c -j LOG-HOST-CS1-RS -y R_description=
         "LogicalHostname resource for LOG-HOST-CS1-RS"
  4. Bring the resource group online.

    scswitch -Z -g CAL-CS1-RG
  5. Create an HAStoragePlus resource and add it to the failover resource group.

    In this example, the resource is called CAL-HASP-CS1-RS; substitute your own resource name. Note that the commands are wrapped onto multiple lines for display purposes in this document.

    ./scrgadm -a -j CAL-HASP-CS1-RS -g CAL-CS1-RG -t 
         SUNW.HAStoragePlus:4 -x FilesystemMountPoints=/install-rootCS1,
          /share-disk-dirCS1,/var-cal-dirCS1
    ./scrgadm -c -j CAL-HASP-CS1-RS -y R_description="Failover data 
         service resource for SUNW.HAStoragePlus:4"
  6. Enable the HAStoragePlus resource.

    ./scswitch -e -j CAL-HASP-CS1-RS

Procedure: To Configure the First Instance of Calendar Server

  1. Run the configuration program on the primary node.

    # cd /cal-svr-baseCS1/sbin/
    
    # ./csconfigurator.sh

    For further information about running the configuration script, see the Sun Java System Calendar Server 6.3 Administration Guide.

  2. On the Runtime Configuration panel, deselect both of the Calendar Server startup options.

  3. On the Directories to Store Configuration and Data Files panel, provide the shared disk directories as shown in the following list:

    Config Directory

    /share-disk-dirCS1/config

    Database Directory

    /share-disk-dirCS1/csdb

    Attachment Store Directory

    /share-disk-dirCS1/store

    Logs Directory

    /share-disk-dirCS1/logs

    Temporary Files Directory

    /share-disk-dirCS1/tmp

    When you have finished specifying the directories, choose Create Directory.

  4. On the Archive and Hot Backup panel, provide the shared disk directory names as shown in the following list:

    Archive Directory

    /share-disk-dirCS1/csdb/archive

    Hot Backup Directory

    /share-disk-dirCS1/csdb/hotbackup

    After specifying these directories, choose Create Directory.

  5. Verify that the configuration was successful.

    The configuration program will display a series of messages. If they all start with PASSED, the configuration was successful. For an example of the output you might see, check the example at: 6.11 Example Output from the Calendar Configuration Program (Condensed).

Procedure: To Perform the Final Configuration Steps for the First Instance

  1. Using the Sun Cluster command-line interface, perform a failover to the second node.

    For example:

    # /usr/cluster/bin/scswitch -z -g CAL-CS1-RG -h Node2
  2. Edit the configuration file, ics.conf, by adding the parameters shown in the example that follows.


    Note –

    Back up the ics.conf file before starting this step.


    ! The following changes were made to configure Calendar Server
    ! Highly Available
    !
    local.server.ha.enabled="yes"
    local.server.ha.agent="SUNWscics"
    service.http.listenaddr="IPAddressCS1"
    local.hostname="LOG-HOST-CS1-RS"
    local.servername="LOG-HOST-CS1-RS"
    service.ens.host="LOG-HOST-CS1-RS"
    service.http.calendarhostname="LOG-HOST-CS1-RS-Domain.com"
    local.autorestart="yes"
    service.listenaddr = "IPAddressCS1"

    Note –

    The expected value for service.http.calendarhostname is a fully qualified hostname.


  3. Using the Sun Cluster command-line interface, create the Calendar Server resource and enable it.

    For example:

    ./scrgadm -a -j CAL-SVR-CS1-RS -g CAL-CS1-RG
          -t SUNW.scics  -x ICS_serverroot=/cal-svr-baseCS1
          -y Resource_dependencies=CAL-HASP-CS1-RS,LOG-HOST-CS1-RS
    
    ./scrgadm -e -j CAL-SVR-CS1-RS
  4. Using the Sun Cluster command-line interface, perform a failover to the first node, which is the primary node, to test the successful creation of the Calendar Server resource group.

    For example:

    ./scswitch -z -g CAL-CS1-RG -h Node1

6.7.3 Installing and Configuring the Second Instance of Calendar Server

The primary node for the second Calendar Server instance is the second node (Node2).

Procedure: To Install Calendar Server on the Second Node

  1. Verify the file systems are mounted.

    On the primary node (Node2), enter the following command:

    df -k

    The following is an example of the output you should see:

    /dev/md/penguin/dsk/d500     35020572   
         34738 34635629   1%   /install-rootCS1
    /dev/md/penguin/dsk/d400     35020572   
         34738 34635629   1%   /share-disk-dirCS1
    /dev/md/polarbear/dsk/d300   35020572   
         34738 34635629   1%   /share-disk-dirCS2
    /dev/md/polarbear/dsk/d200   35020572   
         34738 34635629   1%   /install-rootCS2
    /dev/md/polarbear/dsk/d600   35020572   
         34738 34635629   1%   /var-cal-dirCS1
    /dev/md/polarbear/dsk/d700   35020572   
         34738 34635629   1%   /var-cal-dirCS2
  2. Using the Sun Java System Communications Suite installer, install Calendar Server on the new primary node (second node).

    1. At the Specify Installation Directories panel, specify the installation root for the second node (/install-rootCS2):

      For example, if your Node 2 machine is named blue and your root directory is ocean, your installation directory would be /ocean/blue.

    2. Select the Configure Later option.

Procedure: To Configure Sun Cluster for the Second Instance

Using the Sun Cluster command-line interface, configure the second instance of Calendar Server as described in the following steps:

  1. Create a failover resource group.

    In the following example, the resource group is CAL-CS2-RG, with Node2 as the primary node and Node1 as the failover node.

    ./scrgadm -a -g CAL-CS2-RG -h Node2,Node1
  2. Create a logical hostname resource.

    The calendar client listens on this logical hostname. The example that follows uses LOG-HOST-CS2-RS; substitute your actual logical hostname.

    ./scrgadm -a -L -g CAL-CS2-RG -l LOG-HOST-CS2-RS
    ./scrgadm -c -j LOG-HOST-CS2-RS -y R_description="LogicalHostname 
         resource for LOG-HOST-CS2-RS"
  3. Bring the resource group online.

    scswitch -Z -g CAL-CS2-RG
  4. Create an HAStoragePlus resource and add it to the failover resource group.

    In this example, the resource is called CAL-HASP-CS2-RS; substitute your own resource name.

    ./scrgadm -a -j CAL-HASP-CS2-RS -g CAL-CS2-RG -t 
         SUNW.HAStoragePlus:4 -x FilesystemMountPoints=/install-rootCS2,
         /share-disk-dirCS2,/var-cal-dirCS2
    ./scrgadm -c -j CAL-HASP-CS2-RS -y R_description="Failover data 
         service resource for SUNW.HAStoragePlus:4"
  5. Enable the HAStoragePlus resource.

    ./scswitch -e -j CAL-HASP-CS2-RS

Procedure: To Configure the Second Instance of Calendar Server

  1. Run the configuration program again, this time on the second node.

    # cd /cal-svr-baseCS2/sbin/
    
    # ./csconfigurator.sh

    For further information about running the configuration script, see the Sun Java System Calendar Server 6.3 Administration Guide.

  2. On the Runtime Configuration panel, deselect both of the Calendar Server startup options.

  3. On the Directories to Store Configuration and Data Files panel, provide the proper directories as shown in the following list:

    Config Directory

    /share-disk-dirCS2/config

    Database Directory

    /share-disk-dirCS2/csdb

    Attachment Store Directory

    /share-disk-dirCS2/store

    Logs Directory

    /share-disk-dirCS2/logs

    Temporary Files Directory

    /share-disk-dirCS2/tmp

    When you have finished specifying the directories, choose Create Directory.

  4. On the Archive and Hot Backup panel, provide the appropriate directory names as shown in the following list:

    Archive Directory

    /share-disk-dirCS2/csdb/archive

    Hot Backup Directory

    /share-disk-dirCS2/csdb/hotbackup

    After specifying these directories, choose Create Directory.

  5. Verify that the configuration was successful.

    The configuration program will display a series of messages. If they all start with PASSED, the configuration was successful. For an example of the output you might see, check the example at: 6.11 Example Output from the Calendar Configuration Program (Condensed).

Procedure: To Perform the Final Configuration Steps for the Second Instance

  1. Using the Sun Cluster command-line interface, perform a failover to the first node.

    For example:

    # /usr/cluster/bin/scswitch -z -g CAL-CS2-RG -h Node1
  2. Edit the configuration file, ics.conf, by adding the parameters shown in the example that follows.


    Note –

    The values shown are examples only. You must substitute your own information for the values in the example.

    Back up the ics.conf file before starting this step.


    ! The following changes were made to configure Calendar Server
    ! Highly Available
    !
    local.server.ha.enabled="yes"
    local.server.ha.agent="SUNWscics"
    service.http.listenaddr="IPAddressCS2"
    local.hostname="LOG-HOST-CS2-RS"
    local.servername="LOG-HOST-CS2-RS"
    service.ens.host="LOG-HOST-CS2-RS"
    service.http.calendarhostname="LOG-HOST-CS2-RS-Domain.com"
    local.autorestart="yes"
    service.listenaddr = "IPAddressCS2"

    Note –

    The value for service.http.calendarhostname must be a fully qualified hostname.


  3. Using the Sun Cluster command-line interface, create the Calendar Server resource and enable it.

    For example:

    ./scrgadm -a -j CAL-SVR-CS2-RS -g CAL-CS2-RG
          -t SUNW.scics -x ICS_serverroot=/cal-svr-baseCS2
          -y Resource_dependencies=CAL-HASP-CS2-RS,LOG-HOST-CS2-RS
    
    ./scrgadm -e -j CAL-SVR-CS2-RS
  4. Using the Sun Cluster command-line interface, perform a failover to the second node, which is the primary node for this Calendar Server instance, to test the successful creation of the calendar resource group.

    For example:

    ./scswitch -z -g CAL-CS2-RG -h Node2

    You have now finished installing and configuring a symmetric HA Calendar Server system.

6.8 Starting and Stopping Calendar Server HA Service

Use the following commands to start, fail over, disable, remove, and restart the Calendar Server HA service:

To enable and start the Calendar Server HA service:

# scswitch -e -j CAL-SVR-RS

To fail over the Calendar Server HA service:

# scswitch -z -g CAL-RG -h Node2

To disable the Calendar Server HA service:

# scswitch -n -j CAL-SVR-RS

To remove the Calendar Server resource:

# scrgadm -r -j CAL-SVR-RS

To restart the Calendar Server HA service, disable and then re-enable the resource:

# scswitch -n -j CAL-SVR-RS
# scswitch -e -j CAL-SVR-RS
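To check the current state of the resource group and its resources after any of these operations, use the Sun Cluster status command:

# scstat -g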

6.9 Removing HA from Your Calendar Server Configuration

This section describes how to undo the HA configuration for Sun Cluster. This section assumes the simple asymmetric example configuration described in this chapter. You must adapt this scenario to fit your own installation.

Procedure: To Remove HA Components

  1. Become a superuser.


    Note –

    All of the following Sun Cluster commands require that you be running as a superuser.


  2. Bring the resource group offline. Use the following command to shut down all of the resources in the resource group (for example, the Calendar Server resource and the HA logical hostname):


    # scswitch -F -g CAL-RG
  3. Disable the individual resources using the following commands:


    # scswitch -n -j CAL-SVR-RS
    # scswitch -n -j CAL-HASP-RS
    # scswitch -n -j LOG-HOST-RS

  4. Remove the resources one by one from the resource group using the following commands:


    # scrgadm -r -j CAL-SVR-RS
    # scrgadm -r -j CAL-HASP-RS
    # scrgadm -r -j LOG-HOST-RS
  5. Remove the resource group itself using the command:


    # scrgadm -r -g CAL-RG
  6. Remove the resource types (optional). If you want to remove the resource types from the cluster, use the following commands:


    # scrgadm -r -t SUNW.scics
    # scrgadm -r -t SUNW.HAStoragePlus

6.10 Debugging on Sun Cluster

The Calendar Server Sun Cluster agents use two different APIs to log messages:

Procedure: To Enable Logging

The following task must be done on each HA node, because the /var/adm log location cannot be shared; it resides on the root partition of each individual node.

  1. Create a logging directory for Calendar Server agents.

    mkdir -p /var/cluster/rgm/rt/SUNW.scics
  2. Set the debug level to 9.

    echo 9 >/var/cluster/rgm/rt/SUNW.scics/loglevel

    The following example shows log messages you might see in the log. Note that, in the last line, ICS_serverroot reports the cal-svr-base, or installation directory.

    Dec 11 18:24:46 mars SC[SUNW.scics,CAL-RG,cal-rs,ics_svc_start]: 
         [ID 831728 daemon.debug] Groupname icsgroup exists.
    Dec 11 18:24:46 mars SC[SUNW.scics,CAL-RG,LOG-HOST-RS,ics_svc_start]: 
         [ID 383726 daemon.debug] Username icsuser icsgroup
    Dec 11 18:24:46 mars SC[SUNW.scics,CAL-RG,LOG-HOST-RS,ics_svc_start]: 
         [ID 244341 daemon.debug] ICS_serverroot = /cal-svr-base
  3. Enable Sun Cluster Data Services Logging.

    Edit the syslog.conf file by adding the following line (the selector and action fields must be separated by a tab):

    daemon.debug /var/adm/clusterlog

    This causes all the debug messages to be logged to the /var/adm/clusterlog file.

  4. Restart the syslogd daemon.

    pkill -HUP syslogd

    All syslog debug messages are prefixed with the following:

    SC[resourceTypeName, resourceGroupName, resourceName, methodName]

    The following example messages have been split and carried over to multiple lines for display purposes.

    Dec 11 15:55:52 Node1 SC
          [SUNW.scics,CAL-RG,CalendarResource,ics_svc_validate]:
          [ID 855581 daemon.error] Failed to get the configuration info
    Dec 11 18:24:46 Node1 SC
          [SUNW.scics,CAL-RG,LOG-HOST-RS,ics_svc_start]:
          [ID 833212 daemon.info] Attempting to start the data service under 
          process monitor facility.

6.11 Example Output from the Calendar Configuration Program (Condensed)

This section contains a partial listing of the output from the configuration program. Your output will be much longer. At the end, it should say: “All Tasks Passed.” Inspect the log files. The location of the files is given at the end of the printout.
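For example, to review the install log named at the end of the printout (the B12181533-style suffix is a timestamp and differs from system to system):

# more /var/sadm/install/logs/Sun_Java_System_Calendar_Server_install.B12181533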

# ./csconfigurator.sh -nodisplay -noconsole -novalidate
/usr/jdk/entsys-j2se/bin/java -cp /opt/Node2/SUNWics5/cal/share/lib:
     /opt/Node2/SUNWics5/cal/share -Djava.library.path=
     /opt/Node2/SUNWics5/cal/lib configure -nodisplay -noconsole -novalidate
Java Accessibility Bridge for GNOME loaded.

Loading Default Properties...

Checking disk space...

Starting Task Sequence
===== Mon Dec 18 15:33:29 PST 2006 =====
Running /bin/rm -f /opt/Node2/SUNWics5/cal/config
/opt/Node2/SUNWics5/cal/data

===== Mon Dec 18 15:33:29 PST 2006 =====
Running /usr/sbin/groupadd icsgroup

===== Mon Dec 18 15:33:29 PST 2006 =====
Running /usr/sbin/useradd -g icsgroup -d / icsuser

===== Mon Dec 18 15:33:30 PST 2006 =====
Running /usr/sbin/usermod -G icsgroup icsuser

===== Mon Dec 18 15:33:30 PST 2006 =====
Running /bin/sh -c /usr/bin/crle


===== Mon Dec 18 15:33:32 PST 2006 =====
Running /bin/chown icsuser:icsgroup /etc/opt/Node2/SUNWics5/config/watcher.cnf


...

Sequence Completed

PASSED: /bin/rm -f /opt/Node2/SUNWics5/cal/config
/opt/Node2/SUNWics5/cal/data : status = 0

PASSED: /usr/sbin/groupadd icsgroup : status = 9

PASSED: /usr/sbin/useradd -g icsgroup -d / icsuser : status = 9


...

All Tasks Passed. Please check install log
/var/sadm/install/logs/Sun_Java_System_Calendar_Server_install.B12181533 for
further details.

6.12 Related Documentation

For more information about Sun Cluster, see the many documents available at docs.sun.com.

The following is a partial list of documentation titles: