iPlanet Messaging Server 5.0 Installation Guide



Appendix A       High Availability


This appendix helps you determine which high availability model is right for you and explains how to set up your system to run Messaging Server with high availability:



High Availability Models

There are three basic high availability models that can be used with Messaging Server:

  • Asymmetric (hot standby)

  • Symmetric

  • N + 1 (N Over 1)

Each of these models is described in greater detail in the following subsections.


Asymmetric

The basic asymmetric or "hot standby" high availability model consists of two clustered host machines, or "nodes." A logical IP address and an associated hostname are assigned to both nodes.

In this model, only one node is active at any given time; the backup or hot standby node remains idle most of the time. A single shared disk array between the two nodes is configured and assigned to the active or "primary" node. A single message store and Mail Transport Agent (MTA) queue reside on this shared volume. Additionally, only one mail service instance runs on the active node.

Figure A-1 illustrates the basic asymmetric high availability model.



Figure A-1    Asymmetric High Availability Model

Before failover, the active node is Physical-A. Upon failover, Physical-B becomes the active node and the shared volume is switched so that it is designated to Physical-B. All services are stopped on Physical-A and resume on Physical-B.

The advantage of this model is that the backup node is dedicated and completely reserved for its primary node; there is no resource contention on the backup node. The drawback is that the backup node stays idle most of the time, so its resources are not fully utilized.


Symmetric

The basic symmetric or "dual services" high availability model consists of two host machines, each with its own logical IP address. Each logical node is associated with one physical node, and each physical node controls one disk array with two storage volumes. One volume (message store and MTA queue) holds its local mail store; the other is a mirror image of its partner's mail store.

In the symmetric high availability model, both nodes are active concurrently, and each node serves as a backup node for the other. Under normal conditions, each node runs only one instance of the mail service.

Figure A-2 illustrates the basic symmetric high availability model.



Figure A-2    Symmetric High Availability Model

Upon failover, the services on the failing node are shut down and restarted on its backup node. The mail store on the failed node switches to its backup node. At this point, the backup node is running two instances of the mail server and is managing two separate mail store volumes.

The main advantage of this model is that both nodes are active simultaneously, fully utilizing machine resources. However, during a failure, multiple instances of the mail server on a single node can compete for CPU time and memory. Therefore, you should repair the failed node as quickly as possible and switch the servers back to their dual services state.

This model also provides a backup storage array; in the event of a disk array failure, its mirror image can be picked up by the service on its backup node.


N+1 (N Over 1)

The N + 1 or "N over 1" model operates in a multi-node asymmetrical configuration. N logical hostnames and N shared disk arrays are required. A single backup node is reserved as a hot standby for all the other nodes. The backup node is capable of running up to N instances of the mail server.

Figure A-3 illustrates the basic N + 1 high availability model.



Figure A-3    N + 1 High Availability Model

Upon failover of one or more active nodes, the backup node picks up the failing node's responsibilities.

The advantages of the N + 1 model are that the server load can be distributed to multiple nodes and that only one backup node is necessary to sustain all the possible node failures. Thus, the machine idle ratio is 1/N as opposed to 1/1, as is the case in a single asymmetric model.


Which High Availability Model Is Right for You?

Table A-1 summarizes the advantages and disadvantages of each high availability model. Use this information to help you determine which model is right for you.

Table A-1    High Availability Model Advantages and Disadvantages

Asymmetric

  Advantages:

  • Simple configuration

  • Backup node is 100 percent reserved

  Disadvantages:

  • Machine resources are not fully utilized

  Recommended user: A small service provider with plans to expand in the future.

Symmetric

  Advantages:

  • Better use of system resources

  • Higher availability

  Disadvantages:

  • Resource contention on the backup node

  • Mirrored disks reduce performance

  Recommended user: A medium-sized service provider with no plans to expand its backup systems in the near future.

N + 1

  Advantages:

  • Load distribution

  • Easy expansion

  Disadvantages:

  • Configuration complexity

  Recommended user: A large service provider who requires load distribution with no resource constraints.


System Down Time Calculations

Table A-2 illustrates the probability that on any given day the mail service will be unavailable due to system failure. These calculations assume that, on average, each server goes down for one day every three months (due to either a system crash or a server hang) and that each storage device goes down for one day every 12 months. They also ignore the small probability of both nodes being down simultaneously.

Table A-2    System Down Time Calculations

Single server (no high availability):

  Pr(down) = (4 days of system down + 1 day of storage down)/365 = 1.37%

Asymmetric:

  Pr(down) = (0 days of system down + 1 day of storage down)/365 = 0.27%

Symmetric:

  Pr(down) = (0 days of system down + 0 days of storage down)/365 = (near 0)

N + 1:

  Pr(down) = (0 days of system down + 1 day of storage down)/(365 x N) = 0.27%/N
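The figures in Table A-2 can be reproduced with a short calculation. This is an illustrative sketch only; the downtime assumptions (4 server-down days and 1 storage-down day per year) come from the text above.

```python
# Rough availability estimates matching Table A-2.
# Assumptions from the text: each server is down 1 day every 3 months
# (4 days/year) and each storage device is down 1 day every 12 months.
SERVER_DOWN_DAYS = 4
STORAGE_DOWN_DAYS = 1
DAYS_PER_YEAR = 365

def pr_down(server_days, storage_days, days=DAYS_PER_YEAR):
    """Probability that the service is unavailable on any given day."""
    return (server_days + storage_days) / days

single = pr_down(SERVER_DOWN_DAYS, STORAGE_DOWN_DAYS)   # no high availability
asymmetric = pr_down(0, STORAGE_DOWN_DAYS)              # failover hides server outages
symmetric = pr_down(0, 0)                               # mirrored storage hides both

print(f"single:     {single:.2%}")      # about 1.37%
print(f"asymmetric: {asymmetric:.2%}")  # about 0.27%
print(f"symmetric:  {symmetric:.2%}")   # near 0
```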



Installing High Availability



This section provides the information you need to install either the Veritas Cluster Server 1.1 (or later) or the SunCluster 2.2 high availability clustering software and prepare it for use with the Messaging Server.

The example used in this section is based on a simple, two-node cluster (the asymmetric model). As always, you should refer to your Veritas Cluster Server documentation for detailed installation instructions and information.

The basic asymmetric model requires one public and two private network interfaces and one shared disk. The private network interfaces are used for cluster heartbeat connections. The shared disk must be connected to both nodes via a SCSI fiber channel connector, and the SCSI IDs on the two ends must be different.


Cluster Agent Installation

A cluster agent is a Messaging Server API program that runs under the cluster framework. During the Messaging Server 5.0 installation process, if you choose to install the High Availability component, the setup program automatically detects the clustering software installed on your server and installs the appropriate set of agent programs in the appropriate location.


Note The setup program will only copy one set of agents onto your server, so be sure to install and configure only one type of clustering software on your server.



For Veritas Clustering Software 1.1 or later, the agent type file is located in the /etc/VRTSvcs/conf/config directory and the agent programs are in the /opt/VRTSvcs/bin/MsgSrv directory. For SunCluster 2.2, the agents are installed in the /opt/SUNWcluster/ha/msg directory.

Some items of note regarding the Messaging Server installation and high availability:

  • When you are asked for the server-root (see Step 5 in Chapter 2 "Installation Questions"), be sure that it is on a shared storage volume; otherwise, high availability will not work.

  • When you are asked for the computer name (see Step 11 in Chapter 2 "Installation Questions"), be sure to specify the logical hostname of the machine where the Messaging Server was installed, rather than the physical hostname.

  • When you are asked for the Directory Server identifier (see Step 22 in Chapter 2 "Installation Questions"), be sure to specify the logical hostname of the machine where the Directory Server was installed, rather than the physical hostname.

  • When you are asked for the IP address (see Step 35 in Chapter 2 "Installation Questions"), be sure to specify the IP address of the logical host machine, not the physical host machine.

If you are using the Veritas Cluster Server 1.1 or later high availability software, go to Veritas Cluster Server Agent Installation. If you are using the SunCluster 2.2 high availability software, go to SunCluster Agent Installation.


Veritas Cluster Server Agent Installation

After you decide which high availability model you want to implement, you are ready to install the Veritas Cluster Server software and prepare it for use with Messaging Server. The procedures in this section must be completed before you install the Messaging Server.


Note It is assumed that you are already familiar with Veritas Cluster Server concepts and commands.




Pre-Installation Instructions

This section describes the procedures for installing the Veritas Cluster Server and preparing it for use with the Messaging Server.

To install and set up the Veritas Cluster Server for use with Messaging Server:

  1. Install Veritas Cluster Server 1.1 or later on both nodes.

  2. Configure and start the Veritas Cluster Server.

    Note For these first two steps, you should refer to your Veritas Cluster Server documentation for detailed information and instructions.



  3. Create the /etc/VRTSvcs/conf/config/main.cf file.

  4. Create a service group called iMS5.

    Within this service group:

    1. Create the network resource (specify NIC as the resource type).

      Use the public network interface name for the Device attribute (for example, hme0).

    2. Create the logical_IP resource (specify IP as the resource type).

      Use the logical IP for the Address attribute and the public interface for the Device attribute.

    3. Create a sharedg resource (specify DiskGroup as the resource type).

      Use the disk group name for the DiskGroup attribute.

    4. Create a mountshared resource (specify Mount as the resource type).

      Use the shared device name for the BlockDevice attribute, the mount point for the MountPoint attribute, and the appropriate file system type for the FSType attribute.

  5. Bring all of the above resources online on the primary (active) node.

  6. Set up the dependency tree as follows: the logical_IP resource depends on the network resource, and the mountshared resource depends on the sharedg resource. Your dependency tree should look like this:
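The steps above can be sketched as a main.cf fragment. This is an illustrative sketch only, not a drop-in configuration: the system names, network device, disk group, volume, mount point, and logical IP address (Physical-A, Physical-B, hme0, msgdg, /dev/vx/dsk/msgdg/vol01, /mnt, 192.0.2.10) are placeholder assumptions; substitute your own values.

```
include "types.cf"

cluster imsCluster (
)

system Physical-A
system Physical-B

group iMS5 (
    SystemList = { Physical-A, Physical-B }
    AutoStartList = { Physical-A }
)

NIC network (
    Device = hme0
)

IP logical_IP (
    Device = hme0
    Address = "192.0.2.10"
)

DiskGroup sharedg (
    DiskGroup = msgdg
)

Mount mountshared (
    BlockDevice = "/dev/vx/dsk/msgdg/vol01"
    MountPoint = "/mnt"
    FSType = vxfs
)

// Dependency tree from step 6:
logical_IP requires network
mountshared requires sharedg
```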




Installing High Availability

At this point, you have successfully installed Veritas Cluster Server and have prepared it for the Messaging Server installation. You must install the Messaging Server on the first node, but only the High Availability component on the second node. To do so, select only the iPlanet Messaging Suite component from the iPlanet Server Products menu, then select only the High Availability component from the iPlanet Messaging Applications menu.

When you run the Messaging Server installation, the setup program checks to see if the Veritas Cluster Server has been installed and properly configured. If so, then the appropriate high availability files are installed.


Post-Installation Instructions

After these steps are completed, you must perform the following on the secondary node:

  1. Switch the logical_IP and shared disk to the secondary node.

  2. Run the setup program on the secondary node to start the Messaging Server installation:

    ./setup

  3. From the list of installation types, select Custom installation, then select just the high availability packages in the iPlanet Messaging Applications component.

On the machine where you installed the Veritas Cluster Server software:

  1. Stop the Veritas Cluster Server.

  2. Add the following line in main.cf:

    include "MsgSrvTypes.cf"

  3. Start the Veritas Cluster Server.

  4. Create a resource named mail (specify MsgSrv as the resource type) and enter the instance name (InstanceName) and the log host name (LogHostName).

  5. Set the logical_IP and mountshared resources as children of the mail resource.

    This means that the mail resource depends on both the logical_IP and mountshared resources; they must be online before the mail resource can come online.

    Your dependency tree should now look like this:
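In main.cf terms, steps 4 and 5 amount to a fragment like the following sketch. The instance name mail1 and the log host name are placeholder assumptions, and the resource bodies created earlier in this procedure are omitted.

```
include "MsgSrvTypes.cf"

MsgSrv mail (
    InstanceName = mail1
    LogHostName = "budgie.example.com"
)

// mail is the parent resource: both children must be online first.
mail requires logical_IP
mail requires mountshared
```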



You are now ready. On any node, bring the mail resource online. This automatically starts the mail server on that node.


Configuring High Availability for Veritas Cluster Server

To configure high availability for the Veritas Cluster Server, you can modify the MsgSrv type parameters in the MsgSrvTypes.cf configuration file. Below is the relevant entry:

type MsgSrv (
   static int MonitorInterval = 180
   static int MonitorTimeout = 180
   static int OnlineRetryLimit = 1
   static int OnlineWaitLimit = 1
   static int RestartLimit = 2
   static str ArgList[] = { State, InstanceName, LogHostName, PrtStatus, DebugMode }
   NameRule = resource.InstanceName
   str InstanceName
   str LogHostName
   str PrtStatus
   str DebugMode
)

Table A-3 describes the various parameters:

Table A-3    MsgSrv Parameters

Parameter         Description

MonitorInterval   The duration, in seconds, between probes.

MonitorTimeout    The duration, in seconds, before a probe times out.

OnlineRetryLimit  The number of times to retry bringing the resource online.

OnlineWaitLimit   The number of MonitorIntervals to wait after the online procedure completes and before the resource comes online.

RestartLimit      The number of restarts attempted before the resource is failed over.

Table A-4 describes the various arguments:

Table A-4    MsgSrv Arguments

Parameter      Description

State          Indicates whether the service is online on this system. This value cannot be changed by the user.

InstanceName   The Messaging Server instance name, without the msg- prefix.

LogHostName    The logical hostname associated with this instance.

PrtStatus      If set to TRUE, the online status is printed to the Veritas Cluster Server log file.

DebugMode      If set to TRUE, debugging information is sent to the Veritas Cluster Server log file.


SunCluster Agent Installation

After you decide which high availability model you want to implement, you are ready to install the SunCluster high availability software and prepare it for use with Messaging Server. The procedures in this section must be completed before you install the Messaging Server.


Note It is assumed that you are already familiar with SunCluster concepts and commands.




Pre-Installation Instructions

This section describes the procedures for installing the SunCluster software and preparing it for use with the Messaging Server.

To install and set up the SunCluster for use with Messaging Server:

  1. Install SunCluster 2.2 on both nodes.

    Note The HA fault monitor agent requires the tcpclnt binary file from the SunCluster 2.2 SUNWscpro package. You must therefore also install this package for the probing feature to work fully.



  2. Configure and start the SunCluster so you have access to both the logical IP and the shared volume.

    Note For these first two steps, you should refer to your SunCluster documentation for detailed information and instructions.




Installing High Availability

At this point, you have successfully installed the SunCluster software and have prepared it for the Messaging Server installation. You must install the Messaging Server on the first node, but only the High Availability component on the second node. To do so, select only the iPlanet Messaging Suite component from the iPlanet Server Products menu, then select only the High Availability component from the iPlanet Messaging Applications menu.

When you run the Messaging Server installation, the setup program checks to see if the SunCluster software has been installed and properly configured. If so, then the appropriate high availability files are installed.


Post-Installation Instructions

After these steps are completed, you must copy the server-root/bin/msg/ha/sc/config/ims_ha.cnf file to your shared disk mount point directory (for example, /mnt if your shared disk is mounted under the /mnt directory).

Additionally, before using the Messaging Server data service, you must register it by running the hareg -Y command.

If you want to change the logical host timeout value, use the following command:

scconf cluster_name -l seconds

where cluster_name is the name of the cluster and seconds is the number of seconds you want to set for the timeout value. The number of seconds should be twice the number of seconds needed for the start to complete. For more information, refer to your SunCluster documentation.
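Taken together, these post-installation steps can be sketched as a short command sequence. The values here are placeholder assumptions (/opt/iplanet as the server-root, /mnt as the shared disk mount point, mailcluster as the cluster name, and 600 seconds as the timeout); substitute your own.

```shell
# Copy the HA configuration file to the shared disk mount point
# (assumed server-root /opt/iplanet, assumed mount point /mnt).
cp /opt/iplanet/bin/msg/ha/sc/config/ims_ha.cnf /mnt

# Register the Messaging Server data service before using it.
hareg -Y

# Optionally change the logical host timeout; "mailcluster" and
# 600 seconds are illustrative values.
scconf mailcluster -l 600
```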


Directory Server Configuration

If you install and configure your Directory Server under the same server-root as the Messaging Server, there is no need for additional SunCluster agent files. If not, then there is an existing Sun-supplied agent package that you can use. The package is SUNWscnsl, which is supported by the SunCluster team at Sun.



Notes for Multiple Instances of Messaging Server



If you are using the Symmetric or N + 1 high availability models, there are some additional things you should be aware of during installation and configuration in order to prepare the Cluster Server for multiple instances of Messaging Server.


Create a Second Service Group

If you are using Veritas Cluster Server 1.1 or later, you must create a second service group in addition to the iMS5 group you created earlier. This group should have the same set of resources and the same dependency tree as iMS5.

If you are using SunCluster 2.2, create another logical host which consists of a different logical IP and a shared volume. The new instance can then be installed on this volume.


Note When bringing up SunCluster 2.2 using the hareg -Y command, be sure there is only one instance on each node. SunCluster 2.2 does not allow you to bring up multiple logical IPs on one node using this command.




Installation Notes

Be sure that all mail services are offline during the Messaging Server installation process; running mail services may interfere with the installation.


Configuration Notes

Multiple instances of the Messaging Server running on the same server require that the correct IP address binds to each instance. The following subsections provide instructions on how to bind the IP address for each instance. If this is not done correctly, the instances could interfere with each other.


IP Address Binding for IMAP/POP3 Servers

Use the configutil command as follows:

configutil -o service.listenaddr -v IP_address

where IP_address is the address to which the service will bind.


IP Address Binding for SMTP Service

Add the following line in the SERVICE=SMTP section in the dispatcher.cnf file:

INTERFACE_ADDRESS=IP_address


IP Address Binding for SMTP_SUBMIT Service

Add the following line in the SERVICE=SMTP_SUBMIT section in the dispatcher.cnf file:

INTERFACE_ADDRESS=IP_address
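For example, after editing, the relevant dispatcher.cnf sections might look like the fragment below. The IP address 192.0.2.10 is a placeholder, and the other options normally present in each section are omitted.

```
[SERVICE=SMTP]
INTERFACE_ADDRESS=192.0.2.10

[SERVICE=SMTP_SUBMIT]
INTERFACE_ADDRESS=192.0.2.10
```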


IP Address Binding for LDAP Service

Add the following line in the slapd.conf file:

listenhost IP_address


Changing the Default tcp_port Number

The tcp_port number in the job_controller.cnf file must be different for each instance. If two instances have the same tcp_port number, change one of them.
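For example, two instances sharing a node might use settings like these in their respective job_controller.cnf files. The port numbers are illustrative assumptions, not required values.

```
! Instance 1 job_controller.cnf
tcp_port=27442

! Instance 2 job_controller.cnf
tcp_port=27443
```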


Copyright © 2000 Sun Microsystems, Inc. Some preexisting portions Copyright © 2000 Netscape Communications Corp. All rights reserved.

Last Updated October 05, 2000