iPlanet Messaging Server 5.1 Installation Guide for UNIX



Appendix A       High Availability


This appendix helps you determine which high availability (HA) model is right for you, and describes how to set up your system to run high availability with Messaging Server.



High Availability Models

There are different high availability models that can be used with Messaging Server. Three of the more basic ones are:

  • Asymmetric (hot standby)

  • Symmetric

  • N + 1 (N Over 1)

Each of these models is described in greater detail in the following subsections. Note that not all HA products support all of these models. Refer to your HA product documentation to determine which models are supported.


Asymmetric

The basic asymmetric or "hot standby" high availability model (Figure A-1) consists of two clustered host machines or "nodes." A logical IP address and associated hostname are designated to both nodes.

In this model, only one node is active at any given time; the backup or hot standby node remains idle most of the time. A single shared disk array between both nodes is configured and is mastered by the active or "primary" node. The message store partitions and Mail Transport Agent (MTA) queues reside on this shared volume.

Figure A-1    Asymmetric High Availability Model

Before failover, the active node is Physical-A. Upon failover, Physical-B becomes the active node and the shared volume is switched so that it is mastered by Physical-B. All services are stopped on Physical-A and started on Physical-B.

The advantage of this model is that the backup node is dedicated and completely reserved for the primary node; there is no resource contention on the backup node when a failover occurs. However, this model also means that the backup node stays idle most of the time, so this resource is underutilized.


Symmetric

The basic symmetric or "dual services" high availability model consists of two host machines, each with its own logical IP address. Each logical node is associated with one physical node, and each physical node controls one disk array with two storage volumes. One volume is used for its local message store partitions and MTA queues, and the other is a mirror image of its partner's message store partitions and MTA queues.

In the symmetric high availability model (Figure A-2), both nodes are active concurrently, and each node serves as a backup for the other. Under normal conditions, each node runs only its own instances of Messaging Server.

Figure A-2    Symmetric High Availability Model

Upon failover, the services on the failing node are shut down and restarted on the backup node. At this point, the backup node is running all instances of Messaging Server from both nodes and is managing two separate volumes.

The advantage of this model is that both nodes are active simultaneously, thus fully utilizing machine resources. However, during a failure, the backup node will have more resource contention as it runs services for all instances of the Messaging Server from both nodes. Therefore, you should repair the failed node as quickly as possible and switch the servers back to their dual services state.

This model also provides a backup storage array; in the event of a disk array failure, its mirror image can be picked up by the service on its backup node.


N+1 (N Over 1)

The N + 1 or "N over 1" model operates in a multi-node asymmetrical configuration. N logical hostnames and N shared disk arrays are required. A single backup node is reserved as a hot standby for all the other nodes. The backup node is capable of concurrently running all of the Messaging Server instances from the N nodes.

Figure A-3 illustrates the basic N + 1 high availability model.



Figure A-3    N + 1 High Availability Model

Upon failover of one or more active nodes, the backup node picks up the failing node's responsibilities.

The advantages of the N + 1 model are that the server load can be distributed to multiple nodes and that only one backup node is necessary to sustain all the possible node failures. Thus, the machine idle ratio is 1/N as opposed to 1/1, as is the case in a single asymmetric model.


Which High Availability Model is Right for You?

Table A-1 summarizes the advantages and disadvantages of each high availability model. Use this information to help you determine which model is right for you.

Table A-1    High Availability Model Advantages and Disadvantages

Asymmetric
  Advantages:        Simple configuration; backup node is 100 percent reserved.
  Disadvantages:     Machine resources are not fully utilized.
  Recommended user:  A small service provider with plans to expand in the future.

Symmetric
  Advantages:        Better use of system resources; higher availability.
  Disadvantages:     Resource contention on the backup node; mirrored disks reduce disk write performance.
  Recommended user:  A medium-sized service provider with no expansion plans on their backup systems in the near future.

N + 1
  Advantages:        Load distribution; easy expansion.
  Disadvantages:     Configuration complexity.
  Recommended user:  A large service provider who requires distribution with no resource constraints.


System Down Time Calculations

Table A-2 illustrates the probability that on any given day the mail service will be unavailable due to system failure. These calculations assume that on average, each server goes down for one day every three months due to either a system crash or server hang, and that each storage device goes down one day every 12 months. They also ignore the small probability of both nodes being down simultaneously.

Table A-2    System Down Time Calculations

Model                                   Server Down Time Probability

Single server (no high availability)    Pr(down) = (4 days of system down + 1 day of storage down)/365 = 1.37%
Asymmetric                              Pr(down) = (0 days of system down + 1 day of storage down)/365 = 0.27%
Symmetric                               Pr(down) = (0 days of system down + 0 days of storage down)/365 = (near 0)
N + 1                                   Pr(down) = (0 days of system down + 1 day of storage down)/(365 x N) = 0.27%/N



Installing High Availability for Veritas Cluster Server 1.1 or later or Sun Cluster 2.2



This section provides the information you need to install either the Veritas Cluster Server 1.1 or later or Sun Cluster 2.2 high availability clustering software and prepare it for use with the Messaging Server. (Refer to your Veritas or Sun Cluster Server documentation for detailed installation instructions and information as needed.) The example used in this section is based on a simple two-node cluster (the asymmetric model).

The basic asymmetric model requires one public and two private network interfaces and one shared disk. The private network interfaces are used for cluster communications. The shared disk must be connected to both nodes.


Cluster Agent Installation

A cluster agent is a Messaging Server program that runs under the cluster framework. During the Messaging Server 5.1 installation process, if you choose to install the High Availability component, the setup program will automatically detect the clustering software you have installed on your server and install the appropriate set of agent programs into the appropriate location.


Note The setup program will only copy one set of agents—SC 2.2 or VCS 1.1—onto your server, so be sure to install and configure only one type of clustering software on your server.



For Veritas Clustering Software 1.1 or later, the agent type file is located in the /etc/VRTSvcs/conf/config directory and the agent programs are in the /opt/VRTSvcs/bin/MsgSrv directory. For Sun Cluster 2.2, the agents are installed in the /opt/SUNWcluster/ha/msg directory.

Some items of note regarding the Messaging Server installation and high availability:

  • When running the installation, make sure that the HA logical host names and associated IP addresses for the Messaging and Directory servers are functioning (that is, active), because portions of the installation will make TCP connections using them (for example, to provision the Directory Server with configuration information); a quick check is sketched after this list. If the Messaging and Directory servers are to run on the same host, they may use the same logical host name and IP address. Run the installation on the cluster node currently pointed at by the HA logical host name for the Messaging Server.

  • When you are asked for the server-root (see Step 5 in Chapter 3, "Installation Questions"), be sure that the server-root is on the shared file system; otherwise, high availability will not work correctly. For example, after failing over to another node, the servers would no longer see the data accumulated by the servers on the failed node.

  • When you are asked for the fully-qualified domain name of the messaging server host (see Step 11 in Chapter 3, "Installation Questions"), be sure to specify the fully-qualified HA logical hostname for the messaging server. During the install, TCP connections using this logical hostname will be attempted.

  • When you are asked for the Directory Server identifier (see Step 22 in Chapter 3, "Installation Questions"), specify the fully-qualified HA logical hostname for the directory server. This logical host name must also be active as connection attempts using it will be made.

  • When you are asked for the IP address of Messaging Server (see Step 35 in Chapter 3, "Installation Questions"), be sure to specify the IP address associated with the logical host name for Messaging Server. Do not use the IP address for the physical host.
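The following is a minimal check that an HA logical host name is active; the host name mail is a placeholder echoing the examples later in this appendix:

# ping mail        Verify that the logical host name resolves and answers
# ifconfig -a      Confirm the logical IP address is configured on this node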

If you are using the Veritas Cluster Server 1.1 or later high availability software, go to Veritas Cluster Server Agent Installation. If you are using the Sun Cluster 2.2 high availability software, go to Sun Cluster Agent Installation.


Veritas Cluster Server Agent Installation

After you decide which high availability model you want to implement, you are ready to install the Veritas Cluster Server software and prepare it for use with Messaging Server. The procedures in this section must be completed before you install the Messaging Server.


Note It is assumed that you are already familiar with Veritas Cluster Server concepts and commands.




Pre-Installation Instructions

This section describes the procedures for installing the Veritas Cluster Server and preparing it for use with the Messaging Server.

To install and set up the Veritas Cluster Server for use with Messaging Server:

  1. Install Veritas Cluster Server 1.1 or later on both nodes.

  2. Configure and start the Veritas Cluster Server.

    Note For these first two steps, you should refer to your Veritas Cluster Server documentation for detailed information and instructions.



  3. Create the /etc/VRTSvcs/conf/config/main.cf file.

  4. Create a service group called iMS5.

    Within this service group:

    1. Create the network resource (specify NIC as the resource type).

      Use the public network interface name for the Device attribute (for example, hme0).

    2. Create the logical_IP resource (specify IP as the resource type).

      Use the logical IP for the Address attribute and the public interface for the Device attribute.

    3. Create a sharedg resource (specify DiskGroup as the resource type).

      Use the disk group name for the DiskGroup attribute.

    4. Create a mountshared resource (specify Mount as the resource type).

      Use the shared device name for the BlockDevice attribute, the mount point for the MountPoint attribute, and set FSType to the appropriate file system type.

  5. Bring all of the above resources online on the primary (active) node.

  6. Set up the dependency tree as follows: the logical_IP resource depends on the network resource, and the mountshared resource depends on the sharedg resource. A command-line sketch of Steps 4 through 6 follows.
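The following sketch shows one way to carry out Steps 4 through 6 from the command line. The node names node-a and node-b, interface hme0, address 10.0.100.10, disk group msgdg, volume vol01, and mount point /mnt are all placeholder assumptions; the same configuration can also be built through the VCS GUI.

# haconf -makerw                                     Open the VCS configuration for writing
# hagrp -add iMS5
# hagrp -modify iMS5 SystemList node-a 0 node-b 1
# hares -add network NIC iMS5
# hares -modify network Device hme0
# hares -add logical_IP IP iMS5
# hares -modify logical_IP Device hme0
# hares -modify logical_IP Address 10.0.100.10
# hares -add sharedg DiskGroup iMS5
# hares -modify sharedg DiskGroup msgdg
# hares -add mountshared Mount iMS5
# hares -modify mountshared BlockDevice /dev/vx/dsk/msgdg/vol01
# hares -modify mountshared MountPoint /mnt
# hares -modify mountshared FSType vxfs
# hares -link logical_IP network                     logical_IP depends on network
# hares -link mountshared sharedg                    mountshared depends on sharedg
# for r in network logical_IP sharedg mountshared; do hares -modify $r Enabled 1; done
# hagrp -online iMS5 -sys node-a                     Bring the resources online on the primary node
# haconf -dump -makero                               Save and close the configuration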




Installing High Availability

At this point, you have successfully installed Veritas Cluster Server and have prepared it for the Messaging Server installation. You must install the Messaging Server on the first node, but only the High Availability component on the second node. To do so, select only the iPlanet Messaging Suite component from the iPlanet Server Products menu, then select only the High Availability component from the iPlanet Messaging Applications menu.

When you run the Messaging Server installation, the setup program checks to see if the Veritas Cluster Server has been installed and properly configured. If so, then the appropriate high availability files are installed.


Post-Installation Instructions

After these steps are completed, you must perform the following on the secondary node:

  1. Switch the logical_IP and shared disk to the secondary node.

  2. Run the setup program on the secondary node to start the Messaging Server installation:

    ./setup

  3. From the list of installation types, select Custom installation, then select just the high availability packages in the iPlanet Messaging Applications component.

On the machine where you installed the Veritas Cluster Server software:

  1. Stop the Veritas Cluster Server.

  2. Add the following line to main.cf:

    include "MsgSrvTypes.cf"

  3. Start the Veritas Cluster Server.

  4. Create a resource named mail (specify MsgSrv as the resource type) and enter the instance name (InstanceName) and the log host name (LogHostName).

  5. Set the logical_IP and mountshared resources as children of the mail resource.

    This means that the mail resource depends on both the logical_IP and mountshared resources.



Now you are ready. On any node, bring the mail resource online; this automatically starts the mail server on that node. A command-line sketch of these steps follows.
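As a sketch, the mail resource from the preceding steps might be created and brought online as follows. The instance name stork and log host name mail are placeholder assumptions; substitute your own values.

# haconf -makerw
# hares -add mail MsgSrv iMS5
# hares -modify mail InstanceName stork
# hares -modify mail LogHostName mail
# hares -modify mail Enabled 1
# hares -link mail logical_IP        mail depends on logical_IP
# hares -link mail mountshared       mail depends on mountshared
# haconf -dump -makero
# hares -online mail -sys node-a     Start the mail server on node-a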


Configuring High Availability for Veritas Cluster Server

To configure high availability for the Veritas Cluster Server, you can modify the parameters in the MsgSrvTypes.cf configuration file. Below is the relevant entry:

type MsgSrv (
   static int MonitorInterval = 180
   static int MonitorTimeout = 180
   static int OnlineRetryLimit = 1
   static int OnlineWaitLimit = 1
   static int RestartLimit = 2
   static str ArgList[] = { State, InstanceName, LogHostName, PrtStatus, DebugMode }
   NameRule = resource.InstanceName
   str InstanceName
   str LogHostName
   str PrtStatus
   str DebugMode
)

Table A-3 describes the various parameters:

Table A-3    MsgSrv Parameters

Parameter         Description

MonitorInterval   The interval, in seconds, between probes.
MonitorTimeout    The duration, in seconds, before a probe times out.
OnlineRetryLimit  The number of times to retry bringing the resource online.
OnlineWaitLimit   The number of MonitorIntervals to wait after the online procedure completes and before the resource comes online.
RestartLimit      The number of restarts attempted before the resource is failed over.

Table A-4 describes the various arguments:

Table A-4    MsgSrv Arguments

Argument       Description

State          Indicates whether the service is online on this system. This value cannot be changed by the user.
InstanceName   The Messaging Server instance name, without the msg- prefix.
LogHostName    The logical host name associated with this instance.
PrtStatus      If set to TRUE, the online status is printed to the Veritas Cluster Server log file.
DebugMode      If set to TRUE, debugging information is sent to the Veritas Cluster Server log file.


Sun Cluster Agent Installation

After you decide which high availability model you want to implement, you are ready to install the Sun Cluster high availability software and prepare it for use with Messaging Server. The procedures in this section must be completed before you install the Messaging Server.


Note It is assumed that you are already familiar with Sun Cluster concepts and commands.




Pre-Installation Instructions

This section describes the procedures for installing the Sun Cluster software and preparing it for use with Messaging Server.

To install and set up the Sun Cluster for use with Messaging Server:

  1. Install Sun Cluster 2.2 on both nodes.

    Note The HA fault monitor agent requires the tcpclnt binary file in the Sun Cluster 2.2 SUNWscpro package. You must therefore also install this package for the probing feature to work fully.



  2. Configure and start the Sun Cluster so you have access to both the logical IP and the shared volume.

    Note For these first two steps, you should refer to your Sun Cluster documentation for detailed information and instructions.




Installing High Availability

At this point, you have successfully installed the Sun Cluster software and have prepared it for the Messaging Server installation. You must install the Messaging Server on the first node, but only the High Availability component on the second node. (You must switch the logical_IP and shared disk to the secondary node before installing High Availability components.) To do so, select only the iPlanet Messaging Suite component from the iPlanet Server Products menu, then select only the High Availability component from the iPlanet Messaging Applications menu.

When you run the Messaging Server installation, the setup program checks to see if the Sun Cluster software has been installed and properly configured. If so, then the appropriate high availability files are installed.


Post-Installation Instructions

You must perform the following on the secondary node:

  1. Conduct a failover to the secondary node.

  2. Run the setup program on the secondary node to start the Messaging Server installation:

    ./setup

  3. From the list of installation types, select Custom installation, then select just the high availability packages in the iPlanet Messaging Applications component.

After these steps are completed, you must copy the server-root/bin/msg/ha/sc/config/ims_ha.cnf file to your shared disk mount point directory (for example, /mnt if your shared disk is mounted under the /mnt directory).

Additionally, before using the Messaging Server data service, you must register it by running the hareg -Y command.
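As a minimal sketch of these two steps, using the <server-root> placeholder and assuming the shared disk is mounted at /mnt:

# cp <server-root>/bin/msg/ha/sc/config/ims_ha.cnf /mnt
# hareg -Y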

If you want to change the logical host timeout value, use the following command:

scconf cluster_name -l seconds

where cluster_name is the name of the cluster and seconds is the number of seconds you want to set for the timeout value. The number of seconds should be twice the number of seconds needed for the start to complete. For more information, refer to your Sun Cluster documentation.
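For example, if starting the services takes about 150 seconds, you might set a 300-second timeout; the cluster name ims-cluster is a placeholder assumption:

scconf ims-cluster -l 300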


Directory Server Configuration

If you install and configure your Directory Server under the same server-root as the Messaging Server, there is no need for additional Sun Cluster agent files. If not, then there is an existing Sun-supplied agent package that you can use. The package is SUNWscnsl, which is supported by the Sun Cluster team at Sun.


Un-Installing Veritas Cluster Server 1.1 or later and Sun Cluster 2.2

To uninstall Veritas Cluster Server and Sun Cluster 2.2:

  1. Perform the normal uninstall procedures as described in Appendix C, "Running the uninstall Program."

  2. Remove the instance entry from the /etc/msgregistry.inf file if multiple instances are installed; otherwise, remove the /etc/msgregistry.inf file on both nodes.

At this point, uninstall instructions differ depending on whether you are removing Veritas Cluster Server or Sun Cluster. If you are using the Veritas Cluster Server 1.1 or later high availability software, go to Un-Installing High Availability for Veritas Cluster Server. If you are using the Sun Cluster 2.2 high availability software, go to Un-Installing High Availability for SunCluster.


Un-Installing High Availability for Veritas Cluster Server

To un-install the high availability components for Veritas Cluster Server:

  1. Remove the dirsync entries from the cron job table on both nodes.

  2. Delete all of the Veritas Cluster Server resources created during installation.

  3. Stop the Veritas Cluster Server and remove the following files on both nodes if no more instances exist (see the sketch after this list):

    /etc/VRTSvcs/conf/config/MsgSrvTypes.cf
    /opt/VRTSvcs/bin/MsgSrv/online
    /opt/VRTSvcs/bin/MsgSrv/offline
    /opt/VRTSvcs/bin/MsgSrv/clean
    /opt/VRTSvcs/bin/MsgSrv/monitor
    /opt/VRTSvcs/bin/MsgSrv/sub.pl

  4. Remove the Messaging Server entries from the /etc/VRTSvcs/conf/config/main.cf file on both nodes.
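As a hedged sketch of Step 3, you might stop the cluster once from any node and then remove the agent files on each node; review the main.cf edits of Step 4 by hand rather than scripting them:

# hastop -all                                    Stop the Veritas Cluster Server on all nodes
# rm /etc/VRTSvcs/conf/config/MsgSrvTypes.cf     Run on both nodes
# rm -r /opt/VRTSvcs/bin/MsgSrv                  Removes online, offline, clean, monitor, sub.pl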


Un-Installing High Availability for SunCluster

To un-install the high availability components for SunCluster:

  1. Run the following command:

    hareg -u ims50

  2. Remove the following:

    /opt/SUNWcluster/ha/msg/ims_common
    /opt/SUNWcluster/ha/msg/ims_fm_probe
    /opt/SUNWcluster/ha/msg/ims_start_net
    /opt/SUNWcluster/ha/msg/ims_stop_net

  3. Remove the ims_ha.cnf file from your shared disk mount point directory (for example, /mnt if your shared disk is mounted under the /mnt directory).



Installing High Availability for Sun Cluster 3.0

This section describes how to install and configure the Messaging Server as a Sun Cluster 3.0 Highly Available (HA) Data Service. Documentation for Sun Cluster 3.0 may be found at:

http://docs.sun.com/ab2/coll.572.7/


Sun Cluster 3.0 Limitations and Performance

  • Veritas File System (VxFS) is not supported in this release. (Support is committed for Sun Cluster 3.1.)

  • No rolling upgrade support.


Sun Cluster 3.0 Prerequisites

This section presumes the following:

  • Sun Cluster 3.0 is installed in a Solaris 2.8 environment with required patches.

  • The HA agent for Netscape Directory Server is installed (on Sun Cluster 3.0 Agents CDROM).

  • If the system is using shared disks, either Solstice DiskSuite or Veritas Volume Manager is used.


Installing the Messaging Server HA support for Sun Cluster 3.0

Each cluster node requires three packages to be installed to run the Messaging Server:

  • SUNWscdev from the Sun Cluster 3.0 CDROM (704-7524-10).

  • SUNWscsdk from the iPlanet Messaging Server CD in the directory solaris/sc30. This is an updated version of the SUNWscsdk package found on the Sun Cluster 3.0 Cool Stuff CDROM (704-7494-10). The version on the CDROM has a memory leak (Bugtraq #4398767).

  • SUNWscims from the iPlanet Messaging Server CD in the directory solaris/iMS_sc30.

Install each of these packages on each cluster node using pkgadd. For instance, if each of these packages is present in the current working directory, use the following command to install them:

# pkgadd -d . SUNWscsdk SUNWscdev SUNWscims
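Optionally, you can confirm that the packages are present on a node by querying them with pkginfo:

# pkginfo SUNWscdev SUNWscsdk SUNWscims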

Once these three packages are installed, you're ready to configure Messaging Server for HA.


Configuring the Messaging Server HA Support for Sun Cluster 3.0

This section describes how to configure HA support for the iPlanet Messaging Server by providing a simplified example and a more complex example.



Note The Messaging Server must be installed in the global file system directories, not on a node's local file system.




Simple Example

This example assumes that the messaging and directory server run on the same cluster node and use the same HA logical host name and IP address. The physical host names are assumed to be mail-1 and mail-2, with an HA logical host name of mail. Figure A-4 depicts the nested dependencies of the different HA resources you will create in configuring Messaging Server HA support.

Figure A-4    A simple iPlanet Messaging Server HA configuration

Before proceeding, make sure that the Messaging Server is shut down on all cluster nodes. The easiest way to check this is to issue the command:

# ps -ef | grep <server-root>

on each cluster node where server-root is the path to the Messaging Server top-level directory (for example, /global/ims/server5/). If no messaging servers are running, then you should see no processes other than your grep.

  1. Become the root user and create a console device.

    All of the following Sun Cluster commands require that you have logged in as root. You will also want to have a console device or window for viewing messages output to /dev/console.

  2. Add required resource types.

    Configure Sun Cluster to know about the resource types we will be using. This is done with the scrgadm -a -t command:

    # scrgadm -a -t SUNW.HAStorage
    # scrgadm -a -t SUNW.nsldap
    # scrgadm -a -t SUNW.ims

  3. Create a resource group for the Messaging Server instance.

    If you have not done so already, create a resource group and make it visible on the cluster nodes which will run the Messaging Server instance. The following command creates a resource group named IMS-RG, making it visible on the cluster nodes mail-1 and mail-2:

    # scrgadm -a -g IMS-RG -h mail-1,mail-2

    You may, of course, use whatever name you wish for the resource group.

  4. Create an HA logical host name resource.

    If you have not done so already, create and enable a resource for the HA logical host name, placing it in the resource group for the Messaging Server instance. The following command does so using the logical host name mail. Since the -j switch is omitted, the name of the resource created will also be mail.

    # scrgadm -a -L -g IMS-RG -l mail
    # scswitch -Z -g IMS-RG

  5. Install Messaging Server.

    Install Messaging Server using the HA logical host name created and enabled in Step 4. Make sure you replicate /etc/msgregistry.inf on the second node.

  6. Create an HA storage resource.

    Next, you need to create an HA storage resource for the file systems on which the messaging and directory servers are dependent. The following command creates an HA storage resource named ha-storage, and the file system /global/ims/server5 is placed under its control:

    # scrgadm -a -j ha-storage -g IMS-RG \
    -t SUNW.HAStorage \
    -x ServicePaths=/global/ims/server5

    The comma-separated ServicePaths list gives the mount points of the cluster file systems on which the messaging and directory servers are both dependent. In the above example, only one mount point, /global/ims/server5, is specified. If one of the servers has additional file systems on which it is dependent, then you can create an additional HA storage resource and indicate that additional dependency in Step 7 or 9.

  7. Create an HA LDAP resource.

    To your growing resource group, add a resource of type SUNW.nsldap to monitor the directory server. The Confdir_list extension property of SUNW.nsldap is used to indicate the path to the directory server's top-level directory on the global file system. Note also that this resource is dependent upon both the HA logical host name and HA storage resources established in Steps 4 and 6. However, since the SUNW.nsldap resource type specifies Network_resources_used in its resource type registration file, you do not need to explicitly specify the HA logical host name resource in the Resource_dependencies option below; you only need to specify the HA storage resource with that option. The following command accomplishes all of this, naming the HA LDAP resource ha-ldap.

    # scrgadm -a -j ha-ldap -t SUNW.nsldap -g IMS-RG \
         -x Confdir_list=/global/ims/server5/slapd-mail \
         -y Resource_dependencies=ha-storage

  8. Enable the HA LDAP resource.

    Before proceeding with creating the HA Messaging Server resource, we must bring the HA LDAP resource online. This is because the act of creating the HA Messaging Server resource will attempt to validate the Messaging Server resource definition, part of which requires accessing the Messaging Server configuration information stored in the LDAP server.

    If you skipped Steps 3 through 5 because you had done them previously, then the IMS-RG resource group is partially online already. In that case, issue the following commands to enable the HA storage and LDAP resources:

    # scswitch -e -j ha-storage

    # scswitch -e -j ha-ldap

    If you did execute Steps 3 through 5, then instead use the command:

    # scswitch -Z -g IMS-RG

  9. Create an HA Messaging Server resource.

    It's now time to create the HA Messaging Server resource and add it to the resource group. This resource is dependent upon the HA logical host name, HA storage, and HA LDAP resources. As with the HA LDAP resource, we do not need to specify the HA logical host name resource. Moreover, since the HA LDAP resource is itself dependent upon the HA storage resource, we merely need to specify a dependency upon the HA LDAP resource.

    In creating the HA Messaging Server resource, we need to indicate the path to the Messaging Server top-level directory—the server-root path—as well as the name of the Messaging Server instance to make HA. These are done with the IMS_serverroot and IMS_instance extension properties as shown in the following command.

    # scrgadm -a -j ha-ims -t SUNW.ims -g IMS-RG \
              -x IMS_serverroot=/global/ims/server5 \
              -x IMS_instance=stork \
              -y Resource_dependencies=ha-ldap

    The above command makes an HA Messaging Server resource named ha-ims for the Messaging Server instance stork installed on the global file system at /global/ims/server5. The HA Messaging Server resource is dependent upon the HA LDAP resource named ha-ldap created in Step 7 above.

    If the Messaging Server instance has file system dependencies beyond that of the directory server, then you can create an additional HA storage resource for those additional file systems. Then include that additional HA storage resource name in the Resource_dependencies option of the above command.

  10. Enable the Messaging Server resource.

    It's now time to activate the HA Messaging Server resource, thereby bringing the messaging server online. To do this, use the command

    # scswitch -e -j ha-ims

    The above command enables the ha-ims resource of the IMS-RG resource group. Since the IMS-RG resource group was previously brought online, the above command also brings ha-ims online.

  11. Verify that things are working.

    Use the scstat command to see if the IMS-RG resource group is online. You may want to look at the output directed to the console device for any diagnostic information. Also look in the syslog file, /var/adm/messages.
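    For example, a quick status check (output details vary by release):

    # scstat -g        Show the state of resource groups and their resources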

  12. Fail the resource group over to another cluster node.

    Manually fail the resource group over to another cluster node. Use the scstat command to see what node the resource group is currently running on ("online" on). For instance, if it is online on mail-1, then fail it over to mail-2 with the command:

    # scswitch -z -g IMS-RG -h mail-2


Complex Example

In this more complicated example, we consider the case where iPlanet Messaging Server is dependent upon the following:

  • An LDAP server running on the same node and containing configuration information.

  • An LDAP server running on a different node and containing user information.

  • Some additional file systems containing message store partitions and MTA message queues.

Figure A-5 shows how these dependencies are realized using Sun Cluster resource groups. Key parameters of each resource are shown in the figure. The commands required to realize this configuration follow.

Figure A-5    A complex iPlanet Messaging Server HA configuration


# scrgadm -a -t SUNW.nsldap
# scrgadm -a -t SUNW.ims

# scrgadm -a -g LDAP-USER-RG -h ldap-1,ldap-2   Create LDAP-USER-RG resource group

# scrgadm -a -L -g LDAP-USER-RG -l ldap   Create HA logical host name

# scswitch -Z -g LDAP-USER-RG    Bring LDAP-USER-RG online

     ...Install and configure directory server in /global/ids

# scrgadm -a -j ha-ids-disk -g LDAP-USER-RG \   Create HA storage resource
          -t SUNW.HAStorage \
          -x ServicePaths=/global/ids

# scrgadm -a -j ha-ldap-user -g LDAP-USER-RG \   Create HA LDAP resource
          -t SUNW.nsldap \
          -x Confdir_list=/global/ids/slapd-ldap \
          -y Resource_dependencies=ha-ids-disk

# scswitch -e -j ha-ids-disk    Bring rest of LDAP-USER-RG online
# scswitch -e -j ha-ldap-user

# scrgadm -a -g IMS-RG -h mail-1,mail-2    Create IMS-RG resource group

# scrgadm -a -L -g IMS-RG -l mail    Create HA logical host name

# scswitch -Z -g IMS-RG    Bring IMS-RG online

    ...Install and configure messaging server and 2nd directory server in /global/ims/server5

# scrgadm -a -j ha-ims-serverroot -g IMS-RG \   Create HA storage resource
          -t SUNW.HAStorage \
          -x ServicePaths=/global/ims/server5

# scrgadm -a -j ha-ldap-config -g IMS-RG \   Create HA LDAP resource
          -t SUNW.nsldap \
          -x Confdir_list=/global/ims/server5/slapd-mail \
          -y Resource_dependencies=ha-ims-serverroot

# scrgadm -a -j ha-ims-data -g IMS-RG  \    Create another HA storage resource
          -t SUNW.HAStorage \
          -x ServicePaths=/global/ims/store,/global/ims/queues

# scswitch -e -j ha-ims-serverroot   Bring LDAP online
# scswitch -e -j ha-ldap-config

# scrgadm -a -j ha-ims -g IMS-RG   \   Create HA Messaging Server resource
          -t SUNW.ims \
          -x IMS_serverroot=/global/ims/server5 \
          -x IMS_instance=stork \
          -y Resource_dependencies=ha-ldap-config,ha-ims-data \
          -y RG_dependencies=LDAP-USER-RG

# scswitch -e -j ha-ims-data   Bring Messaging Server online
# scswitch -e -j ha-ims



Unconfiguring the Messaging Server HA Support for Sun Cluster 3.0

This section describes how to undo the HA configuration. This section assumes the simple example configuration. For other configurations, the specific commands (for example, Step 3) may be different but will otherwise follow the same logical order.

  1. Become the root user.

    All of the following Sun Cluster commands require that you be running as user root.

  2. Bring the resource group offline.

    To shut down all of the resources in the resource group, issue the command

    # scswitch -F -g IMS-RG

    This shuts down all resources within the resource group (for example, the Messaging Server, LDAP, and the HA logical host name).

  3. Disable the individual resources.

    Next, disable the resources one-by-one with the commands

    # scswitch -n -j ha-ims
    # scswitch -n -j ha-ldap
    # scswitch -n -j ha-storage
    # scswitch -n -j mail

  4. Remove the individual resources from the resource group.

    Once the resources have been disabled, you may remove them one-by-one from the resource group with the commands:

    # scrgadm -r -j ha-ims
    # scrgadm -r -j ha-ldap
    # scrgadm -r -j ha-storage
    # scrgadm -r -j mail

  5. Remove the resource group.

    Once all the resources have been removed from the resource group, the resource group itself may be removed with the command:

    # scrgadm -r -g IMS-RG

  6. Remove the resource types (optional).

    Should you need to remove the resource types from the cluster, issue the commands:

    # scrgadm -r -t SUNW.ims
    # scrgadm -r -t SUNW.nsldap
    # scrgadm -r -t SUNW.HAStorage



Notes for Multiple Instances of Messaging Server

If you are using the Symmetric or N + 1 high availability models, there are some additional things you should be aware of during installation and configuration in order to prepare the Cluster Server for multiple instances of Messaging Server. This section covers those issues and procedures.

Note During the Messaging Server installation, be sure that all mail services are offline; running mail services may interfere with the Messaging Server installation.




Making Additional Messaging Server Instances Highly Available

If you are using Veritas Cluster Server 1.1 or later, you must create a second service group in addition to the iMS5 group you created earlier. This group should have the same set of resources and the same dependency tree as iMS5.

If you are using Sun Cluster 2.2, create another logical host which consists of a different logical IP and a shared volume. The new instance can then be installed on this volume.


Note When bringing up Sun Cluster 2.2 using the hareg -Y command, be sure there is only one instance on each node. Sun Cluster 2.2 does not allow you to bring up multiple logical IPs on one node using this command.



For Sun Cluster 3.0, whether or not you create another resource group will depend upon the usage of the additional Messaging Server instance. If failover of the additional instance is to be independent of the existing instance, then you will likely want to create a new resource group for the additional instance. If, however, the additional instance should failover when the existing instance does, then you may want to use the same resource group for both instances.


Binding IP Addresses for Each Messaging Server Instance on the Same Server

Multiple instances of the Messaging Server running on the same server require that the correct IP address be bound to each instance. The following subsections provide instructions on how to bind the IP address for each instance. If this is not done correctly, the multiple instances could interfere with each other. These instructions refer to Sun Cluster 2.2, Sun Cluster 3.0, and the Veritas Cluster Server.

Part of configuring Messaging Server for HA involves configuring the interface address on which the Messaging Servers bind and listen for connections. By default, the servers bind to all available interface addresses. However, in an HA environment, you want the servers to bind specifically to the interface address associated with an HA logical host name. (Were they to bind to all available interfaces, difficulties would arise when two different Messaging Server instances attempt to run on the same physical host.)

A script is therefore provided to configure the interface address used by the servers belonging to a given Messaging Server instance. Optionally, the script can be directed to configure an LDAP server instance living in the same Messaging Server root to use the same interface address. Note that the script identifies the interface address by means of the IP address which you have or will be associating with the HA logical host name used by the servers.

If the LDAP server or servers you will be using are located on a different host, then the ha_ip_config script does not configure those LDAP servers. In general, they should not require additional configuration as a result of configuring the Messaging Server to be HA.

The script effects the configuration changes by modifying or creating the following configuration files. For the file

<server-root>/msg-<instance>/imta/config/dispatcher.cnf

it adds or changes the INTERFACE_ADDRESS option for the SMTP and SMTP Submit servers. For the file

<server-root>/msg-<instance>/imta/config/job_controller.cnf

it adds or changes the INTERFACE_ADDRESS option for the Job Controller. For the file

<server-root>/slapd-<instance>/config/slapd.conf

it adds or changes the listenhost option for the LDAP server (optional) and, finally, it sets the configutil service.listenaddr parameter used by the POP, IMAP, and Messenger Express HTTP servers.

Note that the original configuration files, if any, are renamed to *.pre-ha.
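As a hedged sketch of the result (the address echoes the sample run below, and the exact file layout may differ), the dispatcher and Job Controller files gain a line such as:

! added by ha_ip_config
INTERFACE_ADDRESS=10.0.37.10

and the listen-address change is equivalent to setting the parameter by hand with configutil:

# configutil -o service.listenaddr -v 10.0.37.10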

Run the script as follows:

  1. Become root

  2. Execute <server-root>/bin/msg/install/bin/ha_ip_config

  3. The script presents the questions described below. The script may be aborted by typing control-d in response to any of the questions. Default answers to the questions will appear within square brackets, [ ]. To accept the default answer, simply press the RETURN key.

    1. Logical IP address: Specify the IP address assigned to the logical host name which the Messaging Server instance will be using. The IP address must be specified in dotted decimal form, for example, 10.0.100.10.

    2. Messaging Server root: Specify the absolute path to the top-level directory in which Messaging Server is installed. Within this directory reside the Messaging Server instances, each in a msg-* subdirectory.

    3. Messaging Server instance name: Specify the name of the Messaging Server instance to configure. Do not include the leading msg- in the instance name.

    4. Also configure an LDAP server instance in the same Messaging Server root: Answer "yes" if you would like to configure an LDAP server instance located in the same Messaging Server root. The LDAP server instance will be configured with the same IP address as the Messaging Server instance. Answer "no" to skip configuring an LDAP server instance.

      This question will not appear if the Messaging Server root does not contain any subdirectories whose names begin with slapd-.

    5. LDAP instance name: Specify the name of the LDAP server instance to configure. Omit the leading slapd- from the instance name.

      This question will not appear if you answered "no" to the previous question on configuring an LDAP server instance (question 4).

    6. Do you wish to change any of the above choices: Answer "no" to accept your answers and effect the configuration change. Answer "yes" if you wish to alter your answers.

  4. A sample run of the script is shown below.

    # su root
    # <server-root>/bin/msg/install/bin/ha_ip_config

    Please specify the IP address assigned to the HA logical host name. Use dotted decimal form, a.b.c.d

    Logical IP address: 10.0.37.10

    Please specify the path to the top level directory in which iMS is installed. This is the server root directory which contains the instance directories.

    iMS server root: /opt/iplanet/server5

    Next, please specify the name of the iMS instance for which to effect the configuration changes. Omit the leading "msg-" from the name. Possible instances include:

    mail-1
    mail-2

    iMS instance name [mail-1]: mail-1

    Also configure an LDAP server instance in the same iMS server root [yes]? yes

    Please specify the name of the LDAP server instance for which to effect the configuration changes. This LDAP server instance must live in a subdirectory of the iMS server root previously specified. Omit the leading "slapd-" from the LDAP server instance name. Possible instances include:

    elenchus

    LDAP instance name [elenchus]: elenchus

    Logical IP address: 10.0.37.10
    iMS server root: /opt/iplanet/server5
    iMS instance name: mail-1
    LDAP instance name: elenchus

    Do you wish to change any of the above choices (yes/no) [no]? no

    Updating the file /opt/iplanet/server5/msg-mail-1/imta/config/dispatcher.cnf
    Updating the file /opt/iplanet/server5/msg-mail-1/imta/config/job_controller.cnf
    Updating the file /opt/iplanet/server5/slapd-elenchus/config/slapd.conf
    Setting the service.listenaddr configutil parameter
    Configuration successfully changed

    #

  5. If you are running, or might run, more than one instance of the Messaging Server on the same node, edit the job_controller.cnf files located in the <server-root>/msg-<instance>/imta/config/ directories, and ensure that each instance uses a different TCP port number for its Job Controller. This is done via the TCP_PORT option setting in that file, as sketched below.
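For example, two instances might be separated as follows; the instance names echo the sample run above, and the port numbers are placeholders (27442 being the customary default):

! In <server-root>/msg-mail-1/imta/config/job_controller.cnf
TCP_PORT=27442

! In <server-root>/msg-mail-2/imta/config/job_controller.cnf
TCP_PORT=27443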


Testing Nodes

Before proceeding, take the time to ensure that the iPlanet Messaging Server can be started and stopped on each node in the cluster. Begin by testing on the node on which you installed the Messaging Server. Then, fail the logical host name over to another cluster node with the command (Sun Cluster 3.0):

# scswitch -z -g IMS-RG -h name-of-physical-host-to-failover-to

When you're done testing, be sure to shut down the Messaging Server before configuring it for HA. Do the same for the Directory Server if it is running on a different host or is otherwise separate from the Messaging Server.
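To start and stop the server on a node during this testing, you can use the instance scripts; the server root and instance name below echo the simple example earlier in this appendix and are placeholders for your own:

# /global/ims/server5/msg-stork/start-msg     Start all services for the instance
# /global/ims/server5/msg-stork/stop-msg      Stop all services for the instance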


Copyright © 2001 Sun Microsystems, Inc. Some preexisting portions Copyright © 2001 Netscape Communications Corp. All rights reserved.

Last Updated May 06, 2001