iPlanet Messaging Server 5.2 Installation Guide for UNIX



Chapter 4   High Availability


This chapter helps you determine which high availability (HA) model is right for you and describes how to set up your system to run high availability with Messaging Server.



High Availability Models

There are different high availability models that can be used with Messaging Server. Three of the more basic ones are asymmetric, symmetric, and N + 1 (N over 1).

Each of these models is described in greater detail in the following subsections, along with guidance on choosing a model and system down time calculations.

Note that different HA products may or may not support different models. Refer to the HA documentation to determine which models are supported.


Asymmetric

The basic asymmetric or "hot standby" high availability model (Figure 4-1) consists of two clustered host machines, or "nodes." A single logical IP address and associated host name are shared by the two nodes.

In this model, only one node is active at any given time; the backup or hot standby node remains idle most of the time. A single shared disk array between both nodes is configured and is mastered by the active or "primary" node. The message store partitions and Mail Transport Agent (MTA) queues reside on this shared volume.

Figure 4-1    Asymmetric High Availability Model


Before failover, the active node is Physical-A. Upon failover, Physical-B becomes the active node and the shared volume is switched so that it is mastered by Physical-B. All services are stopped on Physical-A and started on Physical-B.

The advantage of this model is that the backup node is dedicated and completely reserved for the primary node; there is no resource contention on the backup node when a failover occurs. The disadvantage is that the backup node stays idle most of the time, leaving that resource underutilized.


Symmetric

The basic symmetric or "dual services" high availability model consists of two hosting machines, each with its own logical IP address. Each logical node is associated with one physical node, and each physical node controls one disk array with two storage volumes. One volume is used for its local message store partitions and MTA queues, and the other is a mirror image of its partner's message store partitions and MTA queues.

In the symmetric high availability model (Figure 4-2), both nodes are active concurrently, and each node serves as a backup node for the other. Under normal conditions, each node runs only its own instances of the messaging server.

Figure 4-2    Symmetric High Availability Model


Upon failover, the services on the failing node are shut down and restarted on the backup node. At this point, the backup node is running all instances of Messaging Server from both nodes and is managing two separate volumes.

The advantage of this model is that both nodes are active simultaneously, thus fully utilizing machine resources. However, during a failure, the backup node will have more resource contention as it runs services for all instances of the Messaging Server from both nodes. Therefore, you should repair the failed node as quickly as possible and switch the servers back to their dual services state.

This model also provides a backup storage array; in the event of a disk array failure, its mirror image can be picked up by the service on its backup node.


N+1 (N Over 1)

The N + 1 or "N over 1" model operates in a multi-node asymmetrical configuration. N logical host names and N shared disk arrays are required. A single backup node is reserved as a hot standby for all the other nodes. The backup node is capable of concurrently running all of the Messaging Server instances from the N nodes.

Figure 4-3 illustrates the basic N + 1 high availability model.

Figure 4-3    N + 1 High Availability Model

Upon failover of one or more active nodes, the backup node picks up the failing node's responsibilities.

The advantages of the N + 1 model are that the server load can be distributed to multiple nodes and that only one backup node is necessary to sustain all the possible node failures. Thus, the machine idle ratio is 1/N as opposed to 1/1, as is the case in a single asymmetric model.


Which High Availability Model Is Right for You?

Table 4-1 summarizes the advantages and disadvantages of each high availability model. Use this information to help you determine which model is right for you.

Table 4-1    High Availability Model Advantages and Disadvantages

Asymmetric
    Advantages: simple configuration; backup node is 100 percent reserved.
    Disadvantages: machine resources are not fully utilized.
    Recommended user: a small service provider with plans to expand in the future.

Symmetric
    Advantages: better use of system resources; higher availability.
    Disadvantages: resource contention on the backup node; mirrored disks reduce disk write performance.
    Recommended user: a medium-sized service provider with no expansion plans for its backup systems in the near future.

N + 1
    Advantages: load distribution; easy expansion.
    Disadvantages: configuration complexity.
    Recommended user: a large service provider who requires distribution with no resource constraints.


System Down Time Calculations

Table 4-2 illustrates the probability that on any given day the mail service will be unavailable due to system failure. These calculations assume that, on average, each server goes down for one day every three months due to either a system crash or server hang, and that each storage device goes down for one day every 12 months. They also ignore the small probability of both nodes being down simultaneously.

Table 4-2    System Down Time Calculations 

Single server (no high availability)
    Pr(down) = (4 days of system down + 1 day of storage down)/365 = 1.37%

Asymmetric
    Pr(down) = (0 days of system down + 1 day of storage down)/365 = 0.27%

Symmetric
    Pr(down) = (0 days of system down + 0 days of storage down)/365 = (near 0)

N + 1
    Pr(down) = (0 days of system down + 1 day of storage down)/(365 x N) = 0.27%/N
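These figures follow from simple arithmetic on the stated assumptions (four days of server down time and one day of storage down time per year). As a sketch, the calculations can be reproduced with a short awk program; the value N=4 is an arbitrary example, not a recommendation:

```shell
awk 'BEGIN {
  # Single server: 4 days of system down + 1 day of storage down per year.
  printf "Single server: %.2f%%\n", (4 + 1) / 365 * 100
  # Asymmetric: failover hides system down time; storage remains a single point.
  printf "Asymmetric:    %.2f%%\n", (0 + 1) / 365 * 100
  # Symmetric: mirrored storage hides storage down time as well.
  printf "Symmetric:     %.2f%%\n", (0 + 0) / 365 * 100
  # N+1: the storage down time is spread across N nodes (N=4 assumed here).
  n = 4
  printf "N+1 (N=4):     %.2f%%\n", (0 + 1) / (365 * n) * 100
}'
```

The last figure, roughly 0.07 percent, matches 0.27%/N for N=4.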



Installing High Availability



This section provides the information you need to install the Veritas Cluster Server 1.1 or later, Sun Cluster 2.2, or Sun Cluster 3.0 Update 1 or 2 high availability clustering software and prepare it for use with the Messaging Server. (Refer to your Veritas or Sun Cluster documentation for detailed installation instructions and information as needed.) The following topics are covered in this section:


Cluster Agent Installation

A cluster agent is a Messaging Server program that runs under the cluster framework. During the Messaging Server 5.2 installation process, if you choose to install the High Availability component, the setup program automatically detects the clustering software installed on your server and installs the appropriate set of agent programs in the appropriate location.


Note If you are installing Veritas Cluster Server or Sun Cluster 2.2, the setup program copies only one set of agents (VCS 1.1 or SC 2.2) onto your server, so be sure to install and configure only one type of clustering software on your server.



For Veritas Clustering Software, the agent type file is located in the /etc/VRTSvcs/conf/config directory and the agent programs are in the /opt/VRTSvcs/bin/MsgSrv directory. For Sun Cluster 2.2, the agents are installed in the /opt/SUNWcluster/ha/msg directory.

Some items of note regarding the Messaging Server installation and high availability (Veritas Cluster Server, Sun Cluster 2.2, and Sun Cluster 3.0 U1 and U2):

  • High availability for the Messaging Server is not installed by default; be sure to select High Availability Components from the Custom Installation menu if you install Sun Cluster 2.2 or Veritas Cluster Server 1.1 or later.

    Note If you install Sun Cluster 3.0 U1 or U2, select Custom Installation as your installation type; however, do not select the Sun Cluster 2.2/Veritas HA component during Messaging Server installation.



  • When running the installation, make sure that the HA logical host names and associated IP addresses for the Messaging and Directory servers are functioning (that is, active), because portions of the installation make TCP connections using them (for example, to provision the directory server with configuration information). If the Messaging and Directory servers are to run on the same host, they may use the same logical host name and IP address. Run the installation on the cluster node currently pointed at by the HA logical host name for the messaging server.

  • When you are asked for the server-root (see Step 5 in Chapter 3, "Installation Questions"), be sure that the server-root is on the shared file system; otherwise, high availability will not work correctly. For example, after failing over to another node, the servers will no longer see the data accumulated by the servers on the failed node.

  • When you are asked for the fully-qualified domain name of the messaging server host (see Step 11 in Chapter 3, "Installation Questions"), be sure to specify the fully-qualified HA logical host name for the messaging server. During the install, TCP connections using this logical host name will be attempted.

  • When you are asked for the IP address of Messaging Server (see Step 35 in Chapter 3, "Installation Questions"), be sure to specify the IP address associated with the logical host name for Messaging Server. Do not use the IP address for the physical host.

If you are using the Veritas Cluster Server 1.1 or later high availability software, go to Veritas Cluster Server Agent Installation. If you are using the Sun Cluster 2.2 high availability software, go to Sun Cluster 2.2 Agent Installation. If you are using Sun Cluster 3.0 U1 or U2 high availability software, go to Sun Cluster 3.0 U1 and U2 Agent Installation.


Veritas Cluster Server Agent Installation

After you decide which high availability model you want to implement, you are ready to install the Veritas Cluster Server software and prepare it for use with Messaging Server. The procedures in this section must be completed before you install the Messaging Server.


Note It is assumed that you are already familiar with Veritas Cluster Server concepts and commands.



The following topics are covered in this section:

The example used in this section is based on a simple, two-node cluster (the asymmetric model).

The basic asymmetric model requires one public and two private network interfaces and one shared disk. The private network interface is used for cluster communications. The shared disk must be connected to both nodes.


Pre-Installation Instructions

This section describes the procedures for installing the Veritas Cluster Server and preparing it for use with the Messaging Server.

To install and set up the Veritas Cluster Server for use with Messaging Server:

  1. Install Veritas Cluster Server 1.1 or later on both nodes.

  2. Configure and start the Veritas Cluster Server.

    Note For these first two steps, you should refer to your Veritas Cluster Server documentation for detailed information and instructions.



  3. Create the /etc/VRTSvcs/conf/config/main.cf file.

  4. Create a service group called iMS5.

    Within this service group:

    1. Create the network resource (specify NIC as the resource type).

      Use the public network interface name for the Device attribute (for example, hme0).

    2. Create the logical_IP resource (specify IP as the resource type).

      Use the logical IP for the Address attribute and the public interface for the Device attribute.

    3. Create a sharedg resource (specify DiskGroup as the resource type).

      Use the disk group name for the DiskGroup attribute.

    4. Create a mountshared resource (specify Mount as the resource type).

      Use the shared device name for the BlockDevice attribute, the mount point for the MountPoint attribute, and the appropriate file system type for the FSType attribute.

  5. Bring all of the above resources online on the primary (active) node.

  6. Set up the dependency tree as follows: the logical_IP resource depends on the network resource, and the mountshared resource depends on the sharedg resource. Your dependency tree should look like this:
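Under assumed placeholder values (cluster nodes node-a and node-b, disk group ims_dg, block device /dev/vx/dsk/ims_dg/vol01, mount point /ims, and logical IP address 192.168.13.10), the resources and dependencies from steps 4 through 6 might be sketched in main.cf roughly as follows. This is an illustration of the structure, not a verified configuration; consult your Veritas Cluster Server documentation for exact syntax.

```
group iMS5 (
    SystemList = { node-a, node-b }
)

// Public network interface, as in the substeps of step 4 (hme0 example).
NIC network (
    Device = hme0
)

// Logical IP address; 192.168.13.10 is a placeholder.
IP logical_IP (
    Device = hme0
    Address = "192.168.13.10"
)

// Shared disk group; ims_dg is a placeholder name.
DiskGroup sharedg (
    DiskGroup = ims_dg
)

// Shared file system; device, mount point, and FSType are placeholders.
Mount mountshared (
    MountPoint = "/ims"
    BlockDevice = "/dev/vx/dsk/ims_dg/vol01"
    FSType = vxfs
)

logical_IP requires network
mountshared requires sharedg
```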




Installing High Availability

At this point, you have successfully installed Veritas Cluster Server and have prepared it for the Messaging Server installation.

On the primary node, install Messaging Server and High Availability. To do so, follow these steps:

  1. Run the setup program on the primary node to start the Messaging Server installation:

    ./setup

  2. Select Custom installation from the list of installation types.

  3. Select the Sun Cluster 2.2/Veritas HA component in addition to the Messaging Server components you are installing on the primary node.

On the secondary node, you should only install High Availability. To do so, follow these steps:

  1. Conduct a failover to the secondary node.

  2. Run the setup program on the secondary node to start the Messaging Server installation:

    ./setup

  3. From the list of installation types, select Custom installation, then select just the Sun Cluster 2.2/Veritas HA component from the iPlanet Messaging Applications.

During the Messaging Server installation, the setup program checks to see if the Veritas Cluster Server has been installed and properly configured. If so, then the appropriate high availability files are installed.


Post-Installation Instructions

After installing high availability, follow these post-installation steps for both nodes:

  1. Stop the Veritas Cluster Server.

  2. Add the following line in main.cf:

    include "MsgSrvTypes.cf"

  3. Start the Veritas Cluster Server.

  4. Create a resource named mail (specify MsgSrv as the resource type) and enter the instance name (InstanceName) and the log host name (LogHostName).

  5. Set the logical_IP and mountshared resources as children of the mail resource.

    This means that the mail resource depends on both the logical_IP and mountshared resources.

    Your dependency tree should now look like this:



You are now ready. On any node, bring the mail resource online; this automatically starts the mail server on that node.
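As a sketch of the post-installation steps above (using a hypothetical instance name msg1; the mail resource and logical_IP and mountshared resource names follow the earlier example), the corresponding main.cf additions might look roughly like this. Again, this is an illustration rather than a verified configuration:

```
// MsgSrv resource for the Messaging Server; msg1 is a placeholder
// instance name (without the msg- prefix).
MsgSrv mail (
    InstanceName = msg1
    LogHostName = mail
)

// The mail resource depends on both logical_IP and mountshared.
mail requires logical_IP
mail requires mountshared
```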


Configuring High Availability for Veritas Cluster Server

To configure high availability for the Veritas Cluster Server, you can modify the parameters in the MsgSrvTypes.cf configuration file. Below is the relevant entry:

type MsgSrv (
   static int MonitorInterval = 180
   static int MonitorTimeout = 180
   static int OnlineRetryLimit = 1
   static int OnlineWaitLimit = 1
   static int RestartLimit = 2
   static str ArgList[] = { State, InstanceName, LogHostName, PrtStatus, DebugMode }
   NameRule = resource.InstanceName
   str InstanceName
   str LogHostName
   str PrtStatus
   str DebugMode
)

Table 4-3 describes the various parameters:

Table 4-3    MsgSrv Parameters

Parameter

Description

MonitorInterval  

The interval in seconds between probes.  

MonitorTimeout  

The duration in seconds before a probe times out.  

OnlineRetryLimit  

The number of times to retry bringing the resource online.  

OnlineWaitLimit  

The number of MonitorIntervals to wait after completing the online procedure and before the resource comes online.  

RestartLimit  

The number of restarts before the resource is failed over.  

Table 4-4 describes the various arguments:

Table 4-4    MsgSrv Arguments

Parameter

Description

State  

Indicates whether the service is online on this system. This value cannot be changed by the user.  

InstanceName  

The Messaging Server's instance name without the msg- prefix.  

LogHostName  

The logical host name that is associated with this instance.  

PrtStatus  

If set to TRUE, the online status is printed to the Veritas Cluster Server log file.  

DebugMode  

If set to TRUE, the debugging information is sent to the Veritas Cluster Server log file.  


Sun Cluster 2.2 Agent Installation

After you decide which high availability model you want to implement, you are ready to install the Sun Cluster high availability software and prepare it for use with Messaging Server. The following topics are covered in this section:

The procedures in this section must be completed before you install the Messaging Server.

Note It is assumed that you are already familiar with Sun Cluster concepts and commands.



The example used in this section is based on a simple, two-node cluster (the asymmetric model).

The basic asymmetric model requires one public and two private network interfaces and one shared disk. The private network interface is used for cluster communications. The shared disk must be connected to both nodes.


Pre-Installation Instructions

This section describes the procedures for installing the Sun Cluster software and preparing it for use with Messaging Server.

To install and set up the Sun Cluster for use with Messaging Server:

  1. Install Sun Cluster 2.2 on both nodes.

    Note The HA fault monitor agent requires the tcpclnt binary file in the Sun Cluster 2.2 SUNWscpro package. You must install this probing feature; otherwise, unresponsive messaging servers will not be detected.



  2. Configure and start the Sun Cluster so you have access to both the logical IP and the shared volume.

    Note For these first two steps, you should refer to your Sun Cluster documentation for detailed information and instructions.




Installing High Availability

At this point, you have successfully installed Sun Cluster software and have prepared it for the Messaging Server installation.

On the primary node, install Messaging Server and High Availability. To do so, follow these steps:

  1. Run the setup program on the primary node to start the Messaging Server installation:

    ./setup

  2. Select Custom installation from the list of installation types.

  3. Select the Sun Cluster 2.2/Veritas HA component in addition to the Messaging Server components you are installing on the primary node.

On the secondary node, you should only install High Availability. To do so, follow these steps:

  1. Switch the logical_IP and shared disk to the secondary node.

  2. Run the setup program on the secondary node to start the Messaging Server installation:

    ./setup

  3. From the list of installation types, select Custom installation, then select just the Sun Cluster 2.2/Veritas HA component from the iPlanet Messaging Applications.

During the Messaging Server installation, the setup program checks to see if the Sun Cluster software has been installed and properly configured. If so, then the appropriate high availability files are installed.


Post-Installation Instructions

You must perform the following on the secondary node:

  1. Conduct a failover to the secondary node.

  2. Copy the server-root/bin/msg/ha/sc/config/ims_ha.cnf file to the administrative file system mount point directory for this logical host (for example, /$LOGICAL_HOSTNAME).

  3. Register the Messaging Server data service by running the hareg -Y command. You must do this before using the data service.

  4. If you want to change the logical host timeout value, use the following command:

    scconf cluster_name -l seconds

    where cluster_name is the name of the cluster and seconds is the number of seconds you want to set for the timeout value. The number of seconds should be twice the number of seconds needed for the start to complete. For more information, refer to your Sun Cluster documentation.
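As a sketch of the doubling rule, assuming a start that takes about 300 seconds and a hypothetical cluster name ims-cluster:

```shell
# Assumed time in seconds for the start to complete; measure your own.
START_SECONDS=300
# The timeout should be twice the start time.
TIMEOUT=$((START_SECONDS * 2))
# ims-cluster is a placeholder cluster name.
echo "scconf ims-cluster -l $TIMEOUT"   # prints: scconf ims-cluster -l 600
```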


Directory Server Configuration

If you install and configure your Directory Server under the same server-root as the Messaging Server, no additional Sun Cluster agent files are needed. If not, you can use the existing Sun-supplied agent package, SUNWscnsl, which is supported by the Sun Cluster team at Sun.


Sun Cluster 3.0 U1 and U2 Agent Installation

This section describes how to install and configure the Messaging Server as a Sun Cluster 3.0 U1 or U2 (Update 1 or 2) Highly Available (HA) Data Service. The following topics are covered in this section:

Documentation for Sun Cluster 3.0 U1 and U2 can be found at:

http://docs.sun.com/ab2/coll.572.8/


Sun Cluster 3.0 U1 and U2 Prerequisites

This section presumes the following:

  • Sun Cluster 3.0 U1 or U2 is installed in a Solaris 8 environment with required patches.

  • The HA agent for Netscape Directory Server is installed (available on the Sun Cluster 3.0 U1 or U2 Agent CD-ROM). While use of the Netscape Directory Server HA agent is not required, it is strongly recommended, and this documentation assumes you are using it.

  • If the system is using shared disks, either Solstice DiskSuite or Veritas Volume Manager is used.

  • Veritas File System (VxFS) is not supported with Sun Cluster 3.0 U1.

    Note Veritas File System (VxFS) is now supported with Sun Cluster 3.0 U2.




Installing the Messaging Server HA support for Sun Cluster 3.0 U1 and U2

Each cluster node requires only one package to be installed to run the Messaging Server:

  • SUNWscims from the iPlanet Messaging Server CD in the directory solaris/iMS_sc30.

Install the SUNWscims package on each cluster node using pkgadd. For example:

# pkgadd -d . SUNWscims

Once the package is installed, you are ready to configure Messaging Server for HA.


Configuring the Messaging Server HA Support for Sun Cluster 3.0 U1 and U2

This section describes how to configure HA support for the iPlanet Messaging Server by describing different configuration examples:

After installing Messaging Server, be sure to review Binding IP Addresses for Single and Multiple Messaging Server Instances on a Server for additional configuration steps associated with configuring HA support.


Simple Example
This example assumes that the messaging and directory server run on the same cluster node and use the same HA logical host name and IP address. The physical host names are assumed to be mail-1 and mail-2, with an HA logical host name of mail. Figure 4-4 depicts the nested dependencies of the different HA resources you will create in configuring Messaging Server HA support.

Figure 4-4    A Simple iPlanet Messaging Server HA configuration


Before proceeding, make sure that the Messaging Server is shut down on all cluster nodes. The easiest way to check this is to issue the command:

# ps -ef | grep server-root

on each cluster node where server-root is the path to the Messaging Server top-level directory (for example, /global/ims/server5/). If no messaging servers are running, then you should see no processes other than your grep.

  1. Become the root user and open a console.

    All of the following Sun Cluster commands require that you have logged in as root. You will also want to have a console or window for viewing messages output to /dev/console.

  2. Add required resource types.

    Configure Sun Cluster to know about the resource types we will be using. This is done with the scrgadm -a -t command:

    # scrgadm -a -t SUNW.HAStorage
    # scrgadm -a -t SUNW.nsldap
    # scrgadm -a -t SUNW.ims

  3. Create a resource group for the Messaging Server instance.

    If you have not done so already, create a resource group and make it visible on the cluster nodes that will run the Messaging Server instance. The following command creates a resource group named IMS-RG, making it visible on the cluster nodes mail-1 and mail-2:

    # scrgadm -a -g IMS-RG -h mail-1,mail-2

    You may, of course, use whatever name you wish for the resource group.

  4. Create an HA logical host name resource.

    If you have not done so already, create and enable a resource for the HA logical host name, placing it in the resource group for the Messaging Server instance. The following command does so using the logical host name mail. Since the -j switch is omitted, the name of the resource created will also be mail.

    # scrgadm -a -L -g IMS-RG -l mail
    # scswitch -Z -g IMS-RG

  5. Install Messaging Server.

    Install Messaging Server using the HA logical host name created and enabled in Step 4. Make sure you replicate /etc/msgregistry.inf on the secondary node.

    For instructions on installing Messaging Server, see Chapter 2, "Installation Instructions."

    Note If you install Sun Cluster 3.0 U1 or U2, select Custom Installation as your installation type; however, do not select High Availability components during Messaging Server installation.



  6. Now, start the messaging servers with the start-msg command in the msgserver-root directory.

  7. Run the ha_ip_config script to bind the IP addresses for the msg-instance on the servers. For instructions on running the script, see Binding IP Addresses for Single and Multiple Messaging Server Instances on a Server.

  8. Run stop-msg ha in the msgserver-root directory to stop the messaging servers.

  9. Stop the directory and administration server processes with the following commands:

    msgserver-root/slapd-instance/stop-slapd
    msgserver-root/stop-admin

  10. Create an HA storage resource.

    Next, you need to create an HA storage resource for the file systems on which the messaging and directory server depend. The following command creates an HA storage resource named ha-storage and places the file system /global/ims/server5 under its control:

    # scrgadm -a -j ha-storage -g IMS-RG \
    -t SUNW.HAStorage \
    -x ServicePaths=/global/ims/server5

    The ServicePaths property is a comma-separated list of the mount points of the cluster file systems on which both the messaging and directory servers are dependent. In the above example, only one mount point, /global/ims/server5, is specified. If one of the servers is dependent on additional file systems, you can create an additional HA storage resource and indicate that additional dependency in Step 11 or Step 13.

  11. Create an HA LDAP resource.

    To your growing resource group, add a resource of type SUNW.nsldap to monitor the directory server. The Confdir_list extension property of SUNW.nsldap indicates the path to the directory server's top-level directory on the global file system. Note also that this resource is dependent upon both the HA logical host name and HA storage resources established in Step 4 and Step 10. However, since the SUNW.nsldap resource type specifies Network_resources_used in its resource type registration file, you do not need to explicitly specify the HA logical host name resource in the Resource_dependencies option below; you only need to specify the HA storage resource. The following command accomplishes all of this, naming the HA LDAP resource ha-ldap.

    # scrgadm -a -j ha-ldap -t SUNW.nsldap -g IMS-RG \
         -x Confdir_list=/global/ims/server5/slapd-mail \
         -y Resource_dependencies=ha-storage

  12. Enable the HA LDAP resource.

    Before proceeding with creating the HA Messaging Server resource, we must bring the HA LDAP resource online. This is because creating the HA Messaging Server resource attempts to validate the Messaging Server resource definition, which requires accessing the Messaging Server configuration information stored in the LDAP server.

    If you skipped Steps 3 through 5 because you had done them previously, then the IMS-RG resource group is partially online already. In that case, issue the following commands to enable the HA storage and LDAP resources:

    # scswitch -e -j ha-storage

    # scswitch -e -j ha-ldap

    If you did execute Steps 3 through 5, then instead use the command:

    # scswitch -Z -g IMS-RG

  13. Create an HA Messaging Server resource.

    It's now time to create the HA Messaging Server resource and add it to the resource group. This resource is dependent upon the HA logical name, HA storage, and HA LDAP resources. As with the HA LDAP resource, we do not need to specify the HA logical name resource. Moreover, since the HA LDAP resource is itself dependent upon the HA storage resource, we merely need to specify a dependency upon the HA LDAP resource.

    In creating the HA Messaging Server resource, we need to indicate the path to the Messaging Server top-level directory—the server-root path—as well as the name of the Messaging Server instance to make HA. These are done with the IMS_serverroot and IMS_instance extension properties as shown in the following command.

    # scrgadm -a -j ha-ims -t SUNW.ims -g IMS-RG \
              -x IMS_serverroot=/global/ims/server5 \
              -x IMS_instance=stork \
              -y Resource_dependencies=ha-ldap

    The above command makes an HA Messaging Server resource named ha-ims for the Messaging Server instance stork installed on the global file system at /global/ims/server5. The HA Messaging Server resource is dependent upon the HA LDAP resource named ha-ldap created in Step 11 above.

    If the Messaging Server instance has file system dependencies beyond that of the directory server, then you can create an additional HA storage resource for those additional file systems. Then include that additional HA storage resource name in the Resource_dependencies option of the above command.

  14. Enable the Messaging Server resource.

    It's now time to activate the HA Messaging Server resource, thereby bringing the messaging server online. To do this, use the command:

    # scswitch -e -j ha-ims

    The above command enables the ha-ims resource of the IMS-RG resource group. Since the IMS-RG resource group was previously brought online, the above command also brings ha-ims online.

  15. Verify that things are working.

    Use the scstat command to see if the IMS-RG resource group is online. You may want to look at the output directed to the console device for any diagnostic information. Also look in the syslog file, /var/adm/messages.

  16. Fail the resource group over to another cluster node.

    Manually fail the resource group over to another cluster node. Use the scstat command to see what node the resource group is currently running on ("online" on). For instance, if it is online on mail-1, then fail it over to mail-2 with the command:

    # scswitch -z -g IMS-RG -h mail-2


Complex Example
In this more complicated example, we consider the case where iPlanet Messaging Server is dependent upon the following:

  • An LDAP server running on the same node and containing configuration information.

  • An LDAP server running on a different node and containing user information.

  • Some additional file systems containing message store partitions and MTA message queues.

Figure 4-5 shows how these dependencies are realized using Sun Cluster resource groups. Key parameters of each resource are shown in the figure. The commands required to realize this configuration follow.

Figure 4-5    A complex iPlanet Messaging Server HA configuration


# scrgadm -a -t SUNW.nsldap
# scrgadm -a -t SUNW.ims

# scrgadm -a -g LDAP-USER-RG -h ldap-1,ldap-2   Create LDAP-USER-RG resource group

# scrgadm -a -L -g LDAP-USER-RG -l ldap   Create HA logical host name

# scswitch -Z -g LDAP-USER-RG    Bring LDAP-USER-RG online

     ...Install and configure directory server in /global/ids

# scrgadm -a -j ha-ids-disk -g LDAP-USER-RG \   Create HA storage resource
          -t SUNW.HAStorage \
          -x ServicePaths=/global/ids

# scrgadm -a -j ha-ldap-user -g LDAP-USER-RG \   Create HA LDAP resource
          -t SUNW.nsldap \
          -x Confdir_list=/global/ids/slapd-ldap \
          -y Resource_dependencies=ha-ids-disk

# scswitch -e -j ha-ids-disk    Bring rest of LDAP-USER-RG online
# scswitch -e -j ha-ldap-user

# scrgadm -a -g IMS-RG -h mail-1,mail-2    Create IMS-RG resource group

# scrgadm -a -L -g IMS-RG -l mail    Create HA logical host name

# scswitch -Z -g IMS-RG    Bring IMS-RG online

    ...Install and configure messaging server and 2nd directory server in /global/ims/server5

# scrgadm -a -j ha-ims-serverroot -g IMS-RG \   Create HA storage resource
          -t SUNW.HAStorage \
          -x ServicePaths=/global/ims/server5

# scrgadm -a -j ha-ldap-config -g IMS-RG \   Create HA LDAP resource
          -t SUNW.nsldap \
          -x Confdir_list=/global/ims/server5/slapd-mail \
          -y Resource_dependencies=ha-ims-serverroot

# scrgadm -a -j ha-ims-data -g IMS-RG  \    Create another HA storage resource
          -t SUNW.HAStorage \
          -x ServicePaths=/global/ims/store,/global/ims/queues

# scswitch -e -j ha-ims-serverroot   Bring LDAP online
# scswitch -e -j ha-ldap-config

# scrgadm -a -j ha-ims -g IMS-RG   \   Create HA Messaging Server resource
          -t SUNW.ims \
          -x IMS_serverroot=/global/ims/server5 \
          -x IMS_instance=stork \
          -y Resource_dependencies=ha-ldap-config,ha-ims-data \
          -y RG_dependencies=LDAP-USER-RG

# scswitch -e -j ha-ims-data    Bring Messaging Server online
# scswitch -e -j ha-ims





Additional Configuration Notes



If you are using the Symmetric or N + 1 high availability models, you should be aware of some additional issues during installation and configuration in order to prepare the cluster software for Messaging Server.

This section covers those issues and procedures for Veritas Cluster Server 1.1 or later, Sun Cluster 2.2, Sun Cluster 3.0 U1, and Sun Cluster 3.0 U2:


Binding IP Addresses for Single and Multiple Messaging Server Instances on a Server

Whether you run one instance or several instances of Messaging Server on a server, the correct IP address must be bound to each instance.


Note This section is applicable to Sun Cluster 2.2, Sun Cluster 3.0 U1 and U2, and Veritas Cluster Server version 1.1 or later.



The following section provides instructions on how to bind the IP address for each instance. If this is not done correctly, the instances could interfere with each other.

Part of configuring Messaging Server for HA involves configuring the interface address on which the Messaging Server processes bind and listen for connections. By default, the servers bind to all available interface addresses. In an HA environment, however, you want the servers to bind specifically to the interface address associated with an HA logical host name. (Were they to bind to all available interfaces, difficulties would arise when two different Messaging Server instances attempt to run on the same physical host.)

A script is therefore provided to configure the interface address used by the servers belonging to a given Messaging Server instance. Optionally, the script can also configure an LDAP server instance residing in the same Messaging Server root to use the same interface address. Note that the script identifies the interface address by means of the IP address that you have associated, or will associate, with the HA logical host name used by the servers.

If the LDAP server or servers you will be using are located on a different host, then the ha_ip_config script does not configure those LDAP servers. In general, they should not require additional configuration as a result of configuring the Messaging Server to be HA.

The script effects the configuration changes by modifying or creating the following configuration files:

  • In server-root/msg-instance/imta/config/dispatcher.cnf, it adds or changes the INTERFACE_ADDRESS option for the SMTP and SMTP Submit servers.

  • In server-root/msg-instance/imta/config/job_controller.cnf, it adds or changes the INTERFACE_ADDRESS option for the Job Controller.

  • In server-root/slapd-instance/config/slapd.conf, it adds or changes the listenhost option for the LDAP server (optional).

Finally, it sets the configutil service.listenaddr parameter used by the POP, IMAP, and Messenger Express HTTP servers.

Note that the original configuration files, if any, are renamed to *.pre-ha.
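For illustration, here is a sketch of what the script's changes might look like for a hypothetical logical IP address of 10.0.100.10; the exact sections and options present in your files will vary with your configuration:

```
! dispatcher.cnf (excerpt) -- hypothetical logical IP
[SERVICE=SMTP]
PORT=25
INTERFACE_ADDRESS=10.0.100.10

! job_controller.cnf (excerpt)
INTERFACE_ADDRESS=10.0.100.10
```

After the script has run, you can inspect the corresponding configutil setting with configutil -o service.listenaddr.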

Run the script as follows:

  1. Become root.

  2. Execute server-root/bin/msg/install/bin/ha_ip_config

  3. The script presents the questions described below. The script may be aborted by typing control-d in response to any of the questions. Default answers to the questions will appear within square brackets, [ ]. To accept the default answer, simply press the RETURN key.

    1. Logical IP address: Specify the IP address assigned to the logical host name which the Messaging Server instance will be using. The IP address must be specified in dotted decimal form, for example, 10.0.100.10.

    2. Messaging Server root: Specify the absolute path to the top-level directory in which Messaging Server is installed. Within this directory reside the Messaging Server instances, each in a msg-* subdirectory.

    3. Messaging Server instance name: Specify the name of the Messaging Server instance to configure. Do not include the leading msg- in the instance name.

    4. Also configure an LDAP server instance in the same Messaging Server root: Answer "yes" if you would like to configure an LDAP server instance located in the same Messaging Server root. The LDAP server instance will be configured with the same IP address as the Messaging Server instance. Answer "no" to skip configuring an LDAP server instance.

      This question will not appear if the Messaging Server root does not contain any subdirectories whose name begin with slapd-.

    5. LDAP instance name: Specify the name of the LDAP server instance to configure. Omit the leading slapd- from the instance name.

      This question will not appear if you answered "no" to the previous question (4) on configuring an LDAP server instance.

    6. Do you wish to change any of the above choices: Answer "no" to accept your answers and effect the configuration change. Answer "yes" to alter your answers.

  4. If you are running, or might run, more than one instance of Messaging Server on the same node, edit the job_controller.cnf files located in the server-root/msg-instance/imta/config/ directories. Ensure that each instance uses a different TCP port number for its Job Controller. This is done via the TCP_PORT option setting in that file.
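For example, with two hypothetical instances msg-one and msg-two on the same node, the port assignments might look like the following (the port numbers are illustrative only):

```
! server-root/msg-one/imta/config/job_controller.cnf
TCP_PORT=27442

! server-root/msg-two/imta/config/job_controller.cnf
TCP_PORT=27443
```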

    Note If you do not change the IP address of the physical host name to the IP address of the logical host name, you will only have access to the Administration Server on the physical host that you specified during Messaging Server installation.

    To set the IP address of the logical host name, you must choose the Custom installation option when you install Messaging Server on a cluster. Question 35 allows you to bind the Administration Server to a specific IP address.

    If you chose the Typical installation option, you can still change the Administration Server's IP address from that of the physical host to that of the logical host. To do so, use the admin_ip.pl utility. For more information on this utility, consult your iPlanet Console documentation at: http://docs.iplanet.com/docs/manuals/console.html




Testing Nodes

Before proceeding, take the time to ensure that the iPlanet Messaging Server can be started and stopped on each node in the cluster. Begin by testing on the node on which you installed the Messaging Server. Then, fail the logical host name over to another cluster node with the command (Sun Cluster 3.0 U1 and U2):

# scswitch -z -g IMS-RG -h name-of-physical-host-to-failover-to

When you are done testing, be sure to shut down the Messaging Server before configuring it for HA. Do the same for the directory server if it is running on a different host or is otherwise separate from Messaging Server.
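A minimal test pass on a two-node cluster, using the example node and resource group names from this chapter, might look like the following (Sun Cluster 3.0 U1 and U2):

```
# scstat -g                          Check which node IMS-RG is currently online on
# scswitch -z -g IMS-RG -h mail-2    Fail IMS-RG over to mail-2
# scstat -g                          Verify IMS-RG is now online on mail-2
# scswitch -z -g IMS-RG -h mail-1    Fail back to the original node
```

After each switch, start and stop the Messaging Server on the node hosting the resource group to confirm it runs correctly there.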


Making Additional Messaging Server Instances Highly Available

If you are using Veritas Cluster Server 1.1 or later, you must create a second service group in addition to the iMS5 group you created earlier. This group should have the same set of resources and the same dependency tree as iMS5.

If you are using Sun Cluster 2.2, create another logical host which consists of a different logical IP and a shared volume. The new instance can then be installed on this volume.


Note When bringing up Sun Cluster 2.2 using the hareg -Y command, be sure there is only one instance on each node. Sun Cluster 2.2 does not allow you to bring up multiple logical IPs on one node using this command.



For Sun Cluster 3.0 U1 or U2, whether you create another resource group depends on how the additional Messaging Server instance will be used. If failover of the additional instance is to be independent of the existing instance, create a new resource group for it. If, however, the additional instance should fail over whenever the existing instance does, you may want to use the same resource group for both instances.



Uninstalling High Availability



This section describes how to uninstall high availability. The following topics are covered:


Uninstalling Veritas Cluster Server and Sun Cluster 2.2

The High Availability uninstall instructions differ depending on whether you are removing Veritas Cluster Server or Sun Cluster. If you are using the Veritas Cluster Server high availability software, go to Uninstalling High Availability for Veritas Cluster Server. If you are using the Sun Cluster 2.2 high availability software, go to Uninstalling High Availability for Sun Cluster 2.2.


Uninstalling High Availability for Veritas Cluster Server

To uninstall the high availability components for Veritas Cluster Server:

  1. Bring the iMS5 service group offline and disable its resources.

  2. Remove the dependencies between the mail resource, the logical_IP resource, and the mountshared resource.

  3. Bring the iMS5 service group back online so the sharedg resource is available.

  4. If you are using the dirsync option, remove the dirsync entries from the cron job table on both nodes.

  5. Delete all of the Veritas Cluster Server resources created during installation.

  6. Stop the Veritas Cluster Server and, if no other instances exist, remove the following files on both nodes:

    /etc/VRTSvcs/conf/config/MsgSrvTypes.cf
    /opt/VRTSvcs/bin/MsgSrv/online
    /opt/VRTSvcs/bin/MsgSrv/offline
    /opt/VRTSvcs/bin/MsgSrv/clean
    /opt/VRTSvcs/bin/MsgSrv/monitor
    /opt/VRTSvcs/bin/MsgSrv/sub.pl

  7. Remove the Messaging Server entries from the /etc/VRTSvcs/conf/config/main.cf file on both nodes.

  8. Remove the /opt/VRTSvcs/bin/MsgSrv/ directory from both nodes.

  9. If the directory server is installed in the same server-root directory as Messaging Server, be sure the directory server is running.

  10. Perform the normal uninstall procedures as described in Appendix B, "Running the Uninstall Program."

  11. Remove the instance entry from the /etc/msgregistry.inf file on both nodes if multiple instances are installed; otherwise, remove the /etc/msgregistry.inf file on both nodes.


Uninstalling High Availability for Sun Cluster 2.2

To uninstall the high availability components for Sun Cluster 2.2:

  1. Run the following command to stop all processes:

    hareg -n

  2. If the directory server is installed in the same server-root directory as Messaging Server, be sure the directory server is running.

  3. Perform the normal uninstall procedures as described in Appendix B, "Running the Uninstall Program."

  4. Remove the instance entry from the /etc/msgregistry.inf file on both nodes if multiple instances are installed; otherwise, remove the /etc/msgregistry.inf file on both nodes.

  5. Run the following command:

    hareg -u ims50

    Note that you must perform Step 1 before performing this step.

  6. Remove the following:

    /opt/SUNWcluster/ha/msg/ims_common
    /opt/SUNWcluster/ha/msg/ims_fm_probe
    /opt/SUNWcluster/ha/msg/ims_start_net
    /opt/SUNWcluster/ha/msg/ims_stop_net

  7. Remove the ims_ha.cnf file from your system disk mount point directory for the logical host (for example, /$LOGICAL_HOST).


Uninstalling Messaging Server HA Support for Sun Cluster 3.0 U1 and U2

This section describes how to undo the HA configuration for Sun Cluster 3.0 U1 and U2. This section assumes the simple example configuration (described in the Simple Example). For other configurations, the specific commands (for example, Step 3) may be different but will otherwise follow the same logical order.

  1. Become the root user.

    All of the following Sun Cluster commands require that you be running as user root.

  2. Bring the resource group offline.

    To shut down all of the resources in the resource group, issue the command

    # scswitch -F -g IMS-RG

    This shuts down all resources within the resource group (for example, the Messaging Server, LDAP, and the HA logical host name).

  3. Disable the individual resources.

    Next, disable the resources one by one with the commands:

    # scswitch -n -j ha-ims
    # scswitch -n -j ha-ldap
    # scswitch -n -j ha-storage
    # scswitch -n -j mail

  4. Remove the individual resources from the resource group.

    Once the resources have been disabled, you may remove them one by one from the resource group with the commands:

    # scrgadm -r -j ha-ims
    # scrgadm -r -j ha-ldap
    # scrgadm -r -j ha-storage
    # scrgadm -r -j mail

  5. Remove the resource group.

    Once all the resources have been removed from the resource group, the resource group itself may be removed with the command:

    # scrgadm -r -g IMS-RG

  6. Remove the resource types (optional).

    Should you need to remove the resource types from the cluster, issue the commands:

    # scrgadm -r -t SUNW.ims
    # scrgadm -r -t SUNW.nsldap
    # scrgadm -r -t SUNW.HAStorage


Copyright © 2002 Sun Microsystems, Inc. All rights reserved.

Last Updated February 26, 2002