Sun Java System Messaging Server 6.3 Administration Guide

Procedure: To Configure Messaging Server with Sun Cluster HAStorage or HAStoragePlus—Generic Example

This section provides the generic steps for configuring Messaging Server for HA. After reviewing these steps, refer to the specific asymmetric or symmetric examples in the following sections. In these instructions the physical hosts are called mars and venus. The logical host name is meadow.

Figure 3–4 depicts the nested dependencies of the different HA resources you will create in configuring Messaging Server HA support.

  1. Become the superuser and open a console.

    All of the following Sun Cluster commands require that you have logged in as superuser. You will also want to have a console or window for viewing messages output to /dev/console.

  2. On all the nodes, install the required Messaging Server Sun Cluster Data Service agent package (SUNWscims).
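
    For example, assuming the agent packages have been copied to a directory on each node (the path below is only a placeholder for wherever the packages reside on your installation media), the package can be added with pkgadd:

    # pkgadd -d /path/to/agent/packages SUNWscims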

  3. On each node in the cluster, create the runtime user and group under which the Messaging Server will run.

    The user ID and group ID numbers must be the same on all nodes in the cluster. The runtime user ID is the user name under which Messaging Server runs. This name should not be root. The default is mailsrv. The runtime group ID is the group under which Messaging Server runs. The default is mail.

    Although the configure utility can create these names for you, you can also create them before running configure as part of the preparation of each node as described in this chapter. The runtime user and group ID names must be in the following files:

    • mailsrv, or the name you select, must be in /etc/passwd on all nodes in the cluster

    • mail, or the name you select, must be in /etc/group on all nodes in the cluster

    See 1.1 Creating UNIX System Users and Groups. A sample set of commands is shown below.
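
    For example, the following commands are a minimal sketch of creating the group and user with the default names, assuming neither already exists on the node; the numeric IDs shown are placeholders and simply must be identical on every node:

    # groupadd -g 76 mail
    # useradd -u 76 -g mail mailsrv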

  4. Add required resource types to Sun Cluster.

    Configure Sun Cluster to recognize the resource types you will be using. To register Messaging Server as a resource type, use the following command:


    # scrgadm -a -t SUNW.ims

    To register HAStoragePlus as a resource type, use this command:


    # scrgadm -a -t SUNW.HAStoragePlus

    To register HAStorage as a resource type instead, use this command:


    # scrgadm -a -t SUNW.HAStorage
  5. Create a failover resource group for the Messaging Server.

    If you have not already done so, create a resource group and make it visible on the cluster nodes that will run the Messaging Server. The following command creates a resource group named MAIL-RG and makes it visible on the cluster nodes mars and venus:

    # scrgadm -a -g MAIL-RG -h mars,venus

    You may, of course, use whatever name you wish for the resource group.

  6. Create an HA logical host name resource and bring it online.

    If you have not already done so, create and enable a resource for the HA logical host name, placing that resource in the resource group. The following commands do so using the logical host name meadow. Because the -j switch is omitted, the resource that is created is also named meadow. meadow is the logical host name by which clients communicate with the services in the resource group.


    # scrgadm -a -L -g MAIL-RG -l meadow
    # scswitch -Z -g MAIL-RG
  7. Create an HAStorage or HAStoragePlus resource.

    Next, create an HAStorage or HAStoragePlus resource for the file systems on which Messaging Server depends. The following command creates an HAStoragePlus resource named disk-rs and places the file systems mounted at disk_sys_mount_point-1 and disk_sys_mount_point-2 under its control:


    # scrgadm -a -j disk-rs -g MAIL-RG \
    -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=disk_sys_mount_point-1,disk_sys_mount_point-2 \
    -x AffinityOn=True

    SUNW.HAStoragePlus represents the device groups and the cluster and local file systems that are to be used by one or more data service resources. You add a resource of type SUNW.HAStoragePlus to a resource group and set up dependencies between the other resources and the SUNW.HAStoragePlus resource. These dependencies ensure that the data service resources are brought online after:

    • All specified device services are available (and collocated if necessary)

    • All specified file systems are mounted following their checks

    The FilesystemMountPoints extension property allows you to specify either global or local file systems, that is, file systems that are accessible either from all nodes of a cluster or from a single cluster node. Local file systems managed by a SUNW.HAStoragePlus resource are mounted on a single cluster node and require the underlying devices to be Sun Cluster global devices. SUNW.HAStoragePlus resources specifying local file systems can belong only to a failover resource group with affinity switchovers enabled. These local file systems can therefore be termed failover file systems. Both local and global file system mount points can be specified together.

    A file system whose mount point is present in the FilesystemMountPoints extension property is assumed to be local if its /etc/vfstab entry satisfies both of the following conditions (a sample entry is shown after the note below):

    • Non-global mount option

    • Mount at boot flag is set to no


    Note –

    Instances of the SUNW.HAStoragePlus resource type ignore the mount at boot flag for global file systems.
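
    As an illustration only, an /etc/vfstab entry for a failover file system might look like the following, with no global mount option and the mount-at-boot field set to no (the Solaris Volume Manager device paths and the mount point are placeholders):

    /dev/md/ms-ds/dsk/d30 /dev/md/ms-ds/rdsk/d30 /disk_sys_mount_point-1 ufs 2 no logging

    A cluster (global) file system entry, by contrast, carries the global mount option.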


    For the HAStoragePlus resource, the comma-separated FilesystemMountPoints list contains the mount points of the Cluster File Systems (CFS) or Failover File Systems (FFS) on which Messaging Server depends. In the above example, only two mount points, disk_sys_mount_point-1 and disk_sys_mount_point-2, are specified. If one of the servers depends on additional file systems, you can create an additional HA storage resource and indicate this additional dependency in Step 15.

    For HAStorage use the following:


    # scrgadm -a -j disk-rs -g MAIL-RG \
    -t SUNW.HAStorage \
    -x ServicePaths=disk_sys_mount_point-1,disk_sys_mount_point-2 \
    -x AffinityOn=True

    For the HAStorage resource, the comma-separated ServicePaths list contains the mount points of the cluster file systems on which Messaging Server depends. In the above example, only two mount points, disk_sys_mount_point-1 and disk_sys_mount_point-2, are specified. If one of the servers depends on additional file systems, you can create an additional HA storage resource and indicate this additional dependency in Step 15.

  8. Install the required Messaging Server packages on the primary node. Choose the Configure Later option.

    Use the Communications Suite installer to install the Messaging Server packages.

    For symmetric deployments: Install the Messaging Server binaries and configuration data on file systems mounted on a shared disk of the Sun Cluster. For example, the Messaging Server binaries could be under /disk_sys_mount_point-1/SUNWmsgsr and the configuration data could be under /disk_sys_mount_point-2/config.

    For asymmetric deployments: Install Messaging Server binaries on local file systems on each node of the Sun Cluster. Install configuration data on a shared disk. For example, the configuration data could be under /disk_sys_mount_point-2/config.

  9. Configure the Messaging Server. See 1.3 Creating the Initial Messaging Server Runtime Configuration.

    In the initial runtime configuration, you are asked for the Fully Qualified Host Name. You must use the HA logical hostname instead of the physical hostname.

    The initial runtime configuration also asks you to specify a configuration directory. Be sure to use the shared disk directory path of your HAStorage or HAStoragePlus resource.

  10. Run the ha_ip_config script to set service.listenaddr and service.http.smtphost and to configure the dispatcher.cnf and job_controller.cnf files for high availability.

    The script ensures that the logical IP address, rather than the physical IP address, is set for these parameters and files. It also enables the watcher process (sets local.watcher.enable to 1) and the automatic restart process (sets local.autorestart to 1).

    For instructions on running the script, see 3.4.4 Binding IP Addresses on a Server.

    The ha_ip_config script should be run only once, on the primary node.
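
    As a sketch only, assuming the Messaging Server is installed under msg-svr-base as described above, the script is typically found in the installation's sbin directory (verify the exact location and prompts in 3.4.4 Binding IP Addresses on a Server):

    # msg-svr-base/sbin/ha_ip_config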

  11. Modify the imta.cnf file and replace all occurrences of the physical hostname with the logical hostname of the cluster.
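
    For example, with physical host mars and logical host meadow, and assuming the configuration resides under the shared path used earlier (a placeholder here), you could locate and replace the occurrences roughly as follows, reviewing the edited file before moving it into place:

    # cd /disk_sys_mount_point-2/config
    # grep mars imta.cnf
    # sed 's/mars/meadow/g' imta.cnf > imta.cnf.new
    # mv imta.cnf.new imta.cnf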

  12. Fail the resource group from the primary to the secondary cluster node in order to ensure that the failover works properly.

    Manually fail the resource group over to another cluster node. (Be sure you have superuser privileges on the node to which you failover.)

    Use the scstat command to see what node the resource group is currently running on (“online” on). For instance, if it is online on mars, then fail it over to venus with the command:

    # scswitch -z -g MAIL-RG -h venus
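
    To confirm which node the resource group is online on before and after the switch, you can, for example, run:

    # scstat -g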

    If you are upgrading your first node, you will install through the Communications Suite Installer and then configure Messaging Server. You will then fail over to the second node, where you will install the Messaging Server packages through the Communications Suite Installer, but you will not have to run the Initial Runtime Configuration Program (configure) again. Instead, you can use the useconfig utility.

  13. Install the required Messaging Server packages on the secondary node. Choose the Configure Later option.

    After failing over to the second node, install the Messaging Server packages using the Communications Suite Installer.

    For symmetric deployments: Do not install the Messaging Server packages; the binaries installed on the shared disk in Step 8 are used.

    For asymmetric deployments: Install the Messaging Server binaries on the local file system of this node.

  14. Run useconfig on the second node of the cluster.

    The useconfig utility allows you to share a single configuration between multiple nodes in an HA environment. You do not need to run the initial runtime configuration program (configure) again. Instead, use the useconfig utility (see 3.3.3 Using the useconfig Utility).
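
    As a sketch only (see 3.3.3 Using the useconfig Utility for the exact syntax), the utility is typically run from the Messaging Server sbin directory on the second node and pointed at the shared configuration directory, for example:

    # msg-svr-base/sbin/useconfig /disk_sys_mount_point-2/config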

  15. Create an HA Messaging Server resource.

    It’s now time to create the HA Messaging Server resource and add it to the resource group. This resource is dependent upon the HA logical host name and HA disk resource.

    In creating the HA Messaging Server resource, you need to indicate the path to the Messaging Server top-level directory, the msg-svr-base path. This is done with the IMS_serverroot extension property, as shown in the following command.


    # scrgadm -a -j mail-rs -t SUNW.ims -g MAIL-RG \
          -x IMS_serverroot=msg-svr-base \
          -y Resource_dependencies=disk-rs,meadow

    The above command creates an HA Messaging Server resource named mail-rs for the Messaging Server that is installed in the msg-svr-base directory (the IMS_serverroot). The HA Messaging Server resource is dependent upon the HA disk resource disk-rs as well as the HA logical host name meadow.

    If the Messaging Server has additional file system dependencies, then you can create an additional HA storage resource for those file systems. Be sure to include that additional HA storage resource name in the Resource_dependencies option of the above command.

  16. Enable the Messaging Server resource.

    It’s now time to activate the HA Messaging Server resource, thereby bringing the Messaging Server online. To do this, use the following command:

    # scswitch -e -j mail-rs

    The above command enables the mail-rs resource of the MAIL-RG resource group. Because the MAIL-RG resource group was previously brought online, the above command also brings mail-rs online.

  17. Verify that things are working.

    Use the scstat -pvv command to see if the MAIL-RG resource group is online.

    You may also want to look at the output directed to the console device for any diagnostic information. Also look in the syslog file, /var/adm/messages. For more debugging options and information, refer to 3.4.3.1 How to Enable Debugging on Sun Cluster.
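
    You can also check that clients can reach the services through the logical host name, for example by connecting to the SMTP port:

    # telnet meadow 25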