8 Configuring Instant Messaging Server for High Availability

This chapter describes how to configure Oracle Communications Instant Messaging Server for high availability (HA).

Overview of High Availability for Instant Messaging Server

You can use server pooling to provide high availability (HA) for your Instant Messaging Server deployment. Server pools provide redundancy so that if one server in the pool fails, affected clients can reconnect and continue their sessions through another server in the pool with a minimum of inconvenience. Additionally, if you set up your deployment with load balancers, users can immediately reconnect and be directed by a load balancer to another node in the pool. You can also configure an Instant Messaging multiplexor with a list of Instant Messaging Server hosts for failover.

Note:

In Instant Messaging Server versions prior to 10.0, Oracle Solaris Cluster was the recommended HA solution. As of Instant Messaging Server 10.0, Oracle Solaris Cluster is deprecated.

About Server Pooling

Server pooling enables you to support millions of users within a single domain. By using a server pool, you can share a domain across several servers in a server pool. In addition, you can use a load balancer to help manage server utilization in the pool.

By creating a server pool, the number of users you can support in an Instant Messaging Server deployment is no longer constrained by the capacity of a single server system. Instead, you can use the resources of several systems to support the users in a single domain. In addition, server pools provide redundancy so that if one server in the pool fails, affected clients can reconnect and continue their sessions through another server in the pool with a minimum of inconvenience. Deploying more than one server in a server pool creates a multi-node deployment.

You create a server pool by configuring the Instant Messaging servers to communicate over the server-to-server port and get user data from the same LDAP directory. Once you have configured the servers, you must configure the client resources to point to the load balancer, or load director, instead of a single node's host and port.

Caution:

While it is possible to use a shared file system instead of an LDAP directory to store user properties, doing so negatively impacts performance and manageability. For this reason, only LDAP storage is supported for server pools.

To ensure that all servers within a server pool have consistent data, the following information is replicated among all servers in the pool:

  • Routing information for end users

  • Conference membership and configuration

  • Multi-party conference messages

The following information is not replicated:

  • One-on-one chat messages

  • Presence subscriptions and notifications

If you are enforcing policy through access control files in your deployment, the content of the access control files must be the same among all servers in a server pool. See Instant Messaging Server Security Guide for more information.
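
For example, after you change an access control file on one server, you might copy it to each peer server in the pool. The directory and file name in the following sketch are hypothetical placeholders; substitute the access control files and paths used in your deployment:

scp /local/im/default/config/policy.acl root@iimB.siroe.com:/local/im/default/config/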

Availability in an Instant Messaging Server Pool

If a node in a server pool goes down, all currently connected clients are disconnected and the sessions and resources become unavailable. If you set up your deployment with load balancers, users can immediately reconnect and be directed by a load balancer to another node in the pool. When they do so, they do not need to recreate conferences or news channels as this information is shared between servers in the pool. In addition, one-to-one chat sessions can be continued after the user is directed to another node in the pool.

Configuring Server-to-Server Communication Between Instant Messaging Servers

This section describes how to enable communication between two Instant Messaging servers, or peers, in a server pool. You must configure all servers in the pool with information about all other servers in the pool.

Table 8-1 lists the configuration properties and their values used to set up communication for two example Instant Messaging servers in a server pool, iimA.siroe.com and iimB.siroe.com.

See "Configuration Properties" for more information on the configuration properties.

Table 8-1 Example Configuration Information for Two Instant Messaging Servers in a Server Pool

iim_server.serverid

  Value for Server A: iimA.siroe.com
  Value for Server B: iimB.siroe.com
  Notes: In a server pool, this ID is used to support the dialback mechanism and is not used for authentication. This value should be unique within the server pool.

iim_server.password

  Value for Server A: secretforiimA
  Value for Server B: secret4iimB
  Notes: None

iim_server.domainname

  Value for Server A: siroe.com
  Value for Server B: siroe.com
  Notes: Peer servers within a server pool share the same default domain.


Note:

When open federation is enabled, do not use the host name as the server ID. For example, the property iim_server.serverid should not be set to the host name.

You define coserver properties by running the imconfutil add-coserver command. The add-coserver command enables you to set the server ID, the password used to authenticate this coserver, the coserver host name, the domain used by the coserver, and whether SSL is required.

After adding a coserver, you can retrieve its properties by using the imconfutil get-coserver-prop command. If you need to modify an existing coserver property, use the imconfutil set-coserver-prop command. To remove a coserver, use the imconfutil delete-coserver command. If you need to verify the password of a coserver, use the imconfutil verify-coserver-pass command. To see a listing of all configured coservers, use the imconfutil list-coservers command.
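
For example, the following sketch shows how these subcommands might be invoked, assuming that they accept the same -c configuration-file and id= arguments as the add-coserver example later in this chapter; verify the exact arguments for your release:

imconfutil list-coservers -c InstantMessaging_home/config/iim.conf.xml
imconfutil get-coserver-prop -c InstantMessaging_home/config/iim.conf.xml id=coserver1
imconfutil set-coserver-prop -c InstantMessaging_home/config/iim.conf.xml id=coserver1 host=im2.example.com
imconfutil delete-coserver -c InstantMessaging_home/config/iim.conf.xml id=coserver1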

See "Setting Up Communication Between Two Instant Messaging Servers in a Server Pool" for more information on coserver configuration.

Setting Up Communication Between Two Instant Messaging Servers in a Server Pool

The following example shows how to set up coservers im1.example.com and im2.example.com.

  1. Perform the following commands on host1 (im1.example.com).

    1. Set the iim_server.serverid and iim_server.password configuration properties.

      imconfutil set-prop -c InstantMessaging_home/config/iim.conf.xml iim_server.serverid=peer1.im1.example.com iim_server.password=peer1
      
    2. Add the coserver (im2.example.com).

      imconfutil add-coserver -c InstantMessaging_home/config/iim.conf.xml id=coserver1 serverid=peer2.im2.example.com password=peer2 host=im2.example.com domain=example.com
      
  2. Perform the following commands on host2 (im2.example.com).

    1. Set the iim_server.serverid and iim_server.password configuration properties.

      imconfutil set-prop -c InstantMessaging_home/config/iim.conf.xml iim_server.serverid=peer2.im2.example.com iim_server.password=peer2
      
    2. Add the coserver (im1.example.com).

      imconfutil add-coserver -c InstantMessaging_home/config/iim.conf.xml id=coserver1 serverid=peer1.im1.example.com password=peer1 host=im1.example.com domain=example.com
      
  3. Restart Instant Messaging Server on both hosts.

    imadmin refresh server
    

Adding a New Node to an Existing Instant Messaging Server Deployment

If you need to add a node to an existing server pool, you must configure the new server for server-to-server communication and then add configuration information about the new server to all existing servers in the pool. In addition, you must add configuration information about all the existing servers in the pool to the new node, as sketched in the example that follows. See "Setting Up Communication Between Two Instant Messaging Servers in a Server Pool" for instructions.
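
For example, the following sketch extends the two-server example from that section with a hypothetical third node, im3.example.com; the peer3 server ID, password, and coserver IDs are illustrative values.

On the new node (im3.example.com), set the server identity and add both existing peers:

imconfutil set-prop -c InstantMessaging_home/config/iim.conf.xml iim_server.serverid=peer3.im3.example.com iim_server.password=peer3
imconfutil add-coserver -c InstantMessaging_home/config/iim.conf.xml id=coserver1 serverid=peer1.im1.example.com password=peer1 host=im1.example.com domain=example.com
imconfutil add-coserver -c InstantMessaging_home/config/iim.conf.xml id=coserver2 serverid=peer2.im2.example.com password=peer2 host=im2.example.com domain=example.com

On each existing node (im1.example.com and im2.example.com), add the new peer with a coserver ID that is not already in use on that node:

imconfutil add-coserver -c InstantMessaging_home/config/iim.conf.xml id=coserver2 serverid=peer3.im3.example.com password=peer3 host=im3.example.com domain=example.com

Then restart Instant Messaging Server on all hosts:

imadmin refresh server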

Securing a Multi-node Deployment

When a node connects to a remote server, the node provides a dialback key. The remote server then connects back to the node in order to verify the dialback key. In a multi-node deployment, the remote server may connect back to a different node in the pool from the node that originally sent the dialback key. The node the remote server connects to must provide the same dialback key that the original connecting node supplied. The iim_server.dialback.key configuration property defines which dialback key a node should use. The value for the dialback key is randomly generated unless you explicitly specify one. See "Manually Defining the Dialback Key for an Instant Messaging Server in a Server Pool" for instructions.

The From attribute is used by a remote server to connect back to an initiating server. Typically, a server's domain name is used as the value for the From attribute in server-to-server communication under Jabber. However, all servers in a server pool share the same domain name, so the domain name cannot be used as a key to locate a single server in the pool. Instead, Instant Messaging Server uses a server or peer identifier (serverid) as the value for the From attribute.

Manually Defining the Dialback Key for an Instant Messaging Server in a Server Pool

The value for the dialback key is randomly generated unless you explicitly specify one.

  1. Use the imconfutil command to set the iim_server.dialback.key configuration property to the same value on each server in the pool.

    imconfutil -c InstantMessaging_home/config/iim.conf.xml set-prop iim_server.dialback.key=mymultinodedialbackkey
    
  2. Refresh the configuration on all servers in the pool.

    imadmin refresh server
    

Using Shoal for Server Pool Messaging

Instant Messaging Server uses Shoal, a Java technology-based scalable and dynamic clustering framework to connect multiple servers within a server pool. For more information on Shoal, see the Project Shoal website at:

https://shoal.java.net

Setting Shoal Properties

To enable Shoal, use the imconfutil command to set the following configuration properties:

  • iim_server.serverid=servername (Ensure that this value is unique for each server)

  • iim_server.password=password (Ensure that this password is the same across all servers)

For example:

imconfutil -c InstantMessaging_home/config/iim.conf.xml set-prop iim_server.serverid=server1 iim_server.password=password

Using Shoal for Automatic Discovery of Peer Servers in a Pool

Instant Messaging Server enables you to use the Shoal clustering framework to automatically discover and add peer servers in a server pool. The following steps describe how to configure Shoal for the servers in a pool that belong to the same IP subnet. To configure Shoal for servers in a pool that are part of different subnets, see "Using Shoal Across Subnets".

To enable auto-discovery of peer servers:

  1. Configure a server pool containing a number of Instant Messaging servers to use the LDAP propstore property.

  2. Use the imconfutil command to set the following configuration property to start auto-discovery.

    imconfutil -c InstantMessaging_home/config/iim.conf.xml set-prop iim_server.peer.autodiscover=true
    
  3. Set the configuration properties as explained in "Setting Shoal Properties".

    Setting the properties enables you to start and stop the servers as required. If you are connected to one server, you can see the presence of, and chat with, users on any other server.
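
For example, a consolidated sketch of the auto-discovery configuration on one node might look like the following; the server ID and password values are illustrative, and the same password must be used on every node in the pool:

imconfutil -c InstantMessaging_home/config/iim.conf.xml set-prop iim_server.serverid=server1 iim_server.password=password iim_server.peer.autodiscover=true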

Using Shoal for Conferences Across Server Pools

Instant Messaging Server enables the use of Shoal group messaging to broadcast conference messages across the server pool. The Shoal framework can be used to send conference messages across the server pool even if you have not used Shoal for auto-discovery or across subnets. When you enable the use of Shoal across server pools, all conference presence broadcasts, including join and leave notifications, messages, and chat status notifications, are sent by using the Shoal group messaging feature.

To enable Shoal for conferences:

  1. Set the properties as explained in "Setting Shoal Properties".

  2. Use the imconfutil command to set the following configuration property.

    imconfutil -c InstantMessaging_home/config/iim.conf.xml set-prop iim_server.peer.conferences.usep2p=true
    

    This property enables or disables the use of Shoal for conference messaging. If you set the property to false, or do not set it at all, the legacy server-to-server connection is used.

You can enable Shoal at any time, during or after configuration. If you enable this feature after configuration, restart all the servers.

Note:

When using Shoal for peer discovery and conferences, ensure that:
  • The iim_server.password property is the same on all hosts.

  • Relay is enabled for communication to work when hosts are on different subnets.

Using Shoal Across Subnets

The Shoal configuration of a server pool in a subnet cannot discover new peers that are present in different IP subnets. Shoal uses relay nodes to propagate peer information across subnets. You must configure Instant Messaging Server to start a separate process that performs the Shoal relay functionality, by providing connection details of the relays present in different subnets.

To enable Shoal across different subnets, you must start at least one relay server per subnet. You can configure any number of relay servers.

To start the relay server, use the imconfutil command to set the relay.imadmin.enable and relay.listen_address (optional) configuration properties. For example:

imconfutil -c InstantMessaging_home/config/iim.conf.xml set-prop relay.imadmin.enable=true relay.listen_address=192.0.2.0

The list of relay servers is specified by using the relay.uri_list property:

relay.uri_list = list of relays

You specify each relay by using a URI of the form tcp://host:port. For example:

relay.uri_list = tcp://relay2.example.com:5600, tcp://relay3.example.com:5600
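
For example, assuming that relay.uri_list is set with the imconfutil set-prop subcommand in the same way as the other properties in this chapter, the configuration might look like the following; the relay host names are illustrative:

imconfutil -c InstantMessaging_home/config/iim.conf.xml set-prop relay.uri_list=tcp://relay2.example.com:5600,tcp://relay3.example.com:5600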

You can start or stop the relay process independently of the Instant Messaging server. Stopping or restarting the relay process does not affect the servers that are already in the pool.

About Multiplexor Failover

You can configure an Instant Messaging multiplexor with a list of Instant Messaging Server hosts for failover. When the multiplexor detects a failure with the current Instant Messaging Server host, it attempts a connection with another Instant Messaging Server host in the configured list. After the multiplexor connects to the next Instant Messaging Server host in the list, it informs incoming clients of the newly connected Instant Messaging Server host. During the failover process, the multiplexor closes its listener port for incoming clients (default 5222) until a new Instant Messaging Server host is available. When the multiplexor encounters a failure with an Instant Messaging Server host, it retries the next host in the list for the specified polling interval before it moves on to the next available host.

You can configure multiplexor failover in either polling or non-polling mode.

Polling mode works as follows:

  1. The multiplexor attempts to connect to the first Instant Messaging server in the failover list.

  2. If this first server does not respond, the multiplexor keeps polling it until the server becomes available again.

  3. In the meantime, the multiplexor fails over to the other available Instant Messaging servers in the failover list, and handles the client connection with the newly connected server.

  4. If the first Instant Messaging server becomes available again, then the multiplexor starts handling new incoming clients with the server that was first polled.

  5. The multiplexor maintains connection to the failed-over Instant Messaging server as long as it has connected clients.

In non-polling mode, when an Instant Messaging server fails, the multiplexor tries the list of Instant Messaging servers in round-robin fashion and connects to the first available server.

The default polling interval, specified by the iim_mux.polling_interval property, is five seconds. A positive value means that the multiplexor operates in polling mode. Thus, the default is polling mode. A negative polling interval means that the multiplexor operates in non-polling mode.
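
For example, to run the multiplexor in non-polling mode, you might set a negative interval; the value shown is illustrative:

imconfutil -c InstantMessaging_home/config/iim.conf.xml set-prop iim_mux.polling_interval=-1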

Enabling Multiplexor Failover

To enable the multiplexor to fail over to other Instant Messaging Server hosts, configure the following properties:

imconfutil -c InstantMessaging_home/config/iim.conf.xml set-prop iim_mux.polling_interval=interval
imconfutil -c InstantMessaging_home/config/iim.conf.xml set-prop iim_mux.serverport=ims_host1:listener_port,ims_host2:listener_port

where:

  • interval is the polling interval, in seconds, during which the multiplexor attempts to contact the next Instant Messaging Server for failover

  • ims_host1 is the first Instant Messaging Server host for failover, ims_host2 is the next, and so on

  • listener_port is the port on which the Instant Messaging Server host communicates with the multiplexor (default is 45222)
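
For example, a sketch of a two-host failover configuration might look like the following; the host names and the 10-second interval are illustrative:

imconfutil -c InstantMessaging_home/config/iim.conf.xml set-prop iim_mux.polling_interval=10
imconfutil -c InstantMessaging_home/config/iim.conf.xml set-prop iim_mux.serverport=im1.example.com:45222,im2.example.com:45222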

Overview of Using Oracle Solaris Cluster

This section provides information about Oracle Solaris Cluster HA requirements, the terms used in examples, and the permissions that you need to configure HA.

Note:

As of Instant Messaging Server 10.0, Oracle Solaris Cluster has been deprecated.

HA Configuration Software Requirements

Table 8-2 shows the required software for an Instant Messaging Server HA deployment.

Table 8-2 HA Software Requirements

Software and Version Notes and Patches

Oracle Solaris 10

All versions of Oracle Solaris 10 are supported. Oracle Solaris 10 requires at least Oracle Solaris Cluster 3.0 Update 3. Oracle Solaris 10 includes Oracle Solaris Logical Volume Manager (LVM).

Oracle Solaris Cluster 3.1 or 3.2

Oracle Solaris Cluster software must be installed and configured on all the nodes in the cluster. To install Oracle Solaris Cluster 3.1 or 3.2, use the Sun Java Enterprise System installer by following the installation process in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX at:

http://docs.oracle.com/cd/E19528-01/820-2827/index.html

After you install the Oracle Solaris Cluster software, you must configure the cluster. For more information, see Sun Cluster System Administration Guide for Solaris OS at:

http://docs.oracle.com/cd/E19787-01/819-2971

Oracle Solaris Cluster Patches - For Oracle Solaris 10, you can download patches from My Oracle Support at:

https://support.oracle.com

Oracle Solaris Volume Manager

Oracle Solaris 10.

Veritas Volume Manager (VxVM)

Oracle Solaris 10 requires at least version 3.5 and the required patches.

Veritas File System (VxFS)

Oracle Solaris 10 requires at least version 3.5 and the required patches.


HA Configuration Requirements

To install and configure an Instant Messaging Server HA configuration, log in as or become root, and specify a console or window for viewing messages sent to /dev/console.

HA Configuration Terms and Checklist

Table 8-3 describes the variables used in the configuration examples in this chapter. Gather this information before you configure HA for Instant Messaging Server; you are prompted for it during configuration. Use this checklist along with the system requirements specified in Instant Messaging Server Installation and Configuration Guide.

Table 8-3 Configuration Examples Variables

Name in Example Description

/global/im

Global file system or cluster file system (CFS) mount point.

/local/im

Failover file system (FFS) mount point for the shared disk.

LOG_HOST_RS

Logical host name resource.

IM_NODE1

Node1 of the cluster.

IM_NODE2

Node2 of the cluster.

IM_RG

Instant Messaging Server resource group.

IM_HASP_RS

Instant Messaging Server storage resource.

IM_SVR_RS

Instant Messaging Server resource.

IM_RUNTIME_DIR

Either global or FFS mount point. The value is /global/im or /local/im.

IM_SVR_BASE

Instant Messaging Server base installation directory. The default value is /opt/sun/comms/im.

IM_SCHA_BASE

Instant Messaging Server HA agent base installation directory. The default value is /opt/sun/comms/im_scha.

IM_RUNTIME_CONFIG

Location of the Instant Messaging Server runtime directory InstantMessaging_runtime/default/config.

INSTALL-ROOTIM1

Installation directory for instance 1 in a symmetric setup. For example /opt/node1.

INSTALL-ROOTIM2

Installation directory for instance 2 in a symmetric setup. For example /opt/node2.


Starting and Stopping the Instant Messaging Server HA Service

To start and stop the Instant Messaging Server HA service, use the Oracle Solaris Cluster scswitch command.

Caution:

Do not use the imadmin start, imadmin stop, or imadmin refresh commands in a HA environment with Sun Cluster. Instead, use the Oracle Solaris Cluster administrative utilities. For more information about the Oracle Solaris Cluster scswitch command, see Oracle Solaris Cluster Reference Manual.

To start the Instant Messaging Server HA service, enter the following command:

scswitch -e -j IM_SVR_RS

To stop the Instant Messaging Server HA service, enter the following command:

scswitch -n -j IM_SVR_RS

To restart the Instant Messaging Server HA Service, enter the following command:

scswitch -R -j IM_SVR_RS

Troubleshooting the Instant Messaging Server HA Configuration

Troubleshooting error messages are stored in the error log. The logs are controlled by the syslog facility. For information about using the logging facility, see the syslog.conf man page.

Setting Up HA for Instant Messaging Server

This section describes the steps to set up HA for Instant Messaging Server.

Choosing a High Availability Model for Your Instant Messaging Server Deployment

This section lists the HA models, and describes the procedure to install and configure the asymmetric and symmetric models for deployment.

Table 8-4 summarizes the advantages and disadvantages of each HA model. Use this information to decide the appropriate model for your deployment.

Table 8-4 HA Models Advantages and Disadvantages

Asymmetric

  Advantages:
  • Simple configuration
  • Backup node is 100% reserved.
  • Rolling upgrade with negligible downtime

  Disadvantages:
  • Machine resources are not fully utilized.

  Recommended Users:
  • A small service provider with plans to expand in the future.

Symmetric

  Advantages:
  • Efficient use of system resources
  • Higher availability

  Disadvantages:
  • Resource contention on the backup node.
  • HA requires fully redundant disks.

  Recommended Users:
  • A small corporate deployment that can accept performance penalties if a single server fails.

N+1

  Advantages:
  • Load distribution
  • Easy expansion

  Disadvantages:
  • Management and configuration complexity.

  Recommended Users:
  • A large service provider who requires distribution with no resource constraints.


High-Level Task List for an Asymmetric HA Deployment

The following is a list of the tasks necessary to install and configure Instant Messaging Server for asymmetric HA:

  1. Prepare the nodes.

    1. Install the Oracle Solaris operating system on all the nodes of the cluster.

    2. Install Oracle Solaris Cluster software on all the nodes of the cluster.

    3. Install the Instant Messaging Server HA Agents package, SUNWiimsc, on all the nodes of the cluster by using the Installer.

    4. Create a file system on the shared disk.

    5. Install Instant Messaging Server on all the nodes of the cluster by using the Installer.

    6. Create a symbolic link from the Instant Messaging Server /etc/opt/sun/comms/im directory to the shared disk InstantMessaging_runtime directory on all the nodes of the cluster.

  2. Configure the first or the primary node.

    1. Using the Oracle Solaris Cluster command-line interface, set up HA on the primary node.

    2. Run the Instant Messaging Server configure utility on the primary node.

    3. Using the Oracle Solaris Cluster command-line interface, create and enable a resource group for Instant Messaging Server.

See "Installing and Configuring in an Asymmetric HA Environment" for step-by-step instructions.

High-Level Task List for a Symmetric HA Deployment

The following is a list of the tasks necessary to install and configure Instant Messaging Server for symmetric HA:

  1. Prepare the nodes.

    1. Install the Oracle Solaris operating system software on all the nodes of the cluster.

    2. Install the Oracle Solaris Cluster software on all the nodes of the cluster.

    3. Create four file systems. You can create cluster file systems (CFS), also called global file systems, or failover file systems (FFS), also called local file systems.

    4. Create the necessary directories.

    5. Install the Instant Messaging Server HA Agents package, SUNWiimsc, on all nodes of the cluster by using the Installer.

  2. Install and configure the first instance of Instant Messaging Server HA.

    1. Using the Installer, install Instant Messaging Server on the first node of the cluster.

    2. Using the Oracle Solaris Cluster command-line interface, configure HA on the first node.

    3. Create a symbolic link from the Instant Messaging Server /etc/opt/sun/comms/im directory to the shared disk InstantMessaging_runtime directory on the first node.

    4. Run the Instant Messaging Server configure utility on the first node.

    5. Using the Oracle Solaris Cluster command-line interface, create and enable a resource group for Instant Messaging Server on the first node.

    6. Using the Oracle Solaris Cluster command-line interface to test the successful creation of the resource group, perform a failover to the second node.

  3. Install and configure the second instance of Instant Messaging Server HA.

    1. Using the Installer, install Instant Messaging Server on the second node of the cluster.

    2. Using the Oracle Solaris Cluster command-line interface, configure HA on the second node.

    3. Create a symbolic link from the Instant Messaging Server /etc/opt/sun/comms/im directory to the shared disk InstantMessaging_runtime directory on the secondary node.

    4. Run the Instant Messaging Server configure utility on the second node.

    5. Using the Oracle Solaris Cluster command-line interface, create and enable a resource group for Instant Messaging Server on the second node.

    6. Using the Oracle Solaris Cluster command-line interface to test the successful creation of the resource group, perform a failover to the first node.

See "Installing and Configuring in a Symmetric HA Environment" for step-by-step instructions.

Installing and Configuring in an Asymmetric HA Environment

This section contains instructions for configuring an asymmetric HA Instant Messaging Server cluster.

Creating File Systems for HA Deployment

Create a file system on the shared disk. The /etc/vfstab file should be identical on all the nodes of the cluster.

For the CFS, the entry should be similar to the following example.

## Cluster File System/Global File System ##
/dev/md/penguin/dsk/d400 /dev/md/penguin/rdsk/d400 /global/im ufs 2 yes global,logging

For the failover FFS, the entry should be similar to the following example.

## Fail Over File System/Local File System ##
/dev/md/penguin/dsk/d400 /dev/md/penguin/rdsk/d400 /local/im ufs 2 no logging

Note:

The fields in these entries are separated by tabs, not spaces.

Creating the Instant Messaging Server Directory on all the Shared Disks of the Cluster in the HA Deployment

For all the nodes of the cluster, create a directory, InstantMessaging_runtime, to store the configuration details and data. For example, to create an Instant Messaging Server directory on a shared disk, enter either one of the following:

mkdir -p /local/im

or

mkdir -p /global/im

Installing and Configuring HA for Instant Messaging Server Software

This section contains instructions for the tasks involved in installing and configuring HA for Instant Messaging Server. Perform the following tasks to complete the configuration:

  • Preparing Each Node of the Cluster

  • Setting Up the Primary Node

  • Invoking the configure Utility on the Primary Node

Preparing Each Node of the Cluster

For each node in the cluster, create the Instant Messaging Server runtime user and group under which the components run. The user ID (UID) and group ID (GID) numbers must be the same on all the nodes in the cluster.

  • Runtime UID: User name under which the Instant Messaging server runs. The default value is inetuser.

  • Runtime GID: Group under which the Instant Messaging server runs. The default value is inetgroup. Although the configure utility creates the IDs, you can create the IDs before you invoke the configure utility as part of the preparation of each node. Create the runtime UID and GID on each node where you will not invoke the configure utility, which is usually the secondary node.

Ensure that the user name, group name, and the corresponding UID and GID are the same in the following files on all nodes:

  • inetuser, or the name that you select, in the /etc/passwd file on all the nodes in the cluster

  • inetgroup, or the name that you select, in the /etc/group file on all the nodes in the cluster

Refer to your operating system documentation for detailed information about users and groups.
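
For example, a hedged sketch of creating a matching runtime group and user on a node might look like the following; the UID and GID value of 76543 is an arbitrary placeholder, so choose values that are unused and identical on every node:

groupadd -g 76543 inetgroup
useradd -u 76543 -g inetgroup inetuser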

Selecting the Default Installation Directory "IM_SCHA"

For Instant Messaging Server and Instant Messaging Server Oracle Solaris Cluster agent IM_SCHA, the Installer uses the /opt/sun/comms directory on the Oracle Solaris operating system as the default installation directory. The value of the InstantMessaging_home variable is /opt/sun/comms/im.

However, if you are using a shared disk for binaries, you must specify a CFS or a FFS installation directory. For example, if /global/im/ is the installation directory, then the value of InstantMessaging_home is /global/im/im.

If you are using a local disk, install Instant Messaging Server in the same directory on each node of the cluster. The following deployment layouts are possible:

  • Configuration files and runtime files reside on a CFS or on a highly available FFS, while binaries are installed on local file systems on each node at the same location. This layout enables a rolling upgrade of the Instant Messaging Server software.

  • Binaries, configuration files, and runtime files all reside either on a CFS or on a highly available FFS. The Instant Messaging Server installation is required on only one node because the binaries are shared across all the nodes. Upgrading Instant Messaging Server requires server downtime.

Installing Instant Messaging Server Products and Packages

Install products and packages by using the Installer. For more information about the installer, see Unified Communications Suite Installation and Configuration Guide.

Table 8-5 lists the products or packages required for a multiple node cluster configuration.

Table 8-5 Requirements for Multiple Nodes

Oracle Solaris Cluster Software

  Node 1: Yes
  Node n: Yes

Instant Messaging Server

  Node 1: Yes
  Node n: Yes, if you use a local disk for configuration files and binaries. No, if you use a shared disk for configuration files and binaries.

Oracle Solaris Cluster Agent for Instant Messaging Server (SUNWiimsc)

  Node 1: Yes
  Node n: Yes, if you use a local disk for configuration files and binaries. No, if you use a shared disk for configuration files and binaries.

Shared components

  Node 1: Yes
  Node n: Yes


Instant Messaging Server HA Agent Installation

To install the Instant Messaging Server Oracle Solaris Cluster HA agent:

  1. Run the Installer in the global zone:

    commpkg install
    

    On Solaris 10 zones, run the commpkg command from global and non-global zones.

  2. Select the Instant Messaging Server Oracle Solaris Cluster HA Agent software when prompted.

  3. Enter the Oracle Solaris Cluster HA Agent preconfiguration command.

    IM_SCHA_BASE/bin/init-config
    

    On Solaris 10 zones, run this command only from the global zone.

Setting Up the Primary Node

Use the Oracle Solaris Cluster command line interface to set up HA on the first node.

  1. Register the Instant Messaging Server and HAStoragePlus resource.

    scrgadm -a -t SUNW.HAStoragePlus
    
    scrgadm -a -t SUNW.iim
    
  2. Create a failover Instant Messaging Server resource group. For example, for a two-node asymmetric cluster setup, the following command creates the Instant Messaging Server resource group IM-RG with the primary node as IM_NODE1 and the secondary, or failover, node as IM_NODE2.

    scrgadm -a -g IM-RG -h IM_NODE1,IM_NODE2
    
  3. Create a logical hostname resource in the Instant Messaging Server resource group and change the resource group state to online. For example, the following commands create the logical hostname resource LOG_HOST_RS and bring the resource group IM-RG online.

    scrgadm -a -L -g IM-RG -l LOG_HOST_RS
    
    scrgadm -c -j LOG_HOST_RS -y R_description="LogicalHostname resource for LOG_HOST_RS"
    
    scswitch -Z -g IM-RG
    
  4. Create and enable the HAStoragePlus resource. For example, the following commands create and enable the HAStoragePlus resource IM_HASP_RS.

    scrgadm -a -j IM_HASP_RS -g IM-RG -t SUNW.HAStoragePlus:4 -x FilesystemMountPoints=/IM_RUNTIME_DIR
    
    scrgadm -c -j IM_HASP_RS -y R_description="Failover data service resource for SUNW.HAStoragePlus:4"
    
    scswitch -e -j IM_HASP_RS
    
  5. Create a symbolic link from the Instant Messaging Server /etc/opt/sun/comms/im directory to the shared disk InstantMessaging_runtime directory on all the nodes of the cluster.

    For example, enter the following commands on all the nodes of the cluster:

    cd /etc/opt/sun/comms
    
    ln -s /IM_RUNTIME_DIR im
    

Invoking the configure Utility on the Primary Node

  1. Invoke the configure utility.

    For example, from the InstantMessaging_home directory enter the following command:

    # pwd
    
    /IM_SVR_BASE
    
    # ./configure
    

    For more information about the configure utility, see Instant Messaging Server Installation and Configuration Guide.

  2. When prompted for the Instant Messaging Server runtime files directory InstantMessaging_runtime, enter one of the following:

    1. If you are using FFS for the runtime files, enter /local/im.

    2. If you are using a CFS for the runtime files, enter /global/im.

  3. If prompted for the Instant Messaging Server host name, enter the logical host. Choose to accept the logical host even if the configure utility is unable to connect to the specified host. The logical host resource might be offline at the time when you invoke the configure utility.

  4. Do not start Instant Messaging Server after configuration or on system startup.

  5. Copy the Instant Messaging Server configuration file iim.conf.xml to the iim.conf file with the same permissions.

    Note:

    Also copy the iim.conf.xml file to iim.conf after any future configuration changes, as the cluster agent uses the iim.conf file.
  6. To use the Gateway Connector service in HA, update this service configuration with the virtual host name or IP address and port number as follows:

    imconfutil --config config_file_path set-prop iim_gwc.hostport=virtual_hostname_or_IP:port
    

    For example:

    /opt/sun/comms/sbin/imconfutil --config /DATA1/default/config/iim.conf.xml set-prop iim_gwc.hostport=192.10.12.11:22222
    
  7. Create and enable the Instant Messaging Server resource.

    In this example, the resource name is IM_SVR_RS. Provide the logical host resource name, the HAStoragePlus resource name, and the runtime configuration directory in Confdir_list (for example, /local/im/default/config):

    scrgadm -a -j IM_SVR_RS -g IM-RG -t SUNW.iim -x Server_root=/InstantMessaging_home -x Confdir_list=/IM_RUNTIME_DIR/default/config -y Resource_dependencies=IM_HASP_RS,LOG_HOST_RS
    
    scswitch -e -j IM_SVR_RS
    
  8. Test the successful creation of the Instant Messaging Server resource group by performing a failover.

    scswitch -z -g IM-RG -h IM_NODE2
    

    Note:

    You do not need to configure the second node because the configuration is shared between all the nodes through symbolic links pointing to the shared location.
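
After the switchover, you can verify that the resource group and its resources are online on the new node. One hedged way to check, assuming the Sun Cluster 3.1 or 3.2 command set, is:

scstat -g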

Installing and Configuring in a Symmetric HA Environment

This section contains instructions for configuring a symmetric HA Instant Messaging Server system.

Initial Tasks

You must complete the following preparatory tasks before installing Instant Messaging Server on the nodes. The preparatory tasks are:

  • Creating File Systems

  • Installing the Instant Messaging Server HA Package

  • Preparing Each Node of the Cluster

Creating File Systems

Instant Messaging Server binaries, configuration files, and runtime files reside on the CFS or on the highly available FFS. For each Instant Messaging Server instance, installation is needed on only one node as the binaries are shared across all the nodes.

To create file systems:

  1. Create four file systems by using CFS or FFS.

    To create a system by using CFS, for example, the contents of the /etc/vfstab file should appear as follows.

    # Cluster File System/Global File System ##
    
    /dev/md/penguin/dsk/d500 /dev/md/penguin/rdsk/d500
    
    /INSTALL-ROOTIM1 ufs 2 yes logging,global
    
    /dev/md/penguin/dsk/d400 /dev/md/penguin/rdsk/d400
    
    /share-disk-dirIM1 ufs 2 yes logging,global
    
    /dev/md/polarbear/dsk/d200 /dev/md/polarbear/rdsk/d200
    
    /INSTALL-ROOTIM2 ufs 2 yes logging,global
    
    /dev/md/polarbear/dsk/d300 /dev/md/polarbear/rdsk/d300
    
    /share-disk-dirIM2 ufs 2 yes logging,global
    

    Note:

    The fields must be separated by tabs.

    To create a system by using FFS, for example, the contents of the /etc/vfstab file should appear as follows.

    # Failover File System/Local File System ##
    
    /dev/md/penguin/dsk/d500 /dev/md/penguin/rdsk/d500
    
    /INSTALL-ROOTIM1 ufs 2 yes logging
    
    /dev/md/penguin/dsk/d400 /dev/md/penguin/rdsk/d400
    
    /share-disk-dirIM1 ufs 2 yes logging
    
    /dev/md/polarbear/dsk/d200 /dev/md/polarbear/rdsk/d200
    
    /INSTALL-ROOTIM2 ufs 2 yes logging
    
    /dev/md/polarbear/dsk/d300 /dev/md/polarbear/rdsk/d300
    
    /share-disk-dirIM2 ufs 2 yes logging
    

    Note:

    The fields must be separated by tabs.
  2. Create the following mandatory directories on all the nodes of the cluster.

    # mkdir -p /INSTALL-ROOTIM1 /share-disk-dirIM1 /INSTALL-ROOTIM2 /share-disk-dirIM2
    

Installing the Instant Messaging Server HA Package

Install the Instant Messaging Server Oracle Solaris Cluster HA package on both nodes. You can use the Communications Suite 7 Update 2 installer to install the HA package.

To install the Instant Messaging Server Oracle Solaris Cluster HA agent:

  1. Run the Installer:

    commpkg install
    

    In Solaris 10 zones, run this command from the global and non-global zones.

  2. When prompted, select the Instant Messaging Server Oracle Solaris Cluster HA Agent software.

  3. Run the Sun Cluster HA Agent pre-configuration command:

    IM_SCHA_BASE/bin/init-config
    

    On Solaris 10 zones, run this command only from the global zone.

Preparing Each Node of the Cluster

For each node in the cluster, create the Instant Messaging Server runtime user and group under which the components will run. The UID and GID numbers must be the same on all nodes in the cluster.

  • Runtime UID: User name under which the Instant Messaging server runs. The default value is inetuser.

  • Runtime GID: Group under which the Instant Messaging server runs. The default value is inetgroup. Although the configure utility creates these IDs, you can create the IDs before you invoke the configure utility as part of the preparation of each node. Create the runtime UID and GID on each node where you will not invoke the configure utility, which is usually the secondary node.

Ensure that the user name, group name, and the corresponding UID and GID are the same in the following files on all nodes:

  • inetuser, or the name that you select, in the /etc/passwd file on all the nodes in the cluster

  • inetgroup, or the name that you select, in the /etc/group file on all the nodes in the cluster

Refer to your operating system documentation for detailed information about users and groups.

Installing and Configuring the First Instance of Instant Messaging Server

To install the first instance of Instant Messaging Server:

  1. Verify whether the files are mounted.

    On the primary node Node1, enter the following command:

    df -k
    

    The following message shows a sample output:

    /dev/md/penguin/dsk/d500 35020572 34738 34635629 1% /INSTALL-ROOTIM1
    /dev/md/penguin/dsk/d400 35020572 34738 34635629 1% /share-disk-dirIM1
    
  2. Using the Installer, install Instant Messaging Server on the primary node.

    1. Run the Installer:

      commpkg install
      

      Note:

      In case of Oracle Solaris 10 zones, refer to Unified Communications Suite Installation and Configuration Guide.
    2. At the Specify Installation Directories prompt, enter the installation root INSTALL-ROOTIM1.

  3. Create a symbolic link from the Instant Messaging Server /etc/opt/sun/comms/im directory to the shared disk IM_RUNTIME_DIR directory on all the nodes of the cluster. For example, enter the following commands on a cluster node:

    # cd /etc/opt/sun/comms
    
    # ln -s /share-disk-dirIM1 im
    

To configure Oracle Solaris Cluster on the first node by using the Oracle Solaris Cluster command-line interface:

  1. Register the following resource types.

    scrgadm -a -t SUNW.HAStoragePlus
    
    scrgadm -a -t SUNW.iim
    
  2. Create a failover resource group.

    In the following example, the resource group is IM-RG1, IM_NODE1 is the primary node and IM_NODE2 is the failover node.

    scrgadm -a -g IM-RG1 -h IM_NODE1,IM_NODE2
    
  3. Create a logical host name resource for the node.

    Add the logical host name LOG_HOST_RS to the resource group. Instant Messaging Server listens on this host. The following example uses LOG-HOST-IM-RS1. Replace this value with the actual hostname.

    scrgadm -a -L -g IM-RG1 -l LOG-HOST-IM-RS1
    
    scrgadm -c -j LOG-HOST-IM-RS1 -y R_description="LogicalHostname resource for LOG-HOST-IM-RS1"
    
  4. Bring the resource group online.

    scswitch -Z -g IM-RG1
    
  5. Create a HAStoragePlus resource and add it to the failover resource group.

    In this example, the resource is called IM-HASP-RS1. Replace it with your own resource name.

    scrgadm -a -j IM-HASP-RS1 -g IM-RG1 -t SUNW.HAStoragePlus:4 -x FilesystemMountPoints=/INSTALL-ROOTIM1,/share-disk-dirIM1
    
    scrgadm -c -j IM-HASP-RS1 -y R_description="Failover data service resource for SUNW.HAStoragePlus:4"
    
  6. Enable the HAStoragePlus resource.

    scswitch -e -j IM-HASP-RS1
    

To configure the first instance of Instant Messaging Server:

  1. Run the configure utility on the primary node.

    # cd /INSTALL-ROOTIM1/im
    
    # ./configure
    

    For more information about the configure utility, see Instant Messaging Server Installation and Configuration Guide.

  2. When prompted for the Instant Messaging Server Runtime Files Directory, enter /share-disk-dirIM1 if you are using HAStoragePlus for the runtime files.

  3. When prompted for the Instant Messaging Server host name, enter the logical host.

    Choose to accept the logical host even if the configure utility cannot connect to the specified host. The logical host resource might be offline at the time when you invoke the configure utility.

  4. Do not start Instant Messaging Server after configuration or on system startup.

  5. Copy the Instant Messaging Server configuration file iim.conf.xml to the iim.conf file with the same permissions.

    Note:

    Also copy the iim.conf.xml file to iim.conf after any future configuration changes, as the cluster agent uses the iim.conf file.
  6. To use the Gateway Connector service in HA, update this service configuration with the virtual host name or IP address and port number as follows:

    InstantMessaging_home/sbin/imconfutil --config config_file_path set-prop iim_gwc.hostport=virtual_hostname_or_IP:port
    

    For example:

    /opt/sun/comms/sbin/imconfutil --config /DATA1/default/config/iim.conf.xml set-prop iim_gwc.hostport=192.10.12.11:22222
    
  7. Create and enable the Instant Messaging Server resource.

    In this example, the resource name is IM_SVR_RS1. Provide the logical host resource name and the HAStoragePlus resource name.

    scrgadm -a -j IM_SVR_RS1 -g IM-RG1 -t SUNW.iim -x Server_root=/INSTALL-ROOTIM1/im -x Confdir_list=/share-disk-dirIM1/default/config -y Resource_dependencies=IM-HASP-RS1,LOG-HOST-IM-RS1
    
    scswitch -e -j IM_SVR_RS1
    
  8. Test the successful creation of the Instant Messaging Server resource group by performing a failover.

    scswitch -z -g IM-RG1 -h IM_NODE2
    

    Note:

    You do not have to configure the second node because the configuration is shared between all the nodes through symbolic links pointing to the shared location.

Installing and Configuring the Second Instance of Instant Messaging Server

To install the second instance of Instant Messaging Server:

  1. Verify whether the files are mounted. On the primary node IM_NODE2, enter:

    df -k
    

    The following output is displayed:

    /dev/md/polarbear/dsk/d300 35020572 34738 34635629 1% /share-disk-dirIM2
    /dev/md/polarbear/dsk/d200 35020572 34738 34635629 1% /INSTALL-ROOTIM2
    
  2. Install Instant Messaging Server on the primary node.

    1. Run the Installer:

      commpkg install
      
    2. At the Specify Installation Directories prompt, specify the installation root INSTALL-ROOTIM2.

  3. Create a symbolic link from the Instant Messaging Server /etc/opt/sun/comms/im directory to the shared disk IM_RUNTIME_DIR directory on this cluster node.

    For example, enter the following commands on this node:

    cd /etc/opt/sun/comms
    
    ln -s /share-disk-dirIM2 im
    

Configuring Oracle Solaris Cluster on the Second Node

To configure Oracle Solaris Cluster on the second node by using the Oracle Solaris Cluster command-line interface:

  1. Create a failover resource group.

    In the following example, the resource group is IM-RG2, IM_NODE2 is the primary node and IM_NODE1 is the failover node.

    scrgadm -a -g IM-RG2 -h IM_NODE2,IM_NODE1
    
  2. Create a logical host name resource for this node.

    Add the logical host name LOG_HOST_RS to the resource group. Instant Messaging Server listens on this host. The following example uses LOG-HOST-IM-RS2. Replace this value with the actual host name.

    scrgadm -a -L -g IM-RG2 -l LOG-HOST-IM-RS2
    
    scrgadm -c -j LOG-HOST-IM-RS2 -y R_description="LogicalHostname resource for LOG-HOST-IM-RS2"
    
  3. Bring the resource group online.

    scswitch -Z -g IM-RG2
    
  4. Create a HAStoragePlus resource and add it to the failover resource group.

    In this example, the resource is called IM-HASP-RS2. Replace it with your own resource name.

    scrgadm -a -j IM-HASP-RS2 -g IM-RG2 -t SUNW.HAStoragePlus:4 -x FilesystemMountPoints=/INSTALL-ROOTIM2,/share-disk-dirIM2
    
    scrgadm -c -j IM-HASP-RS2 -y R_description="Failover data service resource for SUNW.HAStoragePlus:4"
    
  5. Enable the HAStoragePlus resource.

    scswitch -e -j IM-HASP-RS2
    

To configure the second instance of Instant Messaging Server:

  1. Run the configure utility on the primary node.

    # cd /INSTALL-ROOTIM2/im
    
    # ./configure
    

    For more information about the configure utility, see Instant Messaging Server Installation and Configuration Guide.

  2. When prompted for the Instant Messaging Server Runtime Files Directory, enter /share-disk-dirIM2 if you are using HAStoragePlus for the runtime files.

  3. When prompted for the Instant Messaging Server host name, enter the logical host.

    Choose to accept the logical host even if the configure utility cannot connect to the specified host. The logical host resource might be offline when you invoke the configure utility.

  4. Do not start Instant Messaging Server after configuration or on system startup.

    In an HA configuration, the Instant Messaging Server service requires the logical host to be online for Instant Messaging Server to work correctly.

  5. Copy the Instant Messaging Server configuration file iim.conf.xml to the iim.conf file with the same permissions.

    Note:

    Also copy the iim.conf.xml file to iim.conf after any future configuration changes, as the cluster agent uses the iim.conf file.
  6. To use the Gateway Connector service in HA, update this service configuration with the virtual host name or IP address and port number as follows:

    InstantMessaging_home/sbin/imconfutil --config config_file_path set-prop iim_gwc.hostport=virtual_hostname_or_IP:port
    

    For example:

    /opt/sun/comms/sbin/imconfutil --config /DATA1/default/config/iim.conf.xml set-prop iim_gwc.hostport=192.10.12.11:33333
    
  7. Create the Instant Messaging Server resource and enable the resource.

    In this example, the resource name is IM_SVR_RS2. Provide the logical host resource name, the HAStoragePlus resource name, and the port numbers. By default, Instant Messaging Server uses ports 5269, 5222, and 45222. If the first instance uses these port numbers, use different port numbers for the second instance.

    /INSTALL-ROOTIM2/im/sbin/imconfutil --config /share-disk-dirIM2/default/config/iim.conf.xml set-prop iim_server.port=5270
    
    /INSTALL-ROOTIM2/im/sbin/imconfutil --config /share-disk-dirIM2/default/config/iim.conf.xml set-prop iim_server.muxport=45223
    
    /INSTALL-ROOTIM2/im/sbin/imconfutil --config /share-disk-dirIM2/default/config/iim.conf.xml set-prop iim_mux.listenport=5223
    
    /INSTALL-ROOTIM2/im/sbin/imconfutil --config /share-disk-dirIM2/default/config/iim.conf.xml set-prop iim_mux.serverport=45223
    
    scrgadm -a -j IM_SVR_RS2 -g IM-RG2 -t SUNW.iim -x Server_root=/INSTALL-ROOTIM2/im -x Confdir_list=/share-disk-dirIM2/default/config -y Resource_dependencies=IM-HASP-RS2,LOG-HOST-IM-RS2
    
    scswitch -e -j IM_SVR_RS2
    
  8. Test the successful creation of the Instant Messaging Server resource group by performing a failover.

    scswitch -z -g IM-RG2 -h IM_NODE1
    

    Note:

    You do not have to configure the second node because the configuration is shared between all the nodes through symbolic links pointing to the shared location.

Removing HA for Instant Messaging Server

To remove Instant Messaging Server from an HA environment, remove the Instant Messaging Server cluster agent SUNWiimsc.

When you remove the SUNWiimsc package as described in this procedure, any customization that you made to the RTR file SUNW.iim is lost. If you want to restore the customizations later, create a backup copy of SUNW.iim before removing SUNWiimsc.
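
For example, the following is one way to save a copy before removal. The RTR file is assumed to be located under the HA agent installation directory (IM_SCHA_BASE); verify the actual location on your system:

find IM_SCHA_BASE -name SUNW.iim -exec cp {} /var/tmp/SUNW.iim.backup \;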

To remove HA for Instant Messaging Server:

  1. Stop the Instant Messaging Server data service.

    scswitch -F -g IM_RG
    
  2. Disable all resources in the Instant Messaging Server resource group IM_RG.

    scswitch -n -j IM_SVR_RS
    
    scswitch -n -j LOG_HOST_RS
    
    scswitch -n -j IM_HASP_RS
    
  3. Remove the resources from the Instant Messaging Server resource group.

    scrgadm -r -j IM_SVR_RS
    
    scrgadm -r -j LOG_HOST_RS
    
    scrgadm -r -j IM_HASP_RS
    
  4. Remove the Instant Messaging Server resource group.

    scrgadm -r -g IM_RG
    
  5. Remove the Instant Messaging Server resource type.

    scrgadm -r -t SUNW.iim
    
  6. Remove the SUNWiimsc package by using the Sun Java Enterprise System installer or run the pkgrm SUNWiimsc command.

    When you remove the package, any customization that you make to the RTR file is lost.

  7. Remove any links that you have created during the HA configuration, if you are using a shared directory for configuration files and binaries.

    rm /etc/opt/sun/comms/im