In this example we assume two cluster nodes with the physical hostnames mars.red.siroe.com and venus.red.siroe.com. The installation and configuration directory locations must be unique. If the installation and configuration directories on each node have the same names, for example /opt/SUNWmsgsr and /var/opt/SUNWmsgsr, a contention problem occurs when venus fails over to mars and two instances of Messaging Server compete for the same installation and configuration directories.
A good practice for creating unique names for the installation and configuration directories is to use the format /opt/NodeMember/SUNWmsgsr for the installation directory and /var/opt/NodeMember/SUNWmsgsr for the configuration directory. You can install your binaries and configuration data in any directories as long as they are unique.
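As a minimal sketch of this naming convention, the per-node directories could be created as follows. The /tmp/ha-demo prefix is a hypothetical stand-in used here so the sketch is self-contained; on a real cluster the directories live directly under / on the shared storage.

```shell
#!/bin/sh
# Sketch of the per-node directory convention described above.
# /tmp/ha-demo is an illustrative prefix only (assumption for this sketch).
PREFIX=/tmp/ha-demo
for NODE in mars venus; do
    # Installation directory: /opt/<NodeMember>/SUNWmsgsr
    mkdir -p "$PREFIX/opt/$NODE/SUNWmsgsr"
    # Configuration directory: /var/opt/<NodeMember>/SUNWmsgsr
    mkdir -p "$PREFIX/var/opt/$NODE/SUNWmsgsr"
done
ls -d "$PREFIX"/opt/*/SUNWmsgsr "$PREFIX"/var/opt/*/SUNWmsgsr
```

Because each path embeds the node member name, both instances can run on the same physical node after a failover without colliding.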
For mars.red.siroe.com, binaries are installed at /opt/mars/SUNWmsgsr and configuration data is installed at /var/opt/mars/SUNWmsgsr.
For venus.red.siroe.com, binaries are installed at /opt/venus/SUNWmsgsr and configuration data is installed at /var/opt/venus/SUNWmsgsr.
We have two logical hostnames, meadow and pasture, with their respective logical IP addresses. For example, the /etc/hosts file on both nodes looks like this:
192.18.75.155   meadow.red.siroe.com   meadow
192.18.75.157   pasture.red.siroe.com  pasture
Install the Messaging Server Sun Cluster agent package (SUNWscims) on both nodes.
Create four file systems.
These file systems can be either Cluster File Systems or local file systems (failover file systems).
/var/opt/mars/SUNWmsgsr
/var/opt/venus/SUNWmsgsr
/opt/mars/SUNWmsgsr
/opt/venus/SUNWmsgsr
These file systems should be mounted on a shared disk. The example below shows four Cluster File Systems. The contents of /etc/vfstab shown below should be similar on all nodes of the cluster.
# cat /etc/vfstab
#device to mount            device to fsck               mount point               FS type  fsck pass  mount at boot  mount options
/dev/md/penguin/dsk/d500    /dev/md/penguin/rdsk/d500    /opt/mars/SUNWmsgsr       ufs      2          yes            logging,global
/dev/md/penguin/dsk/d400    /dev/md/penguin/rdsk/d400    /var/opt/mars/SUNWmsgsr   ufs      2          yes            logging,global
/dev/md/polarbear/dsk/d200  /dev/md/polarbear/rdsk/d200  /opt/venus/SUNWmsgsr      ufs      2          yes            logging,global
/dev/md/polarbear/dsk/d300  /dev/md/polarbear/rdsk/d300  /var/opt/venus/SUNWmsgsr  ufs      2          yes            logging,global
If you want to make the four file systems shown above local file systems (failover file systems), set the mount-at-boot option to no and remove the global keyword from the mount options:
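For example, a /etc/vfstab entry for one of these file systems configured as a failover (local) file system might look like the following. The device names are carried over from the Cluster File System example above and are shown for illustration only:

```
#device to mount            device to fsck               mount point           FS type  fsck pass  mount at boot  mount options
/dev/md/penguin/dsk/d500    /dev/md/penguin/rdsk/d500    /opt/mars/SUNWmsgsr   ufs      2          no             logging
```

Note that mount at boot is no (HAStoragePlus mounts the file system when the resource group comes online) and the global option is absent.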
Configure the primary node
Add the required resource types on the primary node.
This registers with Sun Cluster the resource types that will be used. To register the Messaging Server and HAStoragePlus resource types, use the following commands:
# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -t SUNW.ims
Create a failover resource group for Messaging Server called MS_RG_MARS.
# scrgadm -a -g MS_RG_MARS -h mars,venus
Create a logical hostname resource called meadow, add it to the resource group, and bring the group online.
# scrgadm -a -L -g MS_RG_MARS -l meadow
# scrgadm -c -j meadow -y R_description="LogicalHostname resource for meadow"
# scswitch -Z -g MS_RG_MARS
Create a HAStoragePlus resource called ms-hasp-mars with the file systems created earlier.
# scrgadm -a -j ms-hasp-mars -g MS_RG_MARS -t SUNW.HAStoragePlus \
  -x FileSystemMountPoints="/opt/mars/SUNWmsgsr,/var/opt/mars/SUNWmsgsr" \
  -x AffinityOn=TRUE
Enable the HAStoragePlus resource:
# scswitch -e -j ms-hasp-mars
Install the Messaging Server on the primary node.
Install the Messaging Server packages using the Communications Suite installer. Make sure you install the Messaging Server binaries and configuration data on the shared file system (see Step 2). For example, for this instance of Messaging Server, the binaries are under /opt/mars/SUNWmsgsr and the configuration data is under /var/opt/mars/SUNWmsgsr.
Install and configure the Messaging Server on the primary node (see 1.3 Creating the Initial Messaging Server Runtime Configuration).
The initial runtime configuration program asks for the Fully Qualified Host Name. Enter the logical hostname meadow.red.siroe.com. The program also asks to specify a configuration directory. Enter /var/opt/mars/SUNWmsgsr.
Run the ha_ip_config script on the primary node and provide the logical IP address.
This script is run only on the primary node, not on the secondary node. The ha_ip_config script is located in the sbin directory under the installation directory. For example:
# /opt/mars/SUNWmsgsr/sbin/ha_ip_config
Please specify the IP address assigned to the HA logical host name.
Use dotted decimal form, a.b.c.d
Logical IP address: 192.18.75.155
# This value is the logical IP address of the logical hostname. Refer
# to the /etc/hosts file.
Please specify the path to the top level directory in which iMS is installed.
iMS server root: /opt/mars/SUNWmsgsr
.
.
.
Updating the file /opt/mars/SUNWmsgsr/config/dispatcher.cnf
Updating the file /opt/mars/SUNWmsgsr/config/job_controller.cnf
Setting the service.listenaddr configutil parameter
Setting the local.snmp.listenaddr configutil parameter
Setting the service.http.smtphost configutil parameter
Setting the local.watcher.enable configutil parameter
Setting the local.autorestart configutil parameter
Setting the metermaid.config.bindaddr configutil parameters
Setting the metermaid.config.serveraddr configutil parameters
Setting the local.ens.port parameter
Configuration successfully updated
Modify the imta.cnf file and replace all occurrences of the physical hostname, that is, mars, with the HA logical host name (meadow).
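One way to make that substitution is with sed. The sketch below is hypothetical: it operates on a scratch copy under /tmp with made-up sample contents rather than the live /var/opt/mars/SUNWmsgsr/config/imta.cnf, and you should back up the real file before editing it.

```shell
#!/bin/sh
# Hypothetical sketch: replace the physical hostname (mars) with the HA
# logical hostname (meadow) throughout a scratch copy of imta.cnf.
# The real file lives under the configuration directory, for example
# /var/opt/mars/SUNWmsgsr/config/imta.cnf.
CNF=/tmp/imta.cnf.demo
printf '%s\n' \
    'ims-ms-daemon mars.red.siroe.com' \
    'l defragment charset7 mars.red.siroe.com' > "$CNF"   # sample contents only

# Replace every occurrence of the physical hostname with the logical one
sed 's/mars\.red\.siroe\.com/meadow.red.siroe.com/g' "$CNF" > "$CNF.new"
cat "$CNF.new"
```

After reviewing the result, you would move the edited copy into place over the original file.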
Fail over the resource group to the secondary node (venus).
After failing over, you will then configure the secondary node (venus).
# scswitch -z -g MS_RG_MARS -h venus
On the secondary node (venus), run the useconfig utility. See 3.3.3 Using the useconfig Utility.
You do not have to run the initial runtime configuration program (configure) or install the Messaging Server packages.
In the following example, /var/opt/mars/SUNWmsgsr is the shared configuration directory.
# useconfig /var/opt/mars/SUNWmsgsr/setup/configure_20061201124116
cp /var/opt/mars/SUNWmsgsr/setup/configure_20061201124116/Devsetup.properties /opt/mars/SUNWmsgsr/lib/config-templates/Devsetup.properties
/usr/sbin/groupadd mail
/usr/sbin/useradd -g mail -d / mailsrv
/usr/sbin/usermod -G mail mailsrv
sed -e "s/local.serveruid/mailsrv/" -e "s/local.servergid/mail/" -e "s:<msg.RootPath>:/opt/mars/SUNWmsgsr:" /opt/mars/SUNWmsgsr/lib/config-templates/devtypes.txt.template > /opt/mars/SUNWmsgsr/lib/config-templates/devtypes.txt
sed -e "s/local.serveruid/mailsrv/" -e "s/local.servergid/mail/" -e "s:<msg.RootPath>:/opt/mars/SUNWmsgsr:" /opt/mars/SUNWmsgsr/lib/config-templates/config.ins.template > /opt/mars/SUNWmsgsr/lib/config-templates/config.ins
/opt/mars/SUNWmsgsr/lib/devinstall -l sepadmsvr:pkgcfg:config -v -m -i /opt/mars/SUNWmsgsr/lib/config-templates/config.ins /opt/mars/SUNWmsgsr/lib/config-templates /opt/mars/SUNWmsgsr/lib/jars /opt/mars/SUNWmsgsr/lib
devinstall returned 0
crle -c /var/ld/ld.config -s /usr/lib/secure:/opt/SUNWmsgsr/lib:/opt/venus/SUNWmsgsr/lib:/opt/mars/SUNWmsgsr/lib -s /opt/mars/SUNWmsgsr/lib
See /opt/mars/SUNWmsgsr/install/useconfiglog_20061211155037 for more details
Create the HA Messaging Server Resource and enable it.
# scrgadm -a -j ms-rs-mars -t SUNW.ims -g MS_RG_MARS \
  -x IMS_serverroot=/opt/mars/SUNWmsgsr \
  -y Resource_dependencies=meadow,ms-hasp-mars
# scswitch -e -j ms-rs-mars
The above command creates an HA Messaging Server resource named ms-rs-mars for the Messaging Server installed under /opt/mars/SUNWmsgsr. This HA Messaging Server resource depends on the HA disk resource (the file systems created earlier) as well as on the HA logical hostname meadow.
Verify that everything is working.
Fail over the Messaging Server resource group back to the primary node.
# scswitch -z -g MS_RG_MARS -h mars
Similarly, create another failover resource group for the second instance of Messaging Server, with venus as the primary and mars as the secondary (standby) node.
Repeat Steps 3 through 10 with venus as the primary node for this resource group, MS_RG_VENUS as the resource group, pasture as the logical hostname, and ms-hasp-venus as the HAStoragePlus resource. The commands look like this:
To create the resource group MS_RG_VENUS:
# scrgadm -a -g MS_RG_VENUS -h venus,mars
To create a logical hostname resource called pasture, add it to the resource group, and bring it online:
# scrgadm -a -L -g MS_RG_VENUS -l pasture
# scrgadm -c -j pasture -y R_description="LogicalHostname resource for pasture"
# scswitch -Z -g MS_RG_VENUS
To create an HAStoragePlus resource called ms-hasp-venus with the file systems created earlier:
# scrgadm -a -j ms-hasp-venus -g MS_RG_VENUS -t SUNW.HAStoragePlus \
  -x FileSystemMountPoints="/opt/venus/SUNWmsgsr,/var/opt/venus/SUNWmsgsr" \
  -x AffinityOn=TRUE
To enable the HAStoragePlus resource:
# scswitch -e -j ms-hasp-venus
To run the ha_ip_config script on the primary node and provide the logical IP address:
# /opt/venus/SUNWmsgsr/sbin/ha_ip_config
To create the HA Messaging Server Resource and enable it:
# scrgadm -a -j ms-rs-venus -t SUNW.ims -g MS_RG_VENUS \
  -x IMS_serverroot=/opt/venus/SUNWmsgsr \
  -y Resource_dependencies=pasture,ms-hasp-venus
# scswitch -e -j ms-rs-venus
To fail over the resource group to the secondary node (mars):
# scswitch -z -g MS_RG_VENUS -h mars
To run the useconfig utility on the secondary node (mars):
# useconfig /var/opt/venus/SUNWmsgsr/setup/configure_20061201124116
To verify that everything is working, fail over the Messaging Server resource group back to the primary node:
# scswitch -z -g MS_RG_VENUS -h venus