Sun ONE Directory Server 5.2 Installation and Tuning Guide



Appendix C      Installing Sun Cluster HA for Directory Server

This appendix describes how to install and configure both the Sun Cluster HA for Directory Server data service and the associated Administration Server data service. Refer to the Sun Cluster 3.0 product documentation for Sun Cluster installation instructions and key concepts.

You must configure the data services as failover services.

Before You Start

Use this section in conjunction with the worksheets in the Sun Cluster 3.0 Release Notes as a checklist before performing installation and configuration.

Prior to starting your installation, consider these questions.

  • Do you plan to run multiple Directory Server instances on the same node?
  • If so, you may choose to set nsslapd-listenhost on cn=config to the appropriate network resource (a logical host name, such as dirserv.example.com) for each instance. By default, Directory Server listens on all network interfaces.

  • Do you run multiple data services in your Sun Cluster configuration?
  • You may set up multiple data services in any order, with one exception: If you use Sun Cluster HA for DNS, you must set it up before setting up Sun Cluster HA for Directory Server.
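If you do restrict an instance to its logical host name, the change amounts to replacing nsslapd-listenhost on cn=config. The sketch below only prints the LDIF; the host name dirserv.example.com is a placeholder, and piping the output to an LDAP client (and restarting the instance) is left to you.

```shell
#!/bin/sh
# Print the LDIF that restricts a Directory Server instance to its
# logical host name. The host name passed in is a placeholder; feed the
# output to your LDAP client of choice (for example, ldapmodify bound as
# cn=Directory Manager), then restart the instance.
listenhost_ldif() {
    cat <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-listenhost
nsslapd-listenhost: $1
EOF
}

listenhost_ldif dirserv.example.com
```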

Table C-1 summarizes the Sun Cluster HA for Directory Server installation and configuration process.

Table C-1    Installation and Configuration Process

Task: "Setting Up Network Resources"
What you should know:

  • The names of the cluster nodes that can master the data services.
  • The logical host names to be used by clients accessing Directory Server, such as ds1.example.com and ds2.example.com.
  • Refer to the Sun Cluster 3.0 product documentation for instructions on setting up logical host names.

Task: "Installing the Servers"
What you should know:

  • The ServerRoot location on the global file system, such as /global/ds, where you install Directory Server.
  • Installation details, summarized in Table 1-2.

Task: "Installing the Data Service Packages"
What you should know:

  • The SUNWdsha and SUNWasha packages provide the management interface for the data services, so you can manage Directory Server and Administration Server with the same tools as other data services in the cluster.

Task: "Configuring the Servers"
What you should know:

  • The resource type names for the Directory Server data service, SUNW.dsldap, and for the Administration Server data service, SUNW.mps.
  • The names of the cluster nodes that can master the data services.
  • The logical host names used by clients accessing Directory Server and Administration Server.
  • The ServerRoot location on the global file system where you install Directory Server.
  • The ports on which Directory Server and Administration Server listen for client requests.
  • The name of the resource group defined in "Setting Up Network Resources".

Task: "Configuring Extension Properties"
What you should know:

  • Refer to the section itself for details.

Setting Up Network Resources

Sun Cluster software manages logical host names that differ both from node names and from host names for individual network interfaces. Figure C-1 shows how logical host names, managed by a two-node cluster, are not permanently associated with either of the nodes.

Figure C-1    Cluster with Two Nodes
Directory Server relies on the logical host name.

When installing the Sun Cluster HA for Directory Server data service, you configure Directory Server and Administration Server to listen on the logical host name interface. The servers are thus not tied to any particular node in the cluster, and the Sun Cluster software can manage failover. In Figure C-1, the nodes are named foo and bar, but the logical host names you use during installation are ds-1.example.com and ds-2.example.com, not foo and bar. Notice that the logical host names are fully qualified domain names.

Refer to the Sun Cluster 3.0 product documentation for more information on these key concepts and for instructions on setting up logical host names.

After setting up logical host names, perform the following steps:

  1. Become superuser on a node in the cluster.
  2. Verify that all network addresses you use have been added to the name service database.

    To avoid failures during name service lookup, also ensure that all fully qualified domain names, fully qualified logical host names, and shared IP addresses are present in the /etc/hosts file on each cluster node. Configure name service mapping in /etc/nsswitch.conf on each cluster node to check local files before trying other name services.

  3. Create a failover resource group to hold network and application resources. For example:

    # scrgadm -a -g resource-group [-h node-list]

    Here resource-group specifies the name of the group.

    The optional node-list is a comma-separated list of physical node names or IDs identifying potential master nodes for the cluster. The order of the node names determines the order in which the nodes are considered primary during failover. If all nodes in the cluster are potential masters, you need not specify node-list.

  4. Add logical host name resources to the resource group.

    # scrgadm -a -L -g resource-group -l logical-host-names [-n netif-list]

    Here logical-host-names is a comma-separated list of fully qualified domain names used as logical host names. Use one logical host name per Directory Server instance.

    The optional netif-list is a comma-separated list identifying the NAFO groups on each node. If you do not specify this option, scrgadm(1M) attempts to discover a network adapter on the subnet used by each specified logical host name, on each node in the node-list specified in Step 3.

  5. Verify that all fully qualified domain names specified as logical host names in Step 4 have been added to the name service database.
  6. Enable the resource group and bring it online.

    # scswitch -Z -g resource-group

With the resource group online, you may install the servers.
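Before relying on the name service setup described above, it can help to confirm that every logical host name actually resolves on each node. The following sketch assumes getent is available; the host names passed in are examples only.

```shell
#!/bin/sh
# Pre-flight check for the name-service steps above. Run on each cluster
# node. The logical host names given to check_lookup are placeholders.
check_lookup() {
    for name in "$@"; do
        if getent hosts "$name" >/dev/null 2>&1; then
            echo "ok: $name"
        else
            echo "MISSING: $name (add it to /etc/hosts on every node)"
        fi
    done
    # The hosts line should consult local files before other services.
    [ -f /etc/nsswitch.conf ] && grep '^hosts:' /etc/nsswitch.conf
}

check_lookup ds-1.example.com ds-2.example.com
```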

Installing the Servers

In Sun Cluster HA for Directory Server, both Directory Server and Administration Server run under the control of Sun Cluster. This means that instead of supplying the servers with a fully qualified domain name for the physical node during installation, you provide a fully qualified logical host name that can fail over to a different node.

You perform installation starting with the node online for the logical host name used by directory client applications, then repeating the process for all other cluster nodes that you want to master the Directory Server data service.

Installing on the Active Node

For the cluster node that is online for the logical host name used by directory client applications:

  1. Install the Solaris packages for both Directory Server and Administration Server, referring to "Installing Solaris Packages" for instructions.
  2. Configure Directory Server, referring to "Configuring Directory Server" for instructions. When performing this step:

    • Place the Directory Server instance on the global cluster file system.
    • Use the logical host name, not the node name.

  3. Configure Administration Server, referring to "Configuring Administration Server" for instructions and using the same logical host name used to configure Directory Server.
  4. When using Directory Server in secure mode only, create an empty file named ServerRoot/slapd-serverID/keypass to indicate to the cluster that the Directory Server instance runs in secure mode. Also create a ServerRoot/alias/slapd-serverID-pin.txt file containing the password required to start the instance automatically in secure mode. This allows the cluster to restart the data service without human intervention.
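A minimal sketch of creating the secure-mode files follows. The ServerRoot, serverID, token name, and password below are all placeholders; substitute the values from your own installation (for example, a ServerRoot of /global/ds).

```shell
#!/bin/sh
# Create the secure-mode marker and PIN files described above.
# SERVER_ROOT and SERVER_ID are placeholders; point SERVER_ROOT at your
# real ServerRoot on the global file system.
SERVER_ROOT=${SERVER_ROOT:-/tmp/ds-demo}
SERVER_ID=${SERVER_ID:-ds-1}

mkdir -p "$SERVER_ROOT/slapd-$SERVER_ID" "$SERVER_ROOT/alias"

# An empty keypass file tells the cluster this instance runs in secure mode.
: > "$SERVER_ROOT/slapd-$SERVER_ID/keypass"

# The PIN file lets the cluster restart the instance without prompting.
# The token name and password here are illustrative only.
printf 'Internal (Software) Token:secret-password\n' \
    > "$SERVER_ROOT/alias/slapd-$SERVER_ID-pin.txt"
chmod 400 "$SERVER_ROOT/alias/slapd-$SERVER_ID-pin.txt"
```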

Installing on Other Nodes

For each node you want to master the Directory Server data service:

  1. Install the Solaris packages for both Directory Server and Administration Server, referring to "Installing Solaris Packages" for instructions.
  2. Configure Directory Server using settings identical to those provided when "Installing on the Active Node".
  3. Configure Administration Server using settings identical to those provided when "Installing on the Active Node".
  4. Copy ServerRoot/alias/slapd-serverID-pin.txt from the first node to ServerRoot/alias/.

    Note

    Do not remove or relocate any files placed on the global file system.



Installing the Data Service Packages

The data service packages, SUNWdsha and SUNWasha, provide the management interfaces for administering the servers as data services within the cluster.

  • On each cluster node that you want to support the Directory Server data service, use the pkgadd(1M) utility to install the data service packages:

    # pkgadd -d dirContainingPackages SUNWasha SUNWdsha

Configuring the Servers

Perform the following steps only on the cluster node that is online for the logical host name in use by Directory Server:

  1. Become superuser.
  2. Stop Directory Server and Administration Server.

    # /usr/sbin/directoryserver stop
    # /usr/sbin/mpsadmserver stop

  3. Register the resource types for both data services.

    # scrgadm -a -t SUNW.dsldap -f /etc/ds/v5.2/cluster/SUNW.dsldap
    # scrgadm -a -t SUNW.mps -f /etc/mps/admin/v5.2/cluster/SUNW.mps

    Here SUNW.dsldap and SUNW.mps are the predefined resource type names for the data services. The files /etc/ds/v5.2/cluster/SUNW.dsldap and /etc/mps/admin/v5.2/cluster/SUNW.mps define the data services.

  4. Add the servers to the failover resource group created in "Setting Up Network Resources".

    # scrgadm -a -j resource-name-ds -g resource-group -t SUNW.dsldap \
    -y Network_resources_used=logical-host-name \
    -y Port_list=port-number/tcp \
    -x Confdir_list=ServerRoot/slapd-serverID

    # scrgadm -a -j resource-name-as -g resource-group -t SUNW.mps \
    -y Network_resources_used=logical-host-name \
    -y Port_list=port-number/tcp \
    -x Confdir_list=ServerRoot

    Here you provide a new resource-name-ds to identify the Directory Server instance, and a new resource-name-as to identify the Administration Server instance.

    The resource-group parameter is the name of the group specified in "Setting Up Network Resources".

    The logical-host-name identifies the logical host name used for the current Directory Server instance.

    The port-number values are the ports on which the server instances listen for client requests, specified in "Installing the Servers". Notice that the Port_list parameter of each command takes only one entry.

    ServerRoot and ServerRoot/slapd-serverID are paths specified in "Installing the Servers". Notice that the Confdir_list parameter of each command takes only one entry.

  5. Enable the server resources and monitors.

    # scswitch -e -j resource-name-ds
    # scswitch -e -j resource-name-as

    Here resource-name-ds and resource-name-as are the names you provided to identify the servers in Step 4.

    Note

    After configuring the servers, do not run backup and restore commands such as db2bak, db2ldif, bak2db, and ldif2db on an inactive node of the cluster. Instead, perform all backup and restore procedures on the active node.

  6. Consider performing the steps in "Synchronizing HA Storage and Data Services" to improve performance on failover.

Example Registration and Configuration

Code Example C-1 shows how you might register and configure the data service for the cluster illustrated in Figure C-1.



Code Example C-1    Registering and Configuring the Data Service 

(Create a failover resource group on the node that is online.)
# scrgadm -a -g ds-resource-group-1 -h foo,bar

(Add a logical hostname resource to the resource group.)
# scrgadm -a -L -g ds-resource-group-1 -l ds-1.example.com

(Bring the resource group online.)
# scswitch -Z -g ds-resource-group-1

(Install packages on each node in the cluster.)

(Stop the servers on the node that is online.)
# /usr/sbin/directoryserver stop
# /usr/sbin/mpsadmserver stop

(Register the SUNW.dsldap and SUNW.mps resource types.)
# scrgadm -a -t SUNW.dsldap -f /etc/ds/v5.2/cluster/SUNW.dsldap
# scrgadm -a -t SUNW.mps -f /etc/mps/admin/v5.2/cluster/SUNW.mps

(Create resources for the servers and add them to the resource group.)
# scrgadm -a -j ds-1 -g ds-resource-group-1 \
-t SUNW.dsldap -y Network_resources_used=ds-1.example.com \
-y Port_list=389/tcp \
-x Confdir_list=/global/ds/slapd-ds-1
# scrgadm -a -j as-1 -g ds-resource-group-1 \
-t SUNW.mps -y Network_resources_used=ds-1.example.com \
-y Port_list=5201/tcp \
-x Confdir_list=/global/ds

(Enable the application resources.)
# scswitch -e -j ds-1
# scswitch -e -j as-1


Configuring Extension Properties

Extension properties allow you to configure how the cluster software handles the application software. For example, you can adjust how the cluster determines when the data service must fail over.

What You Can Configure

You typically configure resource extension properties using the Cluster Module of the Sun Management Center, or using the scrgadm utility. You can change the extension properties listed in Table C-2 using the scrgadm utility with the -x parameter=value option.

Table C-2    SUNW.dsldap Resource Extension Properties 

Property: Monitor_retry_count
Description: Integer value indicating the number of times the process monitor facility (PMF) restarts the fault monitor during the time window specified by the value of Monitor_retry_interval.
Default: 4 attempts
Range: -1 to 2,147,483,641 attempts; -1 means retry forever.

Property: Monitor_retry_interval
Description: Integer value indicating the time in minutes over which failures of the fault monitor are counted. If the number of times the fault monitor fails exceeds the value specified in Monitor_retry_count within this period, the PMF cannot restart the fault monitor.
Default: 2 minutes
Range: -1 to 2,147,483,641 minutes; -1 specifies an infinite retry interval.

Property: Probe_timeout
Description: Integer value indicating the timeout value in seconds that the fault monitor uses to probe a Directory Server instance.
Default: 30 seconds
Range: 0 to 2,147,483,641 seconds

Refer to the Sun Cluster 3.0 product documentation for more information on Sun Cluster properties.
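As a concrete illustration of the -x parameter=value form, the sketch below only builds and prints the scrgadm invocation; the resource name ds-1 and the new timeout value are hypothetical. On a cluster node you would execute the printed command rather than echo it.

```shell
#!/bin/sh
# Build (and here, merely print) a scrgadm command that raises
# Probe_timeout on an existing resource. Resource name and value are
# placeholders; this is a dry run, not a live change.
set_probe_timeout() {
    printf 'scrgadm -c -j %s -x Probe_timeout=%s\n' "$1" "$2"
}

set_probe_timeout ds-1 60
```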

How the Fault Monitor Operates

The cluster software determines whether the data service is healthy using a fault monitor. The fault monitor probes the data service, and then determines whether the service is healthy or must be restarted based on the results of the probe.

Table C-3    How the Fault Monitor Interprets Probes 

Directory Server running in normal mode — probe used: ldapsearch

  1. Attempt a search.
  2. If the search operation results in:

    • LDAP_SUCCESS, the service is considered healthy.
    • An LDAP error, the service must be restarted.
    • A problem other than timeout, the fault monitor probes again, depending on Monitor_retry_count and Monitor_retry_interval.
    • The Probe_timeout duration being exceeded, the fault monitor probes again, depending on Monitor_retry_count and Monitor_retry_interval.

  Potential causes of timeout include heavy loads on the system, network, or Directory Server instance. Timeout may also indicate that the Probe_timeout value is set too low for the number of Directory Server instances monitored.

Directory Server running in secure mode (SSL) — probe used: TCP connect

  1. Attempt to connect.
  2. If the connection operation:

    • Succeeds, the service is considered healthy.
    • Fails, the service must be restarted.
    • Exceeds Probe_timeout, the service must be restarted.
The fault monitor uses the IP addresses and port numbers you specified when "Configuring the Servers" to carry out probe operations. If Directory Server is configured to listen on two ports, one for SSL traffic and one for normal traffic, the fault monitor probes both ports using TCP connect, following the fault monitoring algorithm used for secure mode ports.
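The secure-mode probe amounts to a timed TCP connect. The sketch below imitates that check, assuming bash and the coreutils timeout command are available; the host, port, and timeout are placeholders, and this is an illustration of the idea rather than the cluster's own probe code.

```shell
#!/bin/bash
# Rough imitation of the secure-mode probe: a TCP connect that must
# complete within a timeout. Host, port, and timeout are placeholders.
probe_tcp() {
    # $1 = host, $2 = port, $3 = timeout in seconds
    timeout "$3" bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if probe_tcp ds-1.example.com 636 30; then
    echo "service considered healthy"
else
    echo "service must be restarted"
fi
```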

Synchronizing HA Storage and Data Services

The SUNW.HAStorage resource type synchronizes actions between HA storage and data services, permitting higher performance when a disk-intensive data service such as Directory Server undergoes fail over.

To synchronize a Directory Server data service with HA storage, complete the following steps on the node that is online for the logical host name in use by the data service:

  1. Register the HA storage resource type.

    # scrgadm -a -t SUNW.HAStorage

  2. Configure the storage resource to remain synchronized.

    # scrgadm -a -j HAStorage-resource-name -g HAStorage-resource-group \
    -t SUNW.HAStorage -x ServicePaths=volume-mount-point \
    -x AffinityOn=True

    Here volume-mount-point identifies the disk volume where Directory Server stores data.

  3. Enable the storage resource and monitors.

    # scswitch -e -j HAStorage-resource-name

  4. Add a dependency on the existing Directory Server resource.

    # scrgadm -c -j resource-name-ds \
    -y Resource_Dependencies=HAStorage-resource-name

Refer to SUNW.HAStorage(5) for background information, and to the Sun Cluster 3.0 product documentation for further instructions on setting up a SUNW.HAStorage resource type for new resources.

Creating an Additional Directory Server Instance

Perform the following steps:

  1. Create an additional Directory Server instance using the Sun ONE Server Console, referring to the Sun ONE Server Console Server Management Guide for instructions.
  2. Stop the new Directory Server instance on the node that is online for the logical host name in use by the data service.

    # /usr/sbin/directoryserver -server serverID stop

  3. Add the Directory Server to the failover resource group created in "Setting Up Network Resources".

    # scrgadm -a -j resource-name-ds -g resource-group -t SUNW.dsldap \
    -y Network_resources_used=logical-host-name \
    -y Port_list=port-number/tcp \
    -x Confdir_list=ServerRoot/slapd-serverID

    Here you provide a new resource-name-ds to identify the Directory Server instance.

    The resource-group parameter is the name of the group specified in "Setting Up Network Resources".

    The logical-host-name identifies the logical host name used for the instance.

    The port-number is the number of the port on which the instance listens for client requests, specified in "Installing the Servers". Notice that the Port_list parameter takes only one entry.

    ServerRoot and ServerRoot/slapd-serverID are paths specified in "Installing the Servers". Notice that the Confdir_list parameter takes only one entry.

  4. Enable the server resources and monitors.

    # scswitch -e -j resource-name-ds

    Here resource-name-ds is the name you provided to identify the Directory Server in Step 3.

Uninstalling

To remove Sun Cluster HA for Directory Server and the associated Administration Server from the cluster, perform the following steps:

  1. Stop the server instances.

    # scswitch -n -j resource-name-ds
    # scswitch -n -j resource-name-as

  2. Remove the resources.

    # scrgadm -r -j resource-name-ds
    # scrgadm -r -j resource-name-as

  3. Remove the resource types from the cluster database.

    # scrgadm -r -t SUNW.dsldap
    # scrgadm -r -t SUNW.mps

  4. Delete the server configurations.

    # /usr/sbin/mpsadmserver unconfigure
    # /usr/sbin/directoryserver unconfigure

  5. Remove the installed packages, including SUNWdsha and SUNWasha, from each node using the pkgrm(1M) utility.

Copyright 2003 Sun Microsystems, Inc. All rights reserved.