Sun Java(TM) System Directory Server 5 2004Q2 Installation and Migration Guide 

Appendix A  
Installing Sun Cluster HA for Directory Server

Currently, the only supported clustering technology for Directory Server is Sun Cluster 3.1, using the packaged versions of the products. Clustering is not supported for installations using compressed archive deliveries.

This appendix describes how to install and configure both the Sun Cluster HA for Directory Server data service and the associated Administration Server data service. This appendix also covers use of and upgrade to HA Storage Plus, which is currently the recommended storage resource type for use with Sun Cluster HA for Directory Server. Refer to the Sun Cluster 3.1 product documentation for Sun Cluster installation instructions and key concepts.

You must configure the data services as failover services.

The following sections comprise this appendix:


Before You Start

Use this section in conjunction with the worksheets in the Sun Cluster 3.1 Release Notes as a checklist before performing installation and configuration.

Prior to starting your installation, review the following requirements.

Table A-1 summarizes the Sun Cluster HA for Directory Server installation and configuration process.

Table A-1  Installation and Configuration Process

Task: Setting Up Network and File System Resources and Installing the Servers

What you should know:

  • The names of the cluster nodes that can master the data services.
  • The logical host name to be used by clients accessing Directory Server, such as ds.example.com. Refer to the Sun Cluster 3.1 product documentation for instructions on setting up a logical host name.
  • The ServerRoot location on the failover file system, such as /shared/ds, where you place Directory Server data. This file system must be on a shared partition on shared disks. Note that packages must be installed on local file systems.

Task: Installing the Data Service Packages

What you should know:

  • The SUNWdsha and SUNWasha packages provide the management interface for the data services, so you can manage Directory Server and Administration Server with the same tools as other data services in the cluster.

Task: Configuring the Data Service

What you should know:

  • The resource type names: SUNW.dsldap for the Directory Server data service and SUNW.mps for the Administration Server data service.
  • The names of the cluster nodes that can master the data services.
  • The logical host name used by clients accessing Directory Server and Administration Server.
  • The ServerRoot location on the file system where you install Directory Server data.
  • The port on which Directory Server listens for client requests.
  • The port on which Administration Server listens for client requests.
  • The resource names defined in Setting Up Network and File System Resources.


Setting Up Network and File System Resources

Sun Cluster software manages one or more logical host names that differ both from node names and from host names for individual network interfaces. A clustered Directory Server instance typically relies on a single logical host name. Figure A-1 shows how a logical host name, managed by a two-node cluster, is not permanently associated with either of the nodes.

Figure A-1  Cluster with Two Nodes

Directory Server relies on the logical host name.

When installing the Sun Cluster HA for Directory Server data service, you configure Directory Server and Administration Server to listen on the logical host name interface so they are not tied to any particular node in the cluster, and the Sun Cluster software can manage failover. In Figure A-1, the nodes are named foo and bar, but the logical host name you use during installation is ds.example.com, not foo or bar. Notice that the logical host name is a fully qualified domain name.

Refer to the Sun Cluster 3.1 product documentation for more information on these key concepts and for instructions on setting up a logical host name.

Sun Cluster software can also manage failover for the file system resources. When you use HAStoragePlus with the Sun Cluster HA for Directory Server data service, you enable this capability. If you have used HAStorage previously, and intend to migrate from HAStorage with a global file system to HAStoragePlus and a higher performance failover file system, follow the links in Chapter 3, "Finding Patch Update Instructions" to find upgrade instructions.

After setting up a logical host name and shared file system, perform the following steps:

  1. Become superuser on a node in the cluster.

  2. Verify that all network addresses you use have been added to the name service database.

  3. To avoid failures during name service lookup, also ensure that the logical host name, its fully qualified form, and all shared IP addresses are present in the /etc/hosts file on each cluster node. For example, /etc/hosts might contain the following line:

    192.168.0.99    ds    ds.example.com

    Also configure name service mapping in /etc/nsswitch.conf on each cluster node to check local files first before trying to access other name services.

  4. Create a failover resource group to hold network and application resources:

    # scrgadm -a -g resource-group [-h node-list]

    Here resource-group specifies the name of the group.

    The optional node-list is a comma-separated list of physical node names or IDs identifying potential master nodes for the cluster. The order of the node names determines the order in which the nodes are considered as primary during failover. If all nodes in the cluster are potential masters, you do not need to specify node-list.

  5. Add logical host name resources to the resource group:

    # scrgadm -a -L -g resource-group -l logical-host-name [-n netif-list]

    Here, the optional netif-list is a comma-separated list identifying the NAFO groups on each node. If you do not specify this option, scrgadm(1M) attempts to discover a network adapter on the subnet used by each specified logical host name, on each node in the node-list specified in Step 4.

  6. Register the resource type for HAStoragePlus:

    # scrgadm -a -t SUNW.HAStoragePlus

    Both server data services depend on HAStoragePlus to access data on the shared file system. For more information about HAStoragePlus, follow the links in Chapter 3, "Finding Patch Update Instructions".

  7. Add the HAStoragePlus resource to the failover resource group created in Step 4:

    # scrgadm -a -j HAStoragePlus-resource-name -g resource-group \
    -t SUNW.HAStoragePlus -x FilesystemMountPoints=mount-point \
    -x AffinityOn=TRUE

    Here, you provide a new HAStoragePlus-resource-name to identify the resource. mount-point specifies the file system mount point for the ServerRoot directory, as shown, for example, in the output of the df(1) command.

  8. Enable the resource group and bring it online:

    # scswitch -Z -g resource-group

With the resource group online, you can install the servers.
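The /etc/hosts verification in Step 3 lends itself to a small script. The sketch below is not part of Sun Cluster; check_hosts_entries is a hypothetical helper, and the sample file and names are illustrative only — substitute your own logical host name and shared addresses, and point the helper at /etc/hosts on each node:

```shell
# Sketch: check that required names appear in a hosts file.
# check_hosts_entries is a hypothetical helper, not a Sun Cluster
# command; substitute your logical host name and shared addresses.
check_hosts_entries() {
    hosts_file=$1; shift
    missing=0
    for name in "$@"; do
        # -w matches whole words only, so "ds" does not match "dsadmin"
        if ! grep -qw "$name" "$hosts_file"; then
            echo "missing from $hosts_file: $name"
            missing=1
        fi
    done
    return $missing
}

# Example run against a sample file (use /etc/hosts on each node).
printf '192.168.0.99\tds\tds.example.com\n' > /tmp/hosts.sample
check_hosts_entries /tmp/hosts.sample ds ds.example.com 192.168.0.99 \
    && echo "all required entries present"
```

Running the check on every cluster node before you create resources avoids the name service lookup failures described above.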


Installing the Servers

In Sun Cluster HA for Directory Server, both Directory Server and Administration Server run under the control of Sun Cluster. This means that instead of supplying the servers with a fully qualified domain name for the physical node during installation, you provide a fully qualified logical host name that can fail over to a different node.

You perform the installation first on the node that is online for the logical host name used by directory client applications, and then repeat the process on each of the other cluster nodes that you want to master the Directory Server data service.


Note

You install product packages on a node’s local file system so that each node can be patched separately, but you place directory data on the shared cluster file system so that data does not depend on a particular node. This means you can, for example, patch an idle node while another node is providing the directory service.


Installing on the Active Node

For the cluster node that is online for the logical host name used by directory client applications:

  1. Install the Solaris packages for both Directory Server and Administration Server on the active node's local file system.

    Refer to the links in Chapter 1, "Finding Installation Instructions" to determine how to find and install the packages without configuring them.

  2. Make sure the current node is the active node:

    # scswitch -z -g resource-group -h current-node

  3. Configure Directory Server:

    # /usr/sbin/directoryserver -u 5.2 configure

    When performing this step:

    • Place the Directory Server instance, which includes Directory Server data, on the shared cluster file system.
    • Use the logical host name, not the node name.

  4. Configure Administration Server, using the same logical host name used to configure Directory Server:

    # /usr/sbin/mpsadmserver configure

  5. Stop the servers:

    # /usr/sbin/directoryserver -u 5.2 stop
    # /usr/sbin/mpsadmserver stop

  6. If you use Directory Server in secure mode, create an empty file named ServerRoot/slapd-serverID/keypass to indicate to the cluster that the Directory Server instance runs in secure mode.

    Also create a ServerRoot/alias/slapd-serverID-pin.txt file, containing the password required to start the instance automatically in secure mode. This allows the cluster to restart the data service without human intervention.
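The secure-mode files described in the preceding steps can be created as in the following sketch. SERVER_ROOT and SERVER_ID are placeholders (a /tmp path is used only so the sketch runs anywhere), and the token-name:password line is an assumed pin.txt format — verify the exact syntax against the documentation for your Directory Server release:

```shell
# Sketch: create the secure-mode files for a clustered instance.
# SERVER_ROOT and SERVER_ID are placeholders; /tmp is used here only
# so the sketch runs anywhere -- use your shared file system path.
SERVER_ROOT=/tmp/shared/ds
SERVER_ID=ds-1
mkdir -p "$SERVER_ROOT/slapd-$SERVER_ID" "$SERVER_ROOT/alias"

# Empty keypass file marks the instance as running in secure mode.
: > "$SERVER_ROOT/slapd-$SERVER_ID/keypass"

# pin.txt lets the cluster restart the instance without prompting.
# The token-name:password line is an assumed format -- verify the
# exact syntax for your Directory Server release.
printf 'Internal (Software) Token:password\n' \
    > "$SERVER_ROOT/alias/slapd-$SERVER_ID-pin.txt"

# The pin file holds a clear-text password, so restrict access.
chmod 400 "$SERVER_ROOT/alias/slapd-$SERVER_ID-pin.txt"
```

Because both files live under ServerRoot on the shared file system, you create them once, not on every node.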

Installing on Other Nodes

Repeat Step 2 through Step 4 for each node you want to master the Directory Server data service using exactly the same configuration data as you used for the first node.
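The per-node repetition can be driven by a short loop that prints the switch command for each remaining node. This is a dry-run sketch only — the resource group and node names are the placeholders used earlier in this appendix, and the configuration commands themselves still run on each node after the switch:

```shell
# Dry-run sketch: print the command that makes each remaining node
# the active node before you repeat the configuration steps there.
# RG and the node name list are placeholders for your cluster.
RG=ds-resource-group-1
CMDS=$(for node in bar; do
    echo "scswitch -z -g $RG -h $node"
done)
echo "$CMDS"
```

Review the printed commands, then run each one as superuser before configuring the servers on that node.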


Note

Do not remove or relocate any files placed on the shared file system.



Installing the Data Service Packages

The data service packages, SUNWdsha and SUNWasha, located on the product media under Solaris_arch/Product/sun_cluster_agents/Solaris_version/Packages/, provide the management interfaces for administering the servers as data services within the cluster.
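For example, on a SPARC system running Solaris 9 with media mounted at /cdrom/cdrom0, installation might look like the sketch below. The mount point, architecture directory, and Solaris version directory are placeholders — adjust them to your media layout:

```shell
# Sketch: assemble the pkgadd command for the data service packages.
# PKGDIR is a placeholder path; adjust the media mount point, the
# Solaris_arch directory, and the Solaris_version directory to match
# your media. Run the printed command as superuser on every node.
PKGDIR=/cdrom/cdrom0/Solaris_sparc/Product/sun_cluster_agents/Solaris_9/Packages
CMD="pkgadd -d $PKGDIR SUNWdsha SUNWasha"
echo "$CMD"
```

Remember that the packages go on each node's local file system, not on the shared file system.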


Configuring the Data Service

Perform the following steps only on the cluster node that is online for the logical host name in use by Directory Server:

  1. Become superuser.

  2. If the servers are not already stopped, stop Directory Server and Administration Server from the active node:

    # /usr/sbin/directoryserver -u 5.2 stop
    # /usr/sbin/mpsadmserver stop

  3. Register the resource types for both data services:

    # scrgadm -a -t SUNW.dsldap
    # scrgadm -a -t SUNW.mps

    Here SUNW.dsldap and SUNW.mps are the predefined resource type names that define the data services.

  4. Add the server resources to the failover resource group created in Setting Up Network and File System Resources:

    # scrgadm -a -j resource-name-ds -g resource-group -t SUNW.dsldap \
    -y Network_resources_used=logical-host-name \
    -y Port_list=port-number/tcp \
    -x Confdir_list=ServerRoot/slapd-serverID \
    -y Resource_dependencies=HAStoragePlus-resource-name

    # scrgadm -a -j resource-name-as -g resource-group -t SUNW.mps \
    -y Network_resources_used=logical-host-name \
    -y Port_list=port-number/tcp \
    -x Confdir_list=ServerRoot \
    -y Resource_dependencies=HAStoragePlus-resource-name

    Here you provide a new resource-name-ds to identify the Directory Server instance and a new resource-name-as to identify the Administration Server instance. HAStoragePlus-resource-name is the name of the HAStoragePlus resource on which the servers depend.

    The resource-group parameter is the name of the group specified in Setting Up Network and File System Resources.

    The logical-host-name identifies the logical host name used for the current Directory Server instance.

    The port-number is the number of the port on which each server instance listens for client requests, specified in Installing the Servers. Notice that the Port_list parameter of each command takes only one entry.

    ServerRoot and ServerRoot/slapd-serverID are paths specified in Installing the Servers. Notice that the Confdir_list parameter of each command takes only one entry.

  5. Enable the server resources and monitors:

    # scswitch -e -j resource-name-ds
    # scswitch -e -j resource-name-as

    Here resource-name-ds and resource-name-as are the names you provided to identify the servers in Step 4.


Example Registration and Configuration

Code Example A-1 shows how you might register and configure the data service for the cluster illustrated in Figure A-1.

Code Example A-1  Registering and Configuring the Data Service 

(Create a failover resource group on the node that is online.)

# scrgadm -a -g ds-resource-group-1 -h foo,bar

(Add a logical hostname resource to the resource group.)

# scrgadm -a -L -g ds-resource-group-1 -l ds

(Register the SUNW.HAStoragePlus resource type.)

# scrgadm -a -t SUNW.HAStoragePlus

(Add the HAStoragePlus resource to the resource group.)

# scrgadm -a -j hasp-resource -g ds-resource-group-1 -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/shared/ds -x AffinityOn=TRUE

(Enable the HAStoragePlus resource.)

# scswitch -e -j hasp-resource

(Bring the resource group online.)

# scswitch -Z -g ds-resource-group-1

(Install packages on each node in the cluster.)

(Stop the servers on the node that is online.)

# /usr/sbin/directoryserver -u 5.2 stop

# /usr/sbin/mpsadmserver stop

(Register the SUNW.dsldap and SUNW.mps resource types.)

# scrgadm -a -t SUNW.dsldap

# scrgadm -a -t SUNW.mps

(Create resources for the servers and add them to the resource group.)

# scrgadm -a -j ds-1 -g ds-resource-group-1 \
-t SUNW.dsldap -y Network_resources_used=ds \
-y Port_list=389/tcp -y Resource_dependencies=hasp-resource \
-x Confdir_list=/shared/ds/slapd-ds-1

# scrgadm -a -j as-1 -g ds-resource-group-1 \
-t SUNW.mps -y Network_resources_used=ds \
-y Port_list=5201/tcp -y Resource_dependencies=hasp-resource \
-x Confdir_list=/shared/ds

(Enable the application resources.)

# scswitch -e -j ds-1

# scswitch -e -j as-1


Configuring Extension Properties

You can optionally configure extension properties that control how the cluster software handles the application software. For example, you can adjust how the cluster determines when the data service must fail over.

What You Can Configure

You can configure both standard properties and extension properties. This section covers extension properties specific to management of Directory Server.

You typically configure resource extension properties using the Cluster Module of the Sun Management Center, or using the scrgadm utility. You can change the extension properties listed in Table A-2 using the scrgadm utility with the -x parameter=value option.
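For example, to lengthen the probe timeout on an existing resource, you might run a command like the one this sketch prints. The resource name ds-1 and the 60-second value are illustrative, not values from this guide:

```shell
# Dry-run sketch: print a scrgadm command that changes an extension
# property on an existing resource. RESOURCE and the value are
# placeholders; run the printed command as superuser to apply it.
RESOURCE=ds-1
CMD="scrgadm -c -j $RESOURCE -x Probe_timeout=60"
echo "$CMD"
```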

Table A-2  SUNW.dsldap Resource Extension Properties

Monitor_retry_count

  Description: Integer value indicating the number of times the process monitor facility (PMF) restarts the fault monitor during the time window specified by the value of Monitor_retry_interval.

  Default: 4 attempts

  Range: -1 to 2,147,483,641 attempts; -1 means retry forever.

Monitor_retry_interval

  Description: Integer value indicating the time in minutes over which failures of the fault monitor are counted. If the number of times the fault monitor fails exceeds the value specified in Monitor_retry_count within this period, the PMF cannot restart the fault monitor.

  Default: 2 minutes

  Range: -1 to 2,147,483,641 minutes; -1 specifies an infinite retry interval.

Probe_timeout

  Description: Integer value indicating the timeout value in seconds that the fault monitor uses to probe a Directory Server instance.

  Default: 30 seconds

  Range: 0 to 2,147,483,641 seconds

Refer to the Sun Cluster 3.1 product documentation for more information on Sun Cluster properties.
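Before applying a value with scrgadm, you can sanity-check it against the ranges in Table A-2. The helper functions below are hypothetical, not part of Sun Cluster; they only validate numbers locally:

```shell
# Sketch: validate extension property values against Table A-2.
# These helpers are hypothetical, not Sun Cluster commands.
valid_probe_timeout() {
    # Probe_timeout: 0 to 2,147,483,641 seconds
    [ "$1" -ge 0 ] && [ "$1" -le 2147483641 ]
}
valid_monitor_retry() {
    # Monitor_retry_count / Monitor_retry_interval:
    # -1 (infinite) to 2,147,483,641
    [ "$1" -ge -1 ] && [ "$1" -le 2147483641 ]
}

valid_probe_timeout 60 && echo "Probe_timeout=60 is in range"
valid_monitor_retry -1 && echo "Monitor_retry_count=-1 is in range"
```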

How the Fault Monitor Operates

The cluster software determines whether the data service is healthy using a fault monitor. The fault monitor probes the data service, and then determines whether the service is healthy or must be restarted based on the results of the probe.

Table A-3  How the Fault Monitor Interprets Probes

Normal mode (probe: ldapsearch)

1.  Attempt a search.

2.  If the search operation results in:

  • LDAP_SUCCESS, then the service is considered healthy.
  • An LDAP error, then based on failure history the service is either restarted or fails over to another node.
  • A problem other than timeout, then the fault monitor probes again depending on Monitor_retry_count and Monitor_retry_interval.
  • The Probe_timeout duration being exceeded, then the fault monitor probes again depending on Monitor_retry_count and Monitor_retry_interval.

Potential causes of timeout include heavy loads on the system, network, or Directory Server instance. Timeout may also indicate that the Probe_timeout value is set too low for the number of Directory Server instances monitored.

Secure mode, SSL (probe: TCP connect)

1.  Attempt to connect.

2.  If the connection operation:

  • Succeeds, then the service is considered healthy.
  • Fails, then based on failure history the service is either restarted or fails over to another node.
  • Exceeds Probe_timeout, then the service must be restarted.

The fault monitor uses the IP addresses and port numbers you specified in Configuring the Data Service to carry out probe operations. If Directory Server is configured to listen on two ports, one for SSL traffic and one for normal traffic, the fault monitor probes both ports using TCP connect, following the fault monitoring algorithm used for secure-mode ports.
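When diagnosing a suspect instance by hand, you can approximate the normal-mode probe with an anonymous base search on the root DSE, which returns LDAP_SUCCESS when the instance is healthy. This sketch only prints the command; the host and port are placeholders, and the flags follow common ldapsearch clients — adjust them for the ldapsearch delivered with your tools:

```shell
# Dry-run sketch: print a manual probe similar to the fault
# monitor's normal-mode check. Host and port are placeholders; the
# flags follow common ldapsearch clients -- adjust for your tools.
HOST=ds.example.com
PORT=389
CMD="ldapsearch -h $HOST -p $PORT -s base -b \"\" \"(objectclass=*)\""
echo "$CMD"
```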


Creating an Additional Directory Server Instance

Perform the following steps:

  1. Create an additional Directory Server instance using the Sun Java System Server Console.

    Refer to the Administration Server Administration Guide for instructions.

  2. Stop the new Directory Server instance on the node that is online for the logical host name in use by the data service:

    # /usr/sbin/directoryserver -u 5.2 -server serverID stop

  3. Add the Directory Server instance to the failover resource group created in Setting Up Network and File System Resources:

    # scrgadm -a -j new-ds-resource -g resource-group -t SUNW.dsldap \
    -y Network_resources_used=logical-host-name \
    -y Port_list=port-number/tcp -x Confdir_list=ServerRoot/slapd-serverID \
    -y Resource_dependencies=HAStoragePlus-resource-name

    Here you provide a new-ds-resource name to identify the Directory Server instance.

    The resource-group parameter is the name of the group specified in Setting Up Network and File System Resources.

    The logical-host-name identifies the logical host name used for the instance.

    The port-number is the number of the port on which the instance listens for client requests, specified in Installing the Servers. Notice that the Port_list parameter takes only one entry.

    ServerRoot and ServerRoot/slapd-serverID are paths specified in Installing the Servers. Notice that the Confdir_list parameter takes only one entry.

    HAStoragePlus-resource-name is the name of the HAStoragePlus resource on which Directory Server depends.

  4. Enable the server resource and monitor:

    # scswitch -e -j new-ds-resource

    Here new-ds-resource is the name you provided to identify the Directory Server instance in Step 3.


Uninstalling

To remove Sun Cluster HA for Directory Server and the associated Administration Server from the cluster, perform the following steps:

  1. Stop the server instances. With a single Directory Server instance, for example:

    # scswitch -n -j resource-name-ds
    # scswitch -n -j resource-name-as

  2. Remove the resources. For example:

    # scrgadm -r -j resource-name-ds
    # scrgadm -r -j resource-name-as

  3. Remove the resource types from the cluster database:

    # scrgadm -r -t SUNW.dsldap
    # scrgadm -r -t SUNW.mps

  4. Delete the server configurations:

    a. On each node, unconfigure the Administration Server:

      # /usr/sbin/mpsadmserver unconfigure

    b. Because Directory Server data and configuration are stored on the shared file system, run the following on any one of the nodes where it is configured:

      # /usr/sbin/directoryserver -u 5.2 unconfigure

  5. Remove the installed packages, including SUNWdsha and SUNWasha, from each node using the pkgrm(1M) utility.
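The package removal can be sketched as a dry run that prints the command to run as superuser on each node. Extend the package list with anything else you installed for Directory Server:

```shell
# Dry-run sketch: print the pkgrm command for the data service
# packages. Extend the list with any other packages you installed.
CMD="pkgrm SUNWdsha SUNWasha"
echo "$CMD"
```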





Copyright 2004 Sun Microsystems, Inc. All rights reserved.