Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide

Chapter 3 Installing and Configuring Sun Cluster HA for iPlanet Web Server

This chapter provides the procedures to install and configure the Sun Cluster HA for iPlanet Web Server data service. This data service was formerly known as Sun Cluster HA for Netscape(TM) HTTP. Some error messages from the application might still use the name Netscape, but the messages refer to iPlanet Web Server.

This chapter contains the following procedures.

  • "How to Install an iPlanet Web Server"

  • "How to Configure an iPlanet Web Server"

  • "How to Install Sun Cluster HA for iPlanet Web Server Packages"

  • "How to Register and Configure Sun Cluster HA for iPlanet Web Server"

  • "How to Configure SUNW.HAStorage Resource Type"

You can configure the Sun Cluster HA for iPlanet Web Server data service as a failover or scalable service. See Chapter 1, Planning for Sun Cluster Data Services and the Sun Cluster 3.0 U1 Concepts document for general information about data services, resource groups, resources, and other related topics.


Note -

You can use SunPlex Manager to install and configure this data service. See the SunPlex Manager online help for details.



Note -

If you run multiple data services in your Sun Cluster configuration, you can set up the data services in any order, with the following exception. If the Sun Cluster HA for iPlanet Web Server data service depends on the Sun Cluster HA for DNS data service, you must set up DNS first. See Chapter 6, Installing and Configuring Sun Cluster HA for Domain Name Service (DNS) for details. The Solaris operating environment includes the DNS software. If the cluster is to obtain the DNS service from another server, then configure the cluster to be a DNS client first.



Note -

After installation, do not manually start and stop the iPlanet Web server except by using the cluster administration command scswitch(1M). See the man page for details. After the iPlanet Web Server is started, the Sun Cluster software controls it.
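
For example, a hedged sketch follows of stopping and then restarting the Web server by taking its resource group offline and bringing it back online. Here resource-group is a placeholder for the name of the resource group that contains the iPlanet resource; see the scswitch(1M) man page for the supported options.


(Take the resource group, and thus the Web server, offline.)
# scswitch -F -g resource-group
 
(Bring the resource group back online.)
# scswitch -Z -g resource-group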


Planning the Installation and Configuration

Use the following section in conjunction with your configuration worksheets as a checklist before you install and configure the Sun Cluster HA for iPlanet Web Server data service.

Consider the following questions before you start your installation.

Installing and Configuring Sun Cluster HA for iPlanet Web Server

The following table lists the sections that describe the installation and configuration tasks.

Table 3-1 Task Map: Installing and Configuring Sun Cluster HA for iPlanet Web Server

  • Install iPlanet Web Server. For instructions, go to "Installing and Configuring an iPlanet Web Server".

  • Install the Sun Cluster HA for iPlanet Web Server data-service packages. For instructions, go to "Installing Sun Cluster HA for iPlanet Web Server Packages".

  • Configure the Sun Cluster HA for iPlanet Web Server data service. For instructions, go to "Registering and Configuring Sun Cluster HA for iPlanet Web Server".

  • Configure resource extension properties. For instructions, go to "Configuring Sun Cluster HA for iPlanet Web Server Extension Properties".

  • View fault-monitor information. For instructions, go to "Sun Cluster HA for iPlanet Web Server Fault Monitor".

Installing and Configuring an iPlanet Web Server

This section describes how to install the iPlanet Web Server with the setup command and how to configure an instance of the server to be highly available.


Note -

You must follow certain conventions when you configure URL mappings for the Web server. For example, to preserve availability when setting the CGI directory, you must locate the mapped directories on the cluster file system. In this example, you map your CGI directory to /global/pathname/cgi-bin.

In situations where the CGI programs access "back-end" servers, such as an RDBMS, ensure that the Sun Cluster software also controls the "back-end" server. If the server is an RDBMS that the Sun Cluster software supports, use one of the highly available RDBMS packages. Alternatively, you can use the APIs documented in the Sun Cluster 3.0 U1 Data Services Developers' Guide to put the server under Sun Cluster control.


How to Install an iPlanet Web Server

To perform this procedure, you need the following information about your configuration.


Note -

If you run the Sun Cluster HA for iPlanet Web Server service and another HTTP server and they use the same network resources, configure them to listen on different ports. Otherwise, a port conflict might occur between the two servers.


  1. Become superuser on a cluster member.

  2. Run the setup command from the iPlanet install directory on the CD.

  3. When prompted, enter the location where the iPlanet server binaries will be installed.

    You can specify a location on the cluster file system or on local disks for the installation. If you choose to install on local disks, run the setup command on all the cluster nodes that are potential primaries of the network resource (logical hostname or shared address) that you specify in the next step.

  4. When prompted for a machine name, enter the logical hostname on which the iPlanet server depends and the appropriate DNS domain name.

    A full logical hostname is of the format network-resource.domainname, such as schost-1.sun.com.


    Note -

    For the Sun Cluster HA for iPlanet Web Server data service to fail over correctly, you must use either the logical hostname or shared-address resource name (rather than the physical hostname) here and everywhere else that you are asked.


  5. Select Run Admin Server as Root when you are asked.

    Note the port number that the iPlanet install script selects for the administration server if you want to use this default value later when configuring an instance of the iPlanet Web server. Otherwise, you can specify a different port number when you configure the iPlanet server instance.

  6. Type a Server Administrator ID and a password of your choice when you are asked.

    Follow the guidelines for your system.

    When a message displays that the admin server will be started, your installation is ready for configuration.

Where to Go From Here

To configure the Web server, see the next section, "How to Configure an iPlanet Web Server".

How to Configure an iPlanet Web Server

This procedure describes how to configure an instance of the iPlanet Web server to be highly available. Use the Netscape browser to interact with this procedure.

Consider the following points before you perform this procedure.

  1. From the administrative workstation or a cluster node, start the Netscape browser.

  2. On one of the cluster nodes, go to the directory https-admserv, then start the iPlanet admin server.


    # cd https-admserv
    # ./start
    

  3. Enter the URL of the iPlanet admin server in the Netscape browser.

    The URL consists of the physical hostname and port number that the iPlanet installation script established in Step 5 of the server installation procedure, for example, n1.eng.sun.com:8888. When you perform Step 2 of this procedure, the ./start command displays the admin URL.

    When prompted, use the user ID and password you specified in Step 6 of the server installation procedure to log in to the iPlanet administration server interface.

  4. Begin to administer the iPlanet Web Server instance that was created.

    If you need another instance, create a new one.

    The administration graphical interface provides a form with details of the iPlanet server configuration. You can accept the defaults on the form, with the following exceptions.

    • Verify that the server name is correct.

    • Verify that the server user is set as superuser.

    • Change the bind address field to one of the following addresses.

      • A logical hostname or shared address if you use DNS as your name service

      • The IP address associated with the logical hostname or shared address if you use NIS as your name service

  5. Create a directory on the local disk of all the nodes to hold the logs, error files, and PID file that iPlanet Web Server manages.

    For iPlanet to work correctly, these files must be located on each node of the cluster, not on the cluster file system.

    Choose a location on the local disk that is the same for all the nodes in the cluster. Use the mkdir -p command to create the directory. Make nobody the owner of this directory.

    The following example shows how to complete this step.


    phys-schost-1# mkdir -p /var/pathname/http-instance/logs/
    
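
    To make nobody the owner, you might then run the following command, which uses the example directory shown above.


    phys-schost-1# chown -R nobody /var/pathname/http-instance/logs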

    Note -

    If you anticipate large error logs and PID files, do not put them in a directory under /var, because they can fill the /var file system. Instead, create the directory in a partition with adequate space to handle large files.


  6. Edit the ErrorLog and PidLog entries in the magnus.conf file to reflect the directory created in the previous step, and synchronize the changes from the administrator's interface.

    The magnus.conf file specifies the locations for the error files and PID files. Edit this file to change the error and PID file locations to the directory that you created in Step 5. The magnus.conf file is located in the config directory of the iPlanet server instance. If the instance directory is located on the local file system, you must modify the magnus.conf file on each of the nodes.

    Change the entries as follows.


    # Current ErrorLog and PidLog entries
    ErrorLog /global/data/netscape/https-schost-1/logs/error
    PidLog /global/data/netscape/https-schost-1/logs/pid
     
    # New entries
    ErrorLog /var/pathname/http-instance/logs/error
    PidLog /var/pathname/http-instance/logs/pid

    As soon as the administrator's interface detects your changes, the interface displays a warning message, as follows.


    Warning: Manual edits not loaded
    Some configuration files have been edited by hand. Use the Apply
    button on the upper right side of the screen to load the latest
    configuration files.

    Click Apply as prompted.

    The administrator's interface then displays the following warning.


    Configuration files have been edited by hand. Use this button to
    load the latest configuration files.

    Click Load Configuration Files as prompted.

  7. Use the administrator's interface to set the location of the access log file.

    From the administration graphical interface, click the Preferences tab and then Logging Options on the side bar. A form is then displayed for configuring the Access Log parameter.

    Change the location of the log file to the directory that you created in Step 5.

    For example, make the following changes to the log file.


    Log File: /var/pathname/http-instance/logs/access

  8. Click Save to save your changes.

    Do not click Save and Apply, because doing so starts the iPlanet Web Server.

Where to Go From Here

If you have not installed the Sun Cluster HA for iPlanet Web Server data-service packages from the Sun Cluster Agents CD, go to "Installing Sun Cluster HA for iPlanet Web Server Packages". Otherwise, go to "Registering and Configuring Sun Cluster HA for iPlanet Web Server".

Installing Sun Cluster HA for iPlanet Web Server Packages

You can use the scinstall(1M) utility to install SUNWschtt, the Sun Cluster HA for iPlanet Web Server data-service package, on a cluster. Do not use the -s option to non-interactive scinstall to install all data-service packages on the CD.

If you installed the data-service packages during your initial Sun Cluster installation, proceed to "Registering and Configuring Sun Cluster HA for iPlanet Web Server". Otherwise, use the following procedure to install the SUNWschtt package.

How to Install Sun Cluster HA for iPlanet Web Server Packages

You need the Sun Cluster Agents CD to complete this procedure. Run this procedure on all the cluster nodes that will run the Sun Cluster HA for iPlanet Web Server data service.

  1. Load the Agents CD into the CD-ROM drive.

  2. Run the scinstall utility with no options.

    This step starts the scinstall utility in interactive mode.
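
    For example:


    # scinstall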

  3. Select the Add Support for New Data Service to This Cluster Node menu option.

    This option enables you to load software for any data services that exist on the CD.

  4. Exit the scinstall utility.

  5. Unload the CD from the drive.

Where to Go From Here

See "Registering and Configuring Sun Cluster HA for iPlanet Web Server" to register the Sun Cluster HA for iPlanet Web Server data service and configure the cluster for the data service.

Registering and Configuring Sun Cluster HA for iPlanet Web Server

You can configure the Sun Cluster HA for iPlanet Web Server data service as a failover service or as a scalable service. You must include some additional steps to configure iPlanet as a scalable service. In the first procedure in this section, these additional steps begin with a notation that they are required for scalable services only. Individual examples of a failover service and a scalable service follow the procedure.

How to Register and Configure Sun Cluster HA for iPlanet Web Server

This procedure describes how to use the scrgadm(1M) command to register and configure the Sun Cluster HA for iPlanet Web Server data service.


Note -

Other options also enable you to register and configure the data service. See "Tools for Data-Service Resource Administration" for details about these options.


To perform this procedure, you must have the following information.


Note -

Perform this procedure on any cluster member.


  1. Become superuser on a cluster member.

  2. Register the resource type for the Sun Cluster HA for iPlanet Web Server data service.


    # scrgadm -a -t SUNW.iws
    
    -a

    Adds the data-service resource type.

    -t SUNW.iws

    Specifies the predefined resource-type name for your data service.

  3. Create a failover resource group to hold the network resources.

    For failover services, this resource group also holds the application resources.

    You can optionally select the set of nodes on which the data service can run with the -h option.


    # scrgadm -a -g resource-group [-h nodelist]
    -g resource-group

    Specifies the name of the failover resource group. This name can be your choice but must be unique for resource groups within the cluster.

    -h nodelist

    An optional comma-separated list of physical node names or IDs that identify potential masters. The order here determines the order in which the nodes are considered as primary during failover.


    Note -

    Use -h to specify the order of the node list. If all the nodes in the cluster are potential masters, you need not use the -h option.


  4. Verify that all network addresses that you use have been added to your name-service database.

    You should have performed this verification during the Sun Cluster installation. See the planning chapter in the Sun Cluster 3.0 U1 Installation Guide for details.


    Note -

    To avoid any failures because of name-service lookup, ensure that all logical hostnames and shared addresses are present in the server's and client's /etc/hosts file. Configure name-service mapping in /etc/nsswitch.conf on the servers to first check the local files before trying to access NIS or NIS+.
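
    For example, the hosts line in /etc/nsswitch.conf on each cluster node might look like the following, which consults local files before NIS. The cluster keyword is the entry that the Sun Cluster software adds for its private hostnames; the rest of the line is an assumption for illustration.


    hosts: cluster files nis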


  5. Add a network resource (logical hostname or shared address) to the failover resource group.


    # scrgadm -a {-S | -L} -g resource-group \
    -l network-resource,... [-j resource] \
    [-X auxnodelist=node, ...] [-n netiflist]
    -S | -L

    You use -S for shared-address resources or -L for logical-hostname resources.

    -g resource-group

    Specifies the name of the failover resource group.

    -l network-resource, ...

    Specifies a comma-separated list of network resources to add. You can use the -j option to specify a name for the resources. If you do not do so, the network resources have the name of the first entry on the list.

    -j resource

    Specifies an optional resource name. If you do not supply this name, the name of the network resource defaults to the first name specified after the -l option.

    -X auxnodelist=node, ...

    Specifies an optional comma-separated list of physical node IDs that identify cluster nodes that can host the shared address but never serve as a primary if failover occurs. These nodes are mutually exclusive with the nodes identified in nodelist for the resource group, if specified.

    -n netiflist

    Specifies an optional comma-separated list that identifies the NAFO groups on each node. All nodes in nodelist of the resource group must be represented in netiflist. If you do not specify this option, scrgadm attempts to discover a net adapter on the subnet that the hostname list identifies for each node in nodelist.
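
    For example, the following hedged sketch adds a logical-hostname resource and explicitly names a NAFO group on each of two nodes. The resource-group name, the hostname, and the nafo0 group names are placeholders.


    # scrgadm -a -L -g resource-group-1 -l schost-1 -n nafo0@1,nafo0@2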

  6. For scalable services only - Create a scalable resource group to run on all desired nodes of the cluster.

    If you run the Sun Cluster HA for iPlanet Web Server data service as a failover data service, do not perform this step. Go to Step 8.

    Create a resource group to hold a data-service application resource. You must specify the maximum and desired number of primary nodes, as well as a dependency between this resource group and the failover resource group that you created in Step 3. This dependency ensures that in the event of failover, the resource manager starts the network resource before starting any data services that depend on the network resource.


    # scrgadm -a -g resource-group \
    -y Maximum_primaries=m -y Desired_primaries=n \
    -y RG_dependencies=resource-group
    
    -y Maximum_primaries=m

    Specifies the maximum number of active primary nodes allowed for this resource group. If you do not assign a value to this property, the default is 1.

    -y Desired_primaries=n

    Specifies the desired number of active primary nodes allowed for this resource group. If you do not assign a value to this property, the default is 1.

    -y RG_dependencies=resource-group

    Identifies the resource group that contains the shared-address resource on which the resource group being created depends.

  7. For scalable services only - Create an application resource in the scalable resource group.

    If you run the Sun Cluster HA for iPlanet Web Server data service as a failover data service, do not perform this step. Go to Step 8.

    You can repeat this step to add multiple application resources (such as secure and insecure versions) to the same resource group.

    You might also want to set load balancing for the data service. To do so, use the two standard resource properties Load_balancing_policy and Load_balancing_weights. See Appendix A, Standard Properties for a description of these properties. Additionally, see the examples that follow this section.


    # scrgadm -a -j resource -g resource-group \
    -t resource-type -y Network_resources_used=network-resource, ... \
    -y Port_list=port-number/protocol, ... -y Scalable=True \
    -x Confdir_list=config-directory, ...
    -j resource

    Specifies the name of the resource to add.

    -g resource-group

    Specifies the name of the scalable resource group into which the resources are to be placed.

    -t resource-type

    Specifies the type of the resource to add.

    -y Network_resources_used=network-resource, ...

    Specifies a comma-separated list of network resources that identify the shared addresses that the data service uses.

    -y Port_list=port-number/protocol, ...

    Specifies a comma-separated list of port numbers and protocol to be used, for example, 80/tcp,81/tcp.

    -y Scalable=True

    Specifies a Boolean that is required for scalable services.

    -x Confdir_list=config-directory, ...

    Specifies a comma-separated list of the locations of the iPlanet configuration files. The Sun Cluster HA for iPlanet Web Server data service requires this extension property.


    Note -

    A one-to-one mapping applies for Confdir_list and Port_list. Each value in one list must correspond, in the order specified, to the value in the same position in the other list.
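
    For example, the following hedged fragment creates one scalable resource that runs two instances. The first configuration directory is paired with port 80 and the second with port 81, in order; the directory paths and the resource name are hypothetical.


    # scrgadm -a -j iplanet-pair -g iws-resource-group-1 -t SUNW.iws \
    -y Network_resources_used=schost-1 -y Scalable=True \
    -y Port_list=80/tcp,81/tcp \
    -x Confdir_list=/global/iws/https-one,/global/iws/https-two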


  8. For failover services only - Create an application resource in the failover resource group.

    Perform this step only if you run the Sun Cluster HA for iPlanet Web Server data service as a failover data service. If you run the Sun Cluster HA for iPlanet Web Server data service as a scalable service, you must have performed Step 6 and Step 7 previously and must now go to Step 9.

    You can repeat this step to add multiple application resources (such as secure and insecure versions) to the same resource group.


    # scrgadm -a -j resource -g resource-group \
    -t resource-type -y Network_resources_used=logical-hostname-list \
    -y Port_list=port-number/protocol \
    -x Confdir_list=config-directory
    
    -j resource

    Specifies the name of the resource to add.

    -g resource-group

    Specifies the name of the failover resource group into which the resources are to be placed.

    -t resource-type

    Specifies the type of the resource to add.

    -y Network_resources_used=network-resource, ...

    Specifies a comma-separated list of network resources that identify the logical hosts that the data service uses.

    -y Port_list=port-number/protocol

    Specifies the port number and protocol to use, for example, 80/tcp. Port_list for failover services must have exactly one entry because of the one-to-one mapping rule between Port_list and Confdir_list.

    -x Confdir_list=config-directory

    Specifies the location of the iPlanet configuration files. Confdir_list for failover services must have exactly one entry. The config-directory must contain a directory named config. The Sun Cluster HA for iPlanet Web Server data service requires this extension property.


    Note -

    Optionally, you can set additional extension properties that belong to the iPlanet data service to override their default values. See Table 3-2 for a list of these properties.


  9. Bring the failover resource group online.


    # scswitch -Z -g resource-group
    
    -Z

    Enables the network resource and fault monitoring, switches the resource group into a managed state, and brings the resource group online.

    -g resource-group

    Specifies the name of the failover resource group.

  10. For scalable services only - Bring the scalable resource group online.


    # scswitch -Z -g resource-group
    
    -Z

    Enables the resource and monitor, moves the resource group to the managed state, and brings the resource group online.

    -g resource-group

    Specifies the name of the scalable resource group.

Example - Registering Scalable Sun Cluster HA for iPlanet Web Server

The following example shows how to register a scalable iPlanet service.


Cluster Information
Node names: phys-schost-1, phys-schost-2
Shared address: schost-1
Resource groups: sa-resource-group-1 (for shared addresses),
    iws-resource-group-1 (for scalable iPlanet application resources)
Resources: schost-1 (shared address), iplanet-insecure-1 (insecure iPlanet
    application resource), iplanet-secure-1 (secure iPlanet application
    resource)
 
(Add a failover resource group to contain shared addresses.)
# scrgadm -a -g sa-resource-group-1
 
(Add the shared address resource to the failover resource group.)
# scrgadm -a -S -g sa-resource-group-1 -l schost-1
 
(Add a scalable resource group.)
# scrgadm -a -g iws-resource-group-1 -y Maximum_primaries=2 \
-y Desired_primaries=2 -y RG_dependencies=sa-resource-group-1
 
(Register the iPlanet resource type.)
# scrgadm -a -t SUNW.iws
 
(Add an insecure iPlanet instance with default load balancing.)
# scrgadm -a -j iplanet-insecure-1 -g iws-resource-group-1 -t SUNW.iws \
-x Confdir_List=/opt/iplanet/https-iplanet-insecure-1 \
-y Scalable=True -y Network_resources_used=schost-1 -y Port_list=80/tcp
 
(Add a secure iPlanet instance with sticky IP load balancing.)
# scrgadm -a -j iplanet-secure-1 -g iws-resource-group-1 -t SUNW.iws \
-x Confdir_List=/opt/iplanet/https-iplanet-secure-1 \
-y Scalable=True -y Network_resources_used=schost-1 \
-y Port_list=443/tcp -y Load_balancing_policy=LB_STICKY \
-y Load_balancing_weights=40@1,60@2
 
(Bring the failover resource group online.)
# scswitch -Z -g sa-resource-group-1
 
(Bring the scalable resource group online.)
# scswitch -Z -g iws-resource-group-1

Example - Registering Failover Sun Cluster HA for iPlanet Web Server

The following example shows how to register a failover iPlanet service on a two-node cluster.


Cluster Information
Node names: phys-schost-1, phys-schost-2
Logical hostname: schost-1
Resource group: resource-group-1 (for all resources) 
Resources: schost-1 (logical hostname), iplanet-insecure-1 (insecure iPlanet 
    application resource), iplanet-secure-1 (secure iPlanet application 
    resource)
 
(Add the resource group to contain all resources.)
# scrgadm -a -g resource-group-1
 
(Add the logical hostname resource to the resource group.)
# scrgadm -a -L -g resource-group-1 -l schost-1 
 
(Register the iPlanet resource type.)
# scrgadm -a -t SUNW.iws
 
(Add an insecure iPlanet application resource instance.)
# scrgadm -a -j iplanet-insecure-1 -g resource-group-1 -t SUNW.iws \
-x Confdir_list=/opt/iplanet/https-iplanet-insecure-1 -y Scalable=False \
-y Network_resources_used=schost-1 -y Port_list=80/tcp
 
(Add a secure iPlanet application resource instance.)
# scrgadm -a -j iplanet-secure-1 -g resource-group-1 -t SUNW.iws \
-x Confdir_List=/opt/iplanet/https-iplanet-secure-1 -y Scalable=False \
-y Network_resources_used=schost-1 -y Port_list=443/tcp
 
(Bring the failover resource group online.)
# scswitch -Z -g resource-group-1

Where to Go From Here

To configure the SUNW.HAStorage resource type, see "How to Configure SUNW.HAStorage Resource Type".

How to Configure SUNW.HAStorage Resource Type

The SUNW.HAStorage resource type synchronizes actions between HA storage and the data service. Because the Sun Cluster HA for iPlanet Web Server data service can be configured as a scalable service, you should configure the SUNW.HAStorage resource type.

See the SUNW.HAStorage(5) man page and "Relationship Between Resource Groups and Disk Device Groups" for background information. See "How to Set Up SUNW.HAStorage Resource Type for New Resources" for the procedure.
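
A minimal sketch of the setup follows, reusing the resource and group names from the earlier scalable example and assuming a hypothetical global path /global/iws. The ServicePaths and AffinityOn extension properties and the exact steps are described in the referenced procedure; treat this as an illustration, not the complete procedure.


(Register the SUNW.HAStorage resource type.)
# scrgadm -a -t SUNW.HAStorage
 
(Add an HAStorage resource to the scalable resource group. AffinityOn must be False for scalable services.)
# scrgadm -a -j iws-hastorage -g iws-resource-group-1 -t SUNW.HAStorage \
-x ServicePaths=/global/iws -x AffinityOn=False
 
(Make the Web server resource depend on the HAStorage resource.)
# scrgadm -c -j iplanet-insecure-1 -y Resource_dependencies=iws-hastorage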

Configuring Sun Cluster HA for iPlanet Web Server Extension Properties

This section describes the Sun Cluster HA for iPlanet Web Server extension properties. For failover, the data service enforces that the size of Confdir_list is one. If you want multiple configuration files (instances), make multiple failover resources, each with one Confdir_list entry.

Typically, you use the command line scrgadm -x parameter=value to configure extension properties when you create the iPlanet Web Server resource. You can also use the procedures described in Chapter 11, Administering Data-Service Resources to configure them later. See Appendix A, Standard Properties for details on all Sun Cluster properties.
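
For example, to change the Probe_timeout value on an existing resource, you might run the following command; iplanet-insecure-1 is the resource name from the earlier examples.


# scrgadm -c -j iplanet-insecure-1 -x Probe_timeout=60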

Table 3-2 describes extension properties that you can configure for the iPlanet server. The only required extension property for creating an iPlanet server resource is the Confdir_list property. You can update some extension properties dynamically. You can update others, however, only when you create the resource. The Tunable column of the following table indicates when you can update each property.

Table 3-2 Sun Cluster HA for iPlanet Web Server Extension Properties

Confdir_list (string array)

    Default: None
    Range: None
    Tunable: At creation

    A pointer to the server root directory for a particular iPlanet Web Server instance. If the iPlanet Web Server instance is in secure mode, the path name must contain a file named keypass, which contains the secure key password needed to start this instance.

Monitor_retry_count (integer)

    Default: 4
    Range: 0 - 2,147,483,641 (-1 indicates an infinite number of retry attempts)
    Tunable: Any time

    The number of times the process monitor facility (PMF) restarts the fault monitor during the time window that the Monitor_retry_interval property specifies. Note that this property refers to restarts of the fault monitor itself rather than to the resource. The system-defined properties Retry_interval and Retry_count control restarts of the resource.

Monitor_retry_interval (integer)

    Default: 2
    Range: 0 - 2,147,483,641 (-1 indicates an infinite retry interval)
    Tunable: Any time

    The time (in minutes) over which failures of the fault monitor are counted. If the number of times the fault monitor fails exceeds the value specified in the Monitor_retry_count extension property within this period, the PMF does not restart the fault monitor.

Probe_timeout (integer)

    Default: 30
    Range: 0 - 2,147,483,641
    Tunable: Any time

    The time-out value (in seconds) that the fault monitor uses to probe an iPlanet Web Server instance.

Sun Cluster HA for iPlanet Web Server Fault Monitor

The probe for the Sun Cluster HA for iPlanet Web Server (iWS) data service uses a request to the server to query the health of that server. Before the probe actually queries the server, a check is made to confirm that network resources are configured for this Web server resource. If no network resources are configured, an error message (No network resources found for resource) is logged, and the probe exits with failure.

The probe must address the two configurations of iWS: secure instances and insecure instances.

If the Web server is in secure mode and if the probe cannot get the secure ports from the configuration file, an error message (Unable to parse configuration file) is logged, and the probe exits with failure. The secure and insecure instance probes involve common steps.

The probe uses the time-out value that the resource property Probe_timeout specifies to limit the time spent trying to successfully probe iWS. See Appendix A, Standard Properties for details on this resource property.

The Network_resources_used resource-property setting on the iWS resource determines the set of IP addresses that the Web server uses. The Port_list resource-property setting determines the list of port numbers that iWS uses. The fault monitor assumes that the Web server is listening on all combinations of IP and port. If you customize your Web server configuration to listen on different port numbers (in addition to port 80), ensure that your resultant configuration (magnus.conf) file contains all possible combinations of IP addresses and ports. The fault monitor attempts to probe all such combinations and might fail if the Web server is not listening on a particular IP address and port combination.

The probe executes the following steps.

  1. The probe uses the specified IP address and port combination to connect to the Web server. If the connection is unsuccessful, the probe concludes that a complete failure has occurred. The probe then records the failure and takes appropriate action.

  2. If the probe successfully connects, the probe checks whether the Web server is running in secure mode. If so, the probe disconnects and returns a success status. No further checks are performed for a secure iWS server.

    However, if the Web server is running in insecure mode, the probe sends an HTTP 1.0 HEAD request to the Web server and waits for the response. The request can be unsuccessful for various reasons, including heavy network traffic, heavy system load, and misconfiguration.

    Misconfiguration can occur when the Web server is not configured to listen on all IP address and port combinations that are being probed. The Web server should service every port for every IP address specified for this resource.

    Misconfigurations can also result if the Network_resources_used and Port_list resource properties are not set correctly while you create the resource.

    If the probe does not receive a reply to the query within the limit that the Probe_timeout resource property specifies, the probe considers the query a failure of the Sun Cluster HA for iPlanet Web Server data service. The failure is recorded in the probe's history.

    A probe failure can be a complete or partial failure. The following probe failures are considered complete failures.

    • Failure to connect to the server, as the following error message flags, with %s indicating the host name and %d the port number.


      Failed to connect to %s port %d

    • Running out of time (exceeding the resource-property timeout Probe_timeout) while trying to connect to the server.

    • Failure to successfully send the probe string to the server, as the following error message flags, with the first %s indicating the host name and %d the port number. The second %s indicates further details about the error.


      Failed to communicate with server %s port %d: %s

    The following probe failures are considered partial failures.

    • Running out of time (exceeding the resource-property timeout Probe_timeout) while trying to read the reply from the server to the probe's query.

    • Failing to read data from the server for other reasons, as the following error message flags, with the first %s indicating the host name and %d the port number. The second %s indicates further details about the error.


      Failed to communicate with server %s port %d: %s

    The monitor accumulates two such partial failures within the resource-property interval Retry_interval and counts them as one failure.

  3. Based on the history of failures, a failure can cause either a local restart or a failover of the data service. This action is further described in "Health Checks of the Data Service".