This chapter provides the procedures to install and configure the Sun Cluster HA for iPlanet Web Server data service. This data service was formerly known as Sun Cluster HA for Netscape™ HTTP. Some error messages from the application might still use the name Netscape, but the messages refer to iPlanet Web Server.
This chapter contains the following procedures.
"How to Install Sun Cluster HA for iPlanet Web Server Packages"
"How to Register and Configure Sun Cluster HA for iPlanet Web Server"
You can configure the Sun Cluster HA for iPlanet Web Server data service as a failover or scalable service. See Chapter 1, Planning for Sun Cluster Data Services and the Sun Cluster 3.0 U1 Concepts document for general information about data services, resource groups, resources, and other related topics.
You can use SunPlex Manager to install and configure this data service. See the SunPlex Manager online help for details.
If you run multiple data services in your Sun Cluster configuration, you can set up the data services in any order, with the following exception. If the Sun Cluster HA for iPlanet Web Server data service depends on the Sun Cluster HA for DNS data service, you must set up DNS first. See Chapter 6, Installing and Configuring Sun Cluster HA for Domain Name Service (DNS) for details. The Solaris operating environment includes the DNS software. If the cluster is to obtain the DNS service from another server, then configure the cluster to be a DNS client first.
After installation, do not start and stop the iPlanet Web Server manually; use only the cluster administration command scswitch(1M). See the man page for details. After the iPlanet Web Server is started, the Sun Cluster software controls it.
Use the following section in conjunction with your configuration worksheets as a checklist before you install and configure the Sun Cluster HA for iPlanet Web Server data service.
Consider the following questions before you start your installation.
Will you be running the Sun Cluster HA for iPlanet Web Server data service as a failover or as a scalable service? See the Sun Cluster 3.0 U1 Concepts document for information on the two types of services. For scalable services, consider the following questions.
What nodes will host the scalable service? In most cases, you will want to scale across all nodes. You can, however, limit the set of nodes that host the service.
Will your iPlanet Web Server instances require sticky IP? Sticky IP is a resource property setting, Load_balancing_policy, that keeps client state in memory so that return traffic from the same client always goes to the same cluster node. You can choose from several load-balancing policies, as described in the table on resource properties in Appendix A, Standard Properties.
Exercise caution when changing Load_balancing_weights for an online scalable service that has Load_balancing_policy set to LB_STICKY or LB_STICKY_WILD. Changing those properties while the service is online can cause existing client affinities to be reset, and hence a different node might service a subsequent client request even if another cluster member had previously serviced the client.
Similarly, when a new instance of the service is started on a cluster, existing client affinities might be reset.
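The sticky policy discussed above is requested when the scalable application resource is created. As a sketch only (the resource name iws-rs, resource group iws-rg, shared address schost-1, and configuration path are hypothetical):

# scrgadm -a -j iws-rs -g iws-rg -t SUNW.iws \
-y Scalable=True -y Network_resources_used=schost-1 \
-y Port_list=80/tcp -y Load_balancing_policy=LB_STICKY \
-x Confdir_list=/global/iws/https-schost-1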
Where will the Web server root reside?
Does the Web server serve data for another highly available application? If so, resource dependencies might exist between the resources so that one starts or stops before the other. See Appendix A, Standard Properties for a description of the resource property Resource_dependencies that sets these dependencies.
Determine the resource groups to use for network addresses and application resources and the dependencies between them. See Appendix A, Standard Properties for a description of the resource group property RG_dependencies that sets these dependencies.
Provide the logical hostname (for failover services) or shared address (for scalable services) for clients to use to access the data service.
Although you can configure iPlanet Web Server to bind to INADDR_ANY, if you plan to run multiple instances of the iPlanet Web Server data service or multiple data services on the same node, each instance must bind to a unique network address and port number.
Determine the entries for the Confdir_list and Port_list properties. For failover services, both of these properties can have only one entry. For scalable services, they can have multiple entries. The number of entries, however, must be the same and must map to each other in the order specified. See "How to Register and Configure Sun Cluster HA for iPlanet Web Server" for details.
Determine where to place logs, error files, and the PID file on the local file system.
Determine where to place the contents on the cluster file system.
The following table lists the sections that describe the installation and configuration tasks.
Table 3-1 Task Map: Installing and Configuring Sun Cluster HA for iPlanet Web Server
| Task | For Instructions, Go To |
|---|---|
| Install iPlanet Web Server | |
| Install the Sun Cluster HA for iPlanet Web Server data-service packages | |
| Configure the Sun Cluster HA for iPlanet Web Server data service | "Registering and Configuring Sun Cluster HA for iPlanet Web Server" |
| Configure resource extension properties | "Configuring Sun Cluster HA for iPlanet Web Server Extension Properties" |
| View fault-monitor information | |
This section describes the steps to use the setup command to perform the following tasks.
Install the iPlanet Web Server.
Enable the iPlanet Web Server to run as the Sun Cluster HA for iPlanet Web Server data service.
You must follow certain conventions when you configure URL mappings for the Web server. For example, to preserve availability when setting the CGI directory, you must locate the mapped directories on the cluster file system. In this example, you map your CGI directory to /global/pathname/cgi-bin.
In situations where the CGI programs access "back-end" servers, such as an RDBMS, ensure that the Sun Cluster software also controls the "back-end" server. If the server is an RDBMS that the Sun Cluster software supports, use one of the highly available RDBMS packages. Alternatively, you can use the APIs documented in the Sun Cluster 3.0 U1 Data Services Developers' Guide to put the server under Sun Cluster control.
To perform this procedure, you need the following information about your configuration.
The server root directory (the path to the application binaries). You can install the binaries on the local disks or on the cluster file system. For a discussion of the advantages and disadvantages of each location, see "Determining the Location of the Application Binaries".
The logical hostname (for failover services) or shared address (for scalable services) that clients use to access the data service. You must configure these addresses, and they must be online.
If you run the Sun Cluster HA for iPlanet Web Server service and another HTTP server and they use the same network resources, configure them to listen on different ports. Otherwise, a port conflict might occur between the two servers.
Become superuser on a cluster member.
Run the setup command from the iPlanet install directory on the CD.
When prompted, enter the location where the iPlanet server binaries will be installed.
You can specify a location on the cluster file system or on local disks for the location of the install. If you choose to install on local disks, run the setup command on all the cluster nodes that are potential primaries of the network resource (logical hostname or shared address) specified in the next step.
When prompted for a machine name, enter the logical hostname on which the iPlanet server depends and the appropriate DNS domain name.
A full logical hostname is of the format network-resource.domainname, such as schost-1.sun.com.
For the Sun Cluster HA for iPlanet Web Server data service to fail over correctly, you must use either the logical hostname or shared-address resource name (rather than the physical hostname) here and everywhere else that you are asked.
Select Run Admin Server as Root when you are asked.
Note the port number that the iPlanet install script selects for the administration server if you want to use this default value later when configuring an instance of the iPlanet Web server. Otherwise, you can specify a different port number when you configure the iPlanet server instance.
Type a Server Administrator ID and a chosen password when you are asked.
Follow the guidelines for your system.
When a message displays that the admin server will be started, your installation is ready for configuration.
To configure the Web server, see the next section, "How to Configure an iPlanet Web Server".
This procedure describes how to configure an instance of the iPlanet Web server to be highly available. Use the Netscape browser to interact with this procedure.
Consider the following points before you perform this procedure.
Before you start, ensure that you have installed the browser on a machine that can access the network on which the cluster resides. You can install the browser on a cluster node or on the administrative workstation for the cluster.
Your configuration files can reside on either a local file system or on the cluster file system.
Any certificates that are installed for the secure instances must be installed from all cluster nodes. This installation involves running the admin console on each node. Thus, if a cluster has nodes n1, n2, n3, and n4, the installation steps are as follows.
Run the admin server on node n1.
From your Web browser, connect to the admin server as http://n1.domain:port (for example, http://n1.eng.sun.com:8888), or whatever you specified as the admin server port. The port is typically 8888.
Install the certificate.
Stop the admin server on node n1 and run the admin server from node n2.
From the Web browser, connect to the new admin server as http://n2.domain:port, for example, http://n2.eng.sun.com:8888.
Repeat these steps for nodes n3 and n4.
After you have considered the preceding points, complete the following steps.
From the administrative workstation or a cluster node, start the Netscape browser.
On one of the cluster nodes, go to the directory https-admserv, then start the iPlanet admin server.
# cd https-admserv
# ./start
Enter the URL of the iPlanet admin server in the Netscape browser.
The URL consists of the physical hostname and port number that the iPlanet installation script established in Step 4 of the server installation procedure, for example, n1.eng.sun.com:8888. When you perform Step 2 of this procedure, the ./start command displays the admin URL.
When prompted, use the user ID and password you specified in Step 6 of the server installation procedure to log in to the iPlanet administration server interface.
Begin to administer the iPlanet Web Server instance that was created.
If you need another instance, create a new one.
The administration graphical interface provides a form with details of the iPlanet server configuration. You can accept the defaults on the form, with the following exceptions.
Verify that the server name is correct.
Verify that the server user is set as superuser.
Change the bind address field to one of the following addresses.
A logical hostname or shared address if you use DNS as your name service
The IP address associated with the logical hostname or shared address if you use NIS as your name service
Create a directory on the local disk of all the nodes to hold the logs, error files, and PID file that iPlanet Web Server manages.
For iPlanet to work correctly, these files must be located on each node of the cluster, not on the cluster file system.
Choose a location on the local disk that is the same for all the nodes in the cluster. Use the mkdir -p command to create the directory. Make nobody the owner of this directory.
The following example shows how to complete this step.
phys-schost-1# mkdir -p /var/pathname/http-instance/logs/
If you anticipate large error logs and PID files, do not put them in a directory under /var, because large files can fill up the /var file system. Instead, create the directory in a partition with adequate space to handle large files.
Edit the ErrorLog and PidLog entries in the magnus.conf file to reflect the directory created in the previous step, and synchronize the changes from the administrator's interface.
The magnus.conf file specifies the locations for the error files and PID files. Edit this file to change the error and PID file locations to the directory that you created in Step 5. The magnus.conf file is located in the config directory of the iPlanet server instance. If the instance directory is located on the local file system, you must modify the magnus.conf file on each of the nodes.
Change the entries as follows.
# Current ErrorLog and PidLog entries
ErrorLog /global/data/netscape/https-schost-1/logs/error
PidLog /global/data/netscape/https-insecure-schost-1/logs/pid

# New entries
ErrorLog /var/pathname/http-instance/logs/error
PidLog /var/pathname/http-instance/logs/pid
As soon as the administrator's interface detects your changes, the interface displays a warning message, as follows.
Warning: Manual edits not loaded
Some configuration files have been edited by hand. Use the Apply
button on the upper right side of the screen to load the latest
configuration files.
Click Apply as prompted.
The administrator's interface then displays the following warning.
Configuration files have been edited by hand. Use this button
to load the latest configuration files.
Click Load Configuration Files as prompted.
Use the administrator's interface to set the location of the access log file.
From the administration graphical interface, click the Preferences tab and then Logging Options on the side bar. A form is then displayed for configuring the Access Log parameter.
Change the location of the log file to the directory that you created in Step 5.
For example, make the following changes to the log file.
Log File: /var/pathname/http-instance/logs/access
Click Save to save your changes.
Do not click Save and Apply, because doing so starts iPlanet Web Server.
If you have not installed the Sun Cluster HA for iPlanet Web Server data-service packages from the Sun Cluster Agents CD, go to "Installing Sun Cluster HA for iPlanet Web Server Packages". Otherwise, go to "Registering and Configuring Sun Cluster HA for iPlanet Web Server".
You can use the scinstall(1M) utility to install SUNWschtt, the Sun Cluster HA for iPlanet Web Server data-service package, on a cluster. Do not use the -s option to non-interactive scinstall to install all data-service packages on the CD.
If you installed the data-service packages during your initial Sun Cluster installation, proceed to "Registering and Configuring Sun Cluster HA for iPlanet Web Server". Otherwise, use the following procedure to install the SUNWschtt package.
You need the Sun Cluster Agents CD to complete this procedure. Run this procedure on all the cluster nodes that will run the Sun Cluster HA for iPlanet Web Server data service.
Load the Agents CD into the CD-ROM drive.
Run the scinstall utility with no options.
This step starts the scinstall utility in interactive mode.
Select the Add Support for New Data Service to This Cluster Node menu option.
This option enables you to load software for any data services that exist on the CD.
Exit the scinstall utility.
Unload the CD from the drive.
See "Registering and Configuring Sun Cluster HA for iPlanet Web Server" to register the Sun Cluster HA for iPlanet Web Server data service and configure the cluster for the data service.
You can configure the Sun Cluster HA for iPlanet Web Server data service as a failover service or as a scalable service. You must include some additional steps to configure iPlanet as a scalable service. In the first procedure in this section, these additional steps begin with a notation that they are required for scalable services only. Individual examples of a failover service and a scalable service follow the procedure.
This procedure describes how to use the scrgadm(1M) command to register and configure the Sun Cluster HA for iPlanet Web Server data service.
Other options also enable you to register and configure the data service. See "Tools for Data-Service Resource Administration" for details about these options.
To perform this procedure, you must have the following information.
The name of the resource type for the Sun Cluster HA for iPlanet Web Server data service. This name is SUNW.iws.
The names of the cluster nodes that master the data service. For a failover service, only one node can master a data service at a time.
The logical hostname (for failover services) or shared address (for scalable services) that clients use to access the data service.
The path to the iPlanet binaries. You can install the binaries on the local disks or the cluster file system. See "Determining the Location of the Application Binaries" for a discussion of the advantages and disadvantages of each location.
The Network_resources_used setting on the iPlanet application resource determines the set of IP addresses that iPlanet Web Server uses. The Port_list setting on the resource determines the list of port numbers that iPlanet Web Server uses. The fault monitor assumes that the iPlanet Web Server daemon is listening on all combinations of those IP addresses and ports. If you have customized your magnus.conf file for the iPlanet Web Server to listen on different port numbers (in addition to port 80), the resultant magnus.conf file must contain all possible combinations of IP addresses and ports. The fault monitor attempts to probe all such combinations and fails if the iPlanet Web Server is not listening on a particular IP address and port combination. If the iPlanet Web Server does not serve all IP address and port combinations, you must break the iPlanet Web Server into separate instances that do.
Perform this procedure on any cluster member.
Become superuser on a cluster member.
Register the resource type for the Sun Cluster HA for iPlanet Web Server data service.
# scrgadm -a -t SUNW.iws
-a
Adds the data-service resource type.

-t SUNW.iws
Specifies the predefined resource-type name for your data service.
Create a failover resource group to hold the network and application resources.
For failover services, this resource group also holds the application resources.
You can optionally select the set of nodes on which the data service can run with the -h option.
# scrgadm -a -g resource-group [-h nodelist]
-g resource-group
Specifies the name of the failover resource group. This name can be your choice but must be unique for resource groups within the cluster.

-h nodelist
An optional comma-separated list of physical node names or IDs that identify potential masters. The order here determines the order in which the nodes are considered as primary during failover. Use -h to specify the order of the node list. If all the nodes in the cluster are potential masters, you need not use the -h option.
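For example (the group and node names here are hypothetical), the following command creates a failover resource group that only two specific nodes can master, considered in the order listed:

# scrgadm -a -g iws-resource-group -h phys-schost-1,phys-schost-2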
Verify that all network addresses that you use have been added to your name-service database.
You should have performed this verification during the Sun Cluster installation. See the planning chapter in the Sun Cluster 3.0 U1 Installation Guide for details.
To avoid any failures because of name-service lookup, ensure that all logical hostnames and shared addresses are present in the server's and client's /etc/hosts file. Configure name-service mapping in /etc/nsswitch.conf on the servers to first check the local files before trying to access NIS or NIS+.
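A minimal sketch of the relevant entries, with an illustrative address and hostname (adjust for your site), might look as follows:

# /etc/hosts entry for the logical hostname
192.168.10.20   schost-1

# hosts line in /etc/nsswitch.conf: consult local files before NIS
hosts:  files nis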
Add a network resource (logical hostname or shared address) to the failover resource group.
# scrgadm -a {-S | -L} -g resource-group \
-l network-resource, ... [-j resource] \
[-X auxnodelist=node, ...] [-n netiflist]
-S | -L
Use -S for shared-address resources or -L for logical-hostname resources.

-g resource-group
Specifies the name of the failover resource group.

-l network-resource, ...
Specifies a comma-separated list of network resources to add. You can use the -j option to specify a name for the resources. If you do not do so, the network resources have the name of the first entry on the list.

-j resource
Specifies an optional resource name. If you do not supply this name, the name of the network resource defaults to the first name specified after the -l option.

-X auxnodelist=node, ...
Specifies an optional comma-separated list of physical node IDs that identify cluster nodes that can host the shared address but never serve as a primary if failover occurs. These nodes are mutually exclusive with the nodes identified in nodelist for the resource group, if specified.

-n netiflist
Specifies an optional comma-separated list that identifies the NAFO groups on each node. All nodes in nodelist of the resource group must be represented in netiflist. If you do not specify this option, scrgadm attempts to discover a net adapter on the subnet that the hostname list identifies for each node in nodelist.
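As a sketch with hypothetical NAFO group and node names, netiflist entries take the form nafo-group@node:

# scrgadm -a -L -g iws-resource-group -l schost-1 \
-n nafo0@phys-schost-1,nafo0@phys-schost-2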
For scalable services only - Create a scalable resource group to run on all desired nodes of the cluster.
If you run the Sun Cluster HA for iPlanet Web Server data service as a failover data service, do not perform this step; go to Step 8.
Create a resource group to hold a data-service application resource. You must specify the maximum and desired number of primary nodes, as well as a dependency between this resource group and the failover resource group that you created in Step 3. This dependency ensures that in the event of failover, the resource manager starts the network resource before starting any data services that depend on the network resource.
# scrgadm -a -g resource-group \
-y Maximum_primaries=m -y Desired_primaries=n \
-y RG_dependencies=resource-group
-y Maximum_primaries=m
Specifies the maximum number of active primary nodes allowed for this resource group. If you do not assign a value to this property, the default is 1.

-y Desired_primaries=n
Specifies the desired number of active primary nodes allowed for this resource group. If you do not assign a value to this property, the default is 1.

-y RG_dependencies=resource-group
Identifies the resource group that contains the shared-address resource on which the resource group being created depends.
For scalable services only - Create an application resource in the scalable resource group.
If you run the Sun Cluster HA for iPlanet Web Server data service as a failover data service, do not perform this step; go to Step 8.
You can repeat this step to add multiple application resources (such as secure and insecure versions) to the same resource group.
You might also want to set load balancing for the data service. To do so, use the two standard resource properties Load_balancing_policy and Load_balancing_weights. See Appendix A, Standard Properties for a description of these properties. Additionally, see the examples that follow this section.
# scrgadm -a -j resource -g resource-group \
-t resource-type -y Network_resources_used=network-resource, ... \
-y Port_list=port-number/protocol, ... -y Scalable=True \
-x Confdir_list=config-directory, ...
-j resource
Specifies the name of the resource to add.

-g resource-group
Specifies the name of the scalable resource group into which the resources are to be placed.

-t resource-type
Specifies the type of the resource to add.

-y Network_resources_used=network-resource, ...
Specifies a comma-separated list of network resources that identify the shared addresses that the data service uses.

-y Port_list=port-number/protocol, ...
Specifies a comma-separated list of port numbers and protocols to be used, for example, 80/tcp,81/tcp.

-y Scalable=True
Specifies a Boolean that is required for scalable services.

-x Confdir_list=config-directory, ...
Specifies a comma-separated list of the locations of the iPlanet configuration files. The Sun Cluster HA for iPlanet Web Server data service requires this extension property.
A one-to-one mapping applies between Confdir_list and Port_list; that is, each value in one list must correspond, in the order specified, to the value in the same position in the other list.
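For example, under the one-to-one mapping rule, a resource with two configuration directories must list two ports in the same order. In this hypothetical sketch, the first directory serves port 80 and the second serves port 443:

# scrgadm -a -j iws-rs -g iws-rg -t SUNW.iws -y Scalable=True \
-y Network_resources_used=schost-1 \
-y Port_list=80/tcp,443/tcp \
-x Confdir_list=/global/iws/https-insecure,/global/iws/https-secure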
For failover services only - Create an application resource in the failover resource group.
Perform this step only if you run the Sun Cluster HA for iPlanet Web Server data service as a failover data service. If you run the Sun Cluster HA for iPlanet Web Server data service as a scalable service, you must have performed Step 6 and Step 7 previously and must now go to Step 10.
You can repeat this step to add multiple application resources (such as secure and insecure versions) to the same resource group.
# scrgadm -a -j resource -g resource-group \
-t resource-type -y Network_resources_used=logical-hostname-list \
-y Port_list=port-number/protocol \
-x Confdir_list=config-directory
-j resource
Specifies the name of the resource to add.

-g resource-group
Specifies the name of the failover resource group into which the resources are to be placed.

-t resource-type
Specifies the type of the resource to add.

-y Network_resources_used=logical-hostname-list
Specifies a comma-separated list of network resources that identify the logical hosts that the data service uses.

-y Port_list=port-number/protocol
Specifies the port number and protocol to use, for example, 80/tcp. Port_list for failover services must have exactly one entry because of the one-to-one mapping rule between Port_list and Confdir_list.

-x Confdir_list=config-directory
Specifies the location of the iPlanet configuration files. Confdir_list for failover services must have exactly one entry. The config-directory must contain a directory called config. The Sun Cluster HA for iPlanet Web Server data service requires this extension property.
Optionally, you can set additional extension properties that belong to the iPlanet data service to override the default value. See Table 3-2 for a list of these properties.
Bring the failover resource group online.
# scswitch -Z -g resource-group
-Z
Enables the network resource and fault monitoring, switches the resource group into a managed state, and brings the resource group online.

-g resource-group
Specifies the name of the failover resource group.
For scalable services only - Bring the scalable resource group online.
# scswitch -Z -g resource-group
-Z
Enables the resource and monitor, moves the resource group to the managed state, and brings the resource group online.

-g resource-group
Specifies the name of the scalable resource group.
The following example shows how to register a scalable iPlanet service.
Cluster Information
Node names: phys-schost-1, phys-schost-2
Shared address: schost-1
Resource groups: sa-resource-group-1 (for shared addresses),
 iws-resource-group-1 (for scalable iPlanet application resources)
Resources: schost-1 (shared address), iplanet-insecure-1 (insecure
 iPlanet application resource), iplanet-secure-1 (secure iPlanet
 application resource)

(Add a failover resource group to contain shared addresses.)
# scrgadm -a -g sa-resource-group-1

(Add the shared address resource to the failover resource group.)
# scrgadm -a -S -g sa-resource-group-1 -l schost-1

(Add a scalable resource group.)
# scrgadm -a -g iws-resource-group-1 -y Maximum_primaries=2 \
-y Desired_primaries=2 -y RG_dependencies=sa-resource-group-1

(Register the iPlanet resource type.)
# scrgadm -a -t SUNW.iws

(Add an insecure iPlanet instance with default load balancing.)
# scrgadm -a -j iplanet-insecure-1 -g iws-resource-group-1 -t SUNW.iws \
-x Confdir_list=/opt/iplanet/https-iplanet-insecure-1 \
-y Scalable=True -y Network_resources_used=schost-1 -y Port_list=80/tcp

(Add a secure iPlanet instance with sticky IP load balancing.)
# scrgadm -a -j iplanet-secure-1 -g iws-resource-group-1 -t SUNW.iws \
-x Confdir_list=/opt/iplanet/https-iplanet-secure-1 \
-y Scalable=True -y Network_resources_used=schost-1 \
-y Port_list=443/tcp -y Load_balancing_policy=LB_STICKY \
-y Load_balancing_weights=40@1,60@2

(Bring the failover resource group online.)
# scswitch -Z -g sa-resource-group-1

(Bring the scalable resource group online.)
# scswitch -Z -g iws-resource-group-1
The following example shows how to register a failover iPlanet service on a two-node cluster.
Cluster Information
Node names: phys-schost-1, phys-schost-2
Logical hostname: schost-1
Resource group: resource-group-1 (for all resources)
Resources: schost-1 (logical hostname), iplanet-insecure-1 (insecure
 iPlanet application resource), iplanet-secure-1 (secure iPlanet
 application resource)

(Add the resource group to contain all resources.)
# scrgadm -a -g resource-group-1

(Add the logical hostname resource to the resource group.)
# scrgadm -a -L -g resource-group-1 -l schost-1

(Register the iPlanet resource type.)
# scrgadm -a -t SUNW.iws

(Add an insecure iPlanet application resource instance.)
# scrgadm -a -j iplanet-insecure-1 -g resource-group-1 -t SUNW.iws \
-x Confdir_list=/opt/iplanet/conf -y Scalable=False \
-y Network_resources_used=schost-1 -y Port_list=80/tcp

(Add a secure iPlanet application resource instance.)
# scrgadm -a -j iplanet-secure-1 -g resource-group-1 -t SUNW.iws \
-x Confdir_list=/opt/iplanet/https-iplanet-secure-1 -y Scalable=False \
-y Network_resources_used=schost-1 -y Port_list=443/tcp

(Bring the failover resource group online.)
# scswitch -Z -g resource-group-1
To configure the SUNW.HAStorage resource type, see "How to Configure SUNW.HAStorage Resource Type".
The SUNW.HAStorage resource type synchronizes actions between HA storage and the data service. The Sun Cluster HA for iPlanet Web Server data service is scalable, and therefore you should configure the SUNW.HAStorage resource type.
See the SUNW.HAStorage(5) man page and "Relationship Between Resource Groups and Disk Device Groups" for background information. See "How to Set Up SUNW.HAStorage Resource Type for New Resources" for the procedure.
This section describes the Sun Cluster HA for iPlanet Web Server extension properties. For failover, the data service enforces that the size of Confdir_list is one. If you want multiple configuration files (instances), make multiple failover resources, each with one Confdir_list entry.
Typically, you use the command line scrgadm -x parameter=value to configure extension properties when you create the iPlanet Web Server resource. You can also use the procedures described in Chapter 11, Administering Data-Service Resources to configure them later. See Appendix A, Standard Properties for details on all Sun Cluster properties.
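For instance, to change an extension property on an existing resource later, you can use scrgadm with the -c option. In this sketch, the resource name iplanet-resource is hypothetical:

# scrgadm -c -j iplanet-resource -x Probe_timeout=60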
Table 3-2 describes the extension properties that you can configure for the iPlanet server. The only extension property that is required for creating an iPlanet server resource is Confdir_list. You can update some extension properties dynamically. You can update others, however, only when you create the resource. The Tunable column of the following table indicates when you can update each property.
Table 3-2 Sun Cluster HA for iPlanet Web Server Extension Properties
| Name/Data Type | Default | Range | Tunable | Description |
|---|---|---|---|---|
| Confdir_list (string array) | None | None | At creation | A pointer to the server root directory for a particular iPlanet Web Server instance. If the iPlanet Web Server is in secure mode, the path name must contain a file named keypass, which contains the secure key password needed to start this instance. |
| Monitor_retry_count (integer) | 4 | 0 - 2,147,483,641 (-1 indicates an infinite number of retry attempts) | Any time | The number of times the process monitor facility (PMF) restarts the fault monitor during the time window that the Monitor_retry_interval property specifies. Note that this property refers to restarts of the fault monitor itself rather than to the resource. The system-defined properties Retry_interval and Retry_count control restarts of the resource. |
| Monitor_retry_interval (integer) | 2 | 0 - 2,147,483,641 (-1 indicates an infinite retry interval) | Any time | The time (in minutes) over which failures of the fault monitor are counted. If the number of times the fault monitor fails exceeds the value of Monitor_retry_count within this period, the PMF does not restart the fault monitor. |
| Probe_timeout (integer) | 30 | 0 - 2,147,483,641 | Any time | The time-out value (in seconds) that the fault monitor uses to probe an iPlanet Web Server instance. |
The probe for the Sun Cluster HA for iPlanet Web Server (iWS) data service uses a request to the server to query the health of that server. Before the probe actually queries the server, a check is made to confirm that network resources are configured for this Web server resource. If no network resources are configured, an error message (No network resources found for resource) is logged, and the probe exits with failure.
The probe must address the following two configurations of iWS.
The secure instance
The insecure instance
The probe uses the time-out value that the resource property Probe_timeout specifies to limit the time spent trying to successfully probe iWS. See Appendix A, Standard Properties for details on this resource property.
The Network_resources_used resource-property setting on the iWS resource determines the set of IP addresses that the Web server uses. The Port_list resource-property setting determines the list of port numbers that iWS uses. The fault monitor assumes that the Web server is listening on all combinations of IP and port. If you customize your Web server configuration to listen on different port numbers (in addition to port 80), ensure that your resultant configuration (magnus.conf) file contains all possible combinations of IP addresses and ports. The fault monitor attempts to probe all such combinations and might fail if the Web server is not listening on a particular IP address and port combination.
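The fault monitor's assumption that the server listens on every IP address and port combination can be illustrated with a short sketch. The helper below is hypothetical, not part of the data service.

```python
from itertools import product

def probe_targets(ip_addresses, ports):
    """Enumerate every IP/port combination that the fault monitor probes.

    The monitor assumes that the Web server listens on ALL of these
    combinations; a server bound to only some of them fails the probe
    on the others.
    """
    return [(ip, port) for ip, port in product(ip_addresses, ports)]

# Two network resources and two ports yield four probe targets, not two.
targets = probe_targets(["192.168.10.1", "192.168.10.2"], [80, 8080])
```

This is why a magnus.conf that adds a port for only one of the configured IP addresses can cause spurious probe failures.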
The probe executes the following steps.
The probe uses the specified IP address and port combination to connect to the Web server. If the connection is unsuccessful, the probe concludes that a complete failure has occurred. The probe then records the failure and takes appropriate action.
If the probe successfully connects, the probe checks whether the Web server is running in secure mode. If so, the probe disconnects and returns with a success status. No further checks are performed for a secure iWS server.
However, if the Web server is running in insecure mode, the probe sends an HTTP 1.0 HEAD request to the Web server and waits for the response. The request can be unsuccessful for various reasons, including heavy network traffic, heavy system load, and misconfiguration.
Misconfiguration can occur when the Web server is not configured to listen on all IP address and port combinations that are being probed. The Web server should service every port for every IP address specified for this resource.
Misconfigurations can also result if the Network_resources_used and Port_list resource properties are not set correctly while you create the resource.
If the reply to the query is not received within the Probe_timeout resource-property limit, the probe considers this a failure of the Sun Cluster HA for iPlanet Web Server data service. The failure is recorded in the probe's history.
A probe failure can be a complete or partial failure. The following probe failures are considered complete failures.
Failure to connect to the server, as the following error message flags, with %s indicating the host name and %d the port number.
Failed to connect to %s port %d
Running out of time (exceeding the resource-property timeout Probe_timeout) after trying to connect to the server.
Failure to successfully send the probe string to the server, as the following error message flags, with the first %s indicating the host name and %d the port number. The second %s indicates further details about the error.
Failed to communicate with server %s port %d: %s
The following probe failures are considered partial failures.
Running out of time (exceeding the resource-property timeout Probe_timeout) while trying to read the reply from the server to the probe's query.
Failing to read data from the server for other reasons, as the following error message flags, with the first %s indicating the host name and %d the port number. The second %s indicates further details about the error.
Failed to communicate with server %s port %d: %s
The monitor accumulates two such partial failures within the resource-property interval Retry_interval and counts them as one failure.
Based on the history of failures, a failure can cause either a local restart or a failover of the data service. This action is further described in "Health Checks of the Data Service".
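The failure accounting the text describes can be modeled in a few lines. This sketch is illustrative only; the function and event representation are hypothetical.

```python
def count_failures(events, now, retry_interval=60):
    """Tally probe failures within the Retry_interval window.

    events: list of (timestamp, kind) tuples, with kind either
    "complete" or "partial". Each complete failure counts as one
    failure; partial failures count one per pair, because the monitor
    accumulates two partial failures and counts them as one.
    """
    recent = [(t, k) for t, k in events if now - t <= retry_interval]
    complete = sum(1 for _, k in recent if k == "complete")
    partial = sum(1 for _, k in recent if k == "partial")
    return complete + partial // 2
```

A single partial failure inside the window therefore triggers no action, while a complete failure, or two partial failures close together, adds to the history that drives a local restart or failover.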