Sun Cluster 3.0 Data Services Installation and Configuration Guide

Chapter 3 Installing and Configuring Sun Cluster HA for iPlanet Web Server

This chapter provides the procedures for installing and configuring Sun Cluster HA for iPlanet Web Server. This data service was formerly known as Sun Cluster HA for Netscape HTTP. Some error messages from the application might still use the name Netscape, but they refer to iPlanet Web Server.

This chapter contains the following procedures:

• "How to Install an iPlanet Web Server"

• "How to Configure an iPlanet Web Server"

• "How to Install Sun Cluster HA for iPlanet Web Server Packages"

• "How to Register and Configure Sun Cluster HA for iPlanet Web Server"

• "How to Configure SUNW.HAStorage Resource Type"

• "How to Configure Sun Cluster HA for iPlanet Web Server Extension Properties"

You can configure Sun Cluster HA for iPlanet Web Server as a failover or scalable service. For general information about data services, resource groups, resources, and other related topics, see Chapter 1, Planning for Sun Cluster Data Services and the Sun Cluster 3.0 Concepts document.


Note -

If you are running multiple data services in your Sun Cluster configuration, you can set up the data services in any order, with one exception: If Sun Cluster HA for iPlanet Web Server depends on Sun Cluster HA for DNS, you must set up DNS first. See Chapter 6, Installing and Configuring Sun Cluster HA for Domain Name Service (DNS) for details. DNS software is included in the Solaris operating environment. If the cluster is to obtain the DNS service from another server, then configure the cluster to be a DNS client first.



Note -

After installation, do not start or stop the iPlanet Web Server manually; use only the cluster administration command scswitch(1M). See the man page for details. After it is started, the iPlanet Web Server is controlled by Sun Cluster.
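For example, you might use scswitch to take the Web server offline and bring it back online under cluster control. This is a sketch only; the resource group name iws-resource-group is hypothetical.

```shell
(Take the resource group, and with it the iPlanet Web Server, offline.)
# scswitch -F -g iws-resource-group

(Bring the resource group back online.)
# scswitch -Z -g iws-resource-group
```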


Planning the Installation and Configuration

Use the following section in conjunction with the worksheets in the Sun Cluster 3.0 Release Notes as a checklist before installing and configuring Sun Cluster HA for iPlanet Web Server.

Consider the following prior to starting your installation:

Installing and Configuring Sun Cluster HA for iPlanet Web Server

Table 3-1 lists the sections that describe the installation and configuration tasks.

Table 3-1 Task Map: Installing and Configuring Sun Cluster HA for iPlanet Web Server

Task: Install iPlanet Web Server
For Instructions, Go To: "Installing and Configuring an iPlanet Web Server"

Task: Install the Sun Cluster HA for iPlanet Web Server data service packages
For Instructions, Go To: "Installing Sun Cluster HA for iPlanet Web Server Packages"

Task: Configure the Sun Cluster HA for iPlanet Web Server data service
For Instructions, Go To: "Registering and Configuring Sun Cluster HA for iPlanet Web Server"

Task: Configure resource extension properties
For Instructions, Go To: "Configuring Sun Cluster HA for iPlanet Web Server Extension Properties"

Installing and Configuring an iPlanet Web Server

This section describes the steps for installing the iPlanet Web Server (by using the setup command) and enabling it to run as the Sun Cluster HA for iPlanet Web Server data service.


Note -

You must follow certain conventions when you configure URL mappings for the Web server. For example, to preserve availability when setting the CGI directory, you must locate the mapped directories on the cluster file system. In this example, you map your CGI directory to /global/pathname/cgi-bin.

In situations where the CGI programs access "back-end" servers, such as an RDBMS, ensure that the "back-end" server is also controlled by Sun Cluster. If the server is an RDBMS supported by Sun Cluster, use one of the highly available RDBMS packages. Alternatively, you can put the server under Sun Cluster control by using the APIs documented in the Sun Cluster 3.0 Data Services Developers' Guide.
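As a sketch of the CGI convention above, you would create the mapped directory on the cluster file system before configuring the mapping in the Web server. The path below is the same placeholder used in this note.

```shell
# Create the CGI directory on the cluster file system
# (/global/pathname is a placeholder for your global mount point)
phys-schost-1# mkdir -p /global/pathname/cgi-bin
```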


How to Install an iPlanet Web Server

To perform this procedure, you need the following information about your configuration:


Note -

If you are running the Sun Cluster HA for iPlanet Web Server service and another HTTP server and they use the same network resources, configure them to listen on different ports. Otherwise, a port conflict might occur between the two servers.


  1. Become superuser on a node in the cluster.

  2. Run the setup command from the iPlanet install directory on the CD.

  3. When prompted, type the location where the iPlanet server binaries will be installed.

    You can specify a location on the cluster file system or on local disks for the location of the install. If you choose to install on local disks, run setup on all the cluster nodes that are potential primaries of the network resource (logical host name or shared address) specified in the next step.
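If you install on local disks, a sketch of the repeated install might look like the following; the CD mount point is hypothetical.

```shell
(Repeat on every node that is a potential primary of the network resource.)
phys-schost-1# cd /cdrom/iplanet_web_server
phys-schost-1# ./setup
```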

  4. When prompted for a machine name, type the logical host name on which the iPlanet server depends and the appropriate DNS domain name.

    A full logical host name is of the format network-resource.domainname, such as schost-1.sun.com.


    Note -

    For Sun Cluster HA for iPlanet Web Server to fail over correctly, you must use either the logical host name or shared address resource name (rather than the physical host name) here and everywhere else you are asked.


  5. Select "Run admin server as root" when asked.

    Note the port number selected by the iPlanet install script for the administration server if you want to use this default value later when configuring an instance of the iPlanet Web server. Otherwise, you can specify a different port number when configuring the iPlanet server instance.

  6. Type a Server Administrator ID and a chosen password when asked.

    Follow the guidelines for your system.

    When a message indicating that the admin server will be started is displayed, your installation is ready for configuration.

Where to Go from Here

To configure the Web server, see the next section, "How to Configure an iPlanet Web Server".

How to Configure an iPlanet Web Server

This procedure describes how to configure an instance of the iPlanet Web server to be highly available. You perform this procedure by using the Netscape browser to interact with the iPlanet administration server.

Note the following before performing this procedure:

  1. From the administrative workstation or a cluster node, start the Netscape browser.

  2. On one of the cluster nodes, go to the directory https-admserv, then start the iPlanet admin server:


    # cd https-admserv
    # ./start
    

  3. Type the URL of the iPlanet admin server in the Netscape browser.

    The URL consists of the physical host name and the port number that the iPlanet installation script established in Step 5 of the server installation procedure, for example, n1.eng.sun.com:8888. When you perform Step 2 above, the ./start command displays the admin URL.

    When prompted, log in to the iPlanet administration server interface by using the user ID and password you specified in Step 6 of the server installation procedure.

  4. Begin to administer the iPlanet Web Server instance that was created. If you need another instance, create a new one.

    The administration graphical interface provides a form with details of the iPlanet server configuration. You can accept the defaults on the form, with the following exceptions:

    • Verify that the server name is correct.

    • Verify that the server user is set as root.

    • Change the bind address field to:

      • A logical host name or shared address if you are using DNS as your name service

      • The IP address associated with the logical host name or shared address if you are using NIS as your name service

  5. Create a directory on the local disk of all the nodes to hold the logs, error files, and PID file managed by iPlanet Web Server.

    For iPlanet to work correctly, these files must be located on each node of the cluster, not on the cluster file system.

    Choose a location on the local disk that is the same for all the nodes in the cluster. Use the mkdir -p command to create the directory. Make nobody the owner of this directory.

    For example:


    phys-schost-1# mkdir -p /var/pathname/http_instance/logs/
    phys-schost-1# chown nobody /var/pathname/http_instance/logs/
    

    Note -

    If you anticipate large error logs and PID files, do not put them in a directory under /var, because they could fill up the /var file system. Instead, create the directory in a partition with enough space to handle large files.


  6. Edit the ErrorLog and PidLog entries in the magnus.conf file to reflect the directory created in the previous step and synchronize the changes from the administrator's interface.

    The magnus.conf file specifies the locations for the error files and PID files. You must edit this file to change the location to that of the directory you created in Step 5. The magnus.conf file is located in the config directory of the iPlanet server instance. If the instance directory is located on the local file system, you must modify magnus.conf on each of the nodes.

    Change the entries as follows:


    # Current ErrorLog and PidLog entries
    ErrorLog /global/data/netscape/https-schost-1/logs/error
    PidLog /global/data/netscape/https-insecure-schost-1/logs/pid
     
    # New entries
    ErrorLog /var/pathname/http_instance/logs/error
    PidLog /var/pathname/http_instance/logs/pid

    As soon as the administrator's interface detects your changes, it displays a warning message, as follows:


    Warning: Manual edits not loaded
    Some configuration files have been edited by hand. Use the Apply
    button on the upper right side of the screen to load the latest
    configuration files.

    Click Apply as prompted.

    The administrator's interface then displays this warning:


    Configuration files have been edited by hand. Use this button to
    load the latest configuration files.

    Click Load Configuration Files as prompted.

  7. Use the administrator's interface to set the location of the access log file.

    From the administration graphical interface, click the Preferences tab and then Logging Options on the side bar. A form is then displayed for configuring the Access Log parameter.

    Change the log file to be in the directory you created in Step 5.

    For example:


    Log File: /var/pathname/http_instance/logs/access

  8. Click Save to save your changes.

    Do not click Save and Apply; doing so starts the iPlanet Web Server.

Where to Go from Here

If the data service packages for Sun Cluster HA for iPlanet Web Server have not been installed from the Sun Cluster data service CD, go to "Installing Sun Cluster HA for iPlanet Web Server Packages". Otherwise, go to "Registering and Configuring Sun Cluster HA for iPlanet Web Server".

Installing Sun Cluster HA for iPlanet Web Server Packages

The scinstall(1M) utility installs SUNWschtt, the Sun Cluster HA for iPlanet Web Server data service package, on a cluster. You can install specific data service packages from the Sun Cluster data service CD by using interactive scinstall, or you can install all data service packages on the CD by using the -s option to non-interactive scinstall. The preferred method is to use interactive scinstall, as described in the following procedure.

The data service packages might have been installed as part of your initial Sun Cluster installation. If not, use this procedure to install them now.

How to Install Sun Cluster HA for iPlanet Web Server Packages

You need the Sun Cluster data service CD to complete this procedure. Run this procedure on all the cluster nodes that will run Sun Cluster HA for iPlanet Web Server.

  1. Load the data service CD into the CD-ROM drive.

  2. Run scinstall with no options.

    This command starts scinstall in interactive mode.

  3. Select the menu option: "Add support for new data service to this cluster node."

    You can then load software for any data services that exist on the CD.

  4. Exit scinstall and unload the CD from the drive.

Where to Go from Here

See "Registering and Configuring Sun Cluster HA for iPlanet Web Server" to register Sun Cluster HA for iPlanet Web Server and configure the cluster for the data service.

Registering and Configuring Sun Cluster HA for iPlanet Web Server

You can configure Sun Cluster HA for iPlanet Web Server as a failover service or as a scalable service. You must include some additional steps to configure iPlanet as a scalable service. In the first procedure in this section, these additional steps begin with a notation that they are required for scalable services only. Individual examples of a failover service and a scalable service follow the procedure.

How to Register and Configure Sun Cluster HA for iPlanet Web Server

To register and configure the Sun Cluster HA for iPlanet Web Server data service, use the Cluster Module of Sun Management Center or the following command-line procedure.

To perform this procedure, you must have the following information:

Perform this procedure on any cluster member.

  1. Become superuser on a node in the cluster.

  2. Register the resource type for Sun Cluster HA for iPlanet Web Server.


    # scrgadm -a -t SUNW.iws
    
    -a

    Adds the data service resource type.

    -t SUNW.iws

    Specifies the predefined resource type name for your data service.

  3. Create a failover resource group to hold the network and application resources.

    For failover services, this resource group also holds the application resources.

    You can optionally select the set of nodes on which the data service can run with the -h option.


    # scrgadm -a -g fo-resource-group-name [-h nodelist]
    -g fo-resource-group-name

    Specifies the name of the failover resource group. This name can be your choice but must be unique for resource groups within the cluster.

    -h nodelist

    An optional comma-separated list of physical node names or IDs that identify potential masters. The order here determines the order in which the nodes are considered as primary during failover.


    Note -

    Use -h to specify the order of the node list. If all the nodes in the cluster are potential masters, you need not use the -h option.


  4. Verify that all network addresses that you are using have been added to your name service database.

    You should have done this verification as part of the Sun Cluster installation. For details, see the planning chapter in the Sun Cluster 3.0 Installation Guide.


    Note -

    To avoid any failures because of name service lookup, ensure that all logical host names and shared addresses are present in the server's and client's /etc/hosts file. Configure name service mapping in /etc/nsswitch.conf on the servers to first check the local files before trying to access NIS or NIS+.
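    For example, the relevant entries might look like the following; the IP address is a placeholder.

    ```shell
    # /etc/hosts entry for the logical host name or shared address
    192.168.10.20   schost-1

    # /etc/nsswitch.conf hosts line: consult local files before NIS
    hosts:  files nis
    ```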


  5. Add a network resource (logical host name or shared address) to the failover resource group.


    # scrgadm -a {-S | -L} -g fo-resource-group-name \
    -l network-resource,... [-j resource-name] \
    [-X auxnodelist=nodeid, ...] [-n network-interface-id-list]
    -S | -L

    You use -S for shared address resources or -L for logical host name resources.

    -g fo-resource-group-name

    Specifies the name of the failover resource group.

    -l network-resource, ...

    Specifies a comma-separated list of network resources to add. You can use the -j option to specify a name for the resources. If you do not do so, the network resources have the name of the first entry on the list.

    -j resource-name

    Specifies an optional resource name. If you do not supply this name, the name of the network resource defaults to the first name specified after the -l option.

    -X auxnodelist=nodeid, ...

    Specifies an optional comma-separated list of physical node IDs that identify cluster nodes that can host the shared address but never serve as a primary in the case of failover. These nodes are mutually exclusive with the nodes identified in nodelist for the resource group, if specified.

    -n network-interface-id-list

    Specifies an optional comma-separated list that identifies the NAFO groups on each node. All nodes in the nodelist of the resource group must be represented in network-interface-id-list. If you do not specify this option, scrgadm attempts to discover a network adapter on the subnet identified by the host name list for each node in nodelist.

  6. Scalable services only: Create a scalable resource group to run on all desired nodes of the cluster.

    If you are running Sun Cluster HA for iPlanet Web Server as a failover data service, skip this step and Step 7, and proceed to Step 8.

    Create a resource group to hold a data service application resource. You must specify the maximum and desired number of primary nodes, as well as a dependency between this resource group and the failover resource group you created in Step 3. This dependency ensures that in the event of failover, the resource manager starts up the network resource before any data services that depend on it.


    # scrgadm -a -g ss-resource-group-name \
    -y Maximum_primaries=m -y Desired_primaries=n \
    -y RG_dependencies=fo-resource-group-name
    
    -g ss-resource-group-name

    Specifies the name of the scalable resource group. This name can be your choice but must be unique for resource groups within the cluster.

    -y Maximum_primaries=m

    Specifies the maximum number of active primary nodes allowed for this resource group. If you do not assign a value to this property, the default is 1.

    -y Desired_primaries=n

    Specifies the desired number of active primary nodes allowed for this resource group. If you do not assign a value to this property, the default is 1.

    -y RG_dependencies=fo-resource-group-name

    Identifies the failover resource group that contains the shared address resource on which the resource group being created depends (the group you created in Step 3).

  7. Scalable services only: Create an application resource in the scalable resource group.

    If you are running Sun Cluster HA for iPlanet Web Server as a failover data service, skip to Step 8. You can repeat this step to add multiple application resources (such as secure and insecure versions) to the same resource group.

    You might also want to set load balancing for the data service. To do so, use the two standard resource properties Load_balancing_policy and Load_balancing_weights. For a description of these properties, see Appendix A, Standard Properties. See also the examples that follow this section.


    # scrgadm -a -j resource-name -g ss-resource-group-name \
    -t resource-type-name -y Network_resources_used=network-resource, ... \
    -y Port_list=port-number/protocol, ... -y Scalable=True \
    -x Confdir_list=config-directory, ...
    -j resource-name

    Specifies the name of the resource to add.

    -g ss-resource-group-name

    Specifies the name of the scalable resource group into which the resources are to be placed.

    -t resource-type-name

    Specifies the type of the resource to add.

    -y Network_resources_used= network-resource, ...

    Specifies a comma-separated list of network resources that identify the shared addresses used by the data service.

    -y Port_list=port-number/protocol, ...

    Specifies a comma-separated list of port numbers and protocol to be used, for example, 80/tcp,81/tcp.

    -y Scalable=True

    Specifies a Boolean that is required for scalable services.

    -x Confdir_list=config-directory, ...

    Specifies a comma-separated list of the locations of the iPlanet configuration files. This is a required extension property for Sun Cluster HA for iPlanet Web Server.


    Note -

    A one-to-one mapping applies between Confdir_list and Port_list; that is, each value in one list must correspond to the value in the other list, in the order specified.


  8. Failover services only: Create an application resource in the failover resource group.

    Perform this step only if you are running Sun Cluster HA for iPlanet Web Server as a failover data service. If you are running Sun Cluster HA for iPlanet Web Server as a scalable service, you must have performed Step 6 and Step 7 previously and must now proceed to Step 9. You can repeat this step to add multiple application resources (such as secure and insecure versions) to the same resource group.


    # scrgadm -a -j resource-name -g fo-resource-group-name \
    -t resource-type-name -y Network_resources_used=logical-hostname-list \
    -y Port_list=port-number/protocol \
    -x Confdir_list=config-directory
    
    -j resource-name

    Specifies the name of the resource to add.

    -g fo-resource-group-name

    Specifies the name of the failover resource group into which the resources are to be placed.

    -t resource-type-name

    Specifies the type of the resource to add.

    -y Network_resources_used=logical-hostname-list

    Specifies a comma-separated list of network resources that identify the logical hosts used by the data service.

    -y Port_list=port-number/protocol

    Specifies the port number and protocol to be used, for example, 80/tcp. Port_list for failover services must have exactly one entry, because of the one-to-one mapping rule between Port_list and Confdir_list.

    -x Confdir_list=config-directory

    Specifies the location of the iPlanet configuration files. Confdir_list for failover services must have exactly one entry. config-directory must contain a directory called config. This is a required extension property.


    Note -

    Optionally, you can set additional extension properties that belong to the iPlanet data service to override the default value. For a list of these properties, see Table 3-2.


  9. Bring the failover resource group online.


    # scswitch -Z -g fo-resource-group-name
    
    -Z

    Enables the network resource and fault monitoring, switches the resource group into a managed state, and brings it online.

    -g fo-resource-group-name

    Specifies the name of the failover resource group.

  10. Scalable services only: Bring the scalable resource group online.


    # scswitch -Z -g ss-resource-group-name
    
    -Z

    Enables the resource and monitor, moves the resource group to the managed state, and brings it online.

    -g ss-resource-group-name

    Specifies the name of the scalable resource group.

Example-Registering Scalable Sun Cluster HA for iPlanet Web Server

The following example shows how to register a scalable iPlanet service.


Cluster Information
Node names: phys-schost-1, phys-schost-2
Shared address: schost-1
Resource groups: sa-schost-1 (for shared addresses), iws-schost-1 (for scalable iPlanet application resources)
Resources: schost-1 (shared address), iplanet-insecure (insecure iPlanet application resource),
    iplanet-secure (secure iPlanet application resource)
 
(Add a failover resource group to contain shared addresses.)
# scrgadm -a -g sa-schost-1
 
(Add the shared address resource to the failover resource group.)
# scrgadm -a -S -g sa-schost-1 -l schost-1
 
(Add a scalable resource group.)
# scrgadm -a -g iws-schost-1 -y Maximum_primaries=2 \
-y Desired_primaries=2 -y RG_dependencies=sa-schost-1
 
(Register the iPlanet resource type.)
# scrgadm -a -t SUNW.iws
 
(Add an insecure iPlanet instance with default load balancing.)
# scrgadm -a -j iplanet-insecure -g iws-schost-1 \
-t SUNW.iws \
-x Confdir_list=/opt/iplanet/https-iplanet-insecure \
-y Scalable=True -y Network_resources_used=schost-1 \
-y Port_list=80/tcp
 
(Add a secure iPlanet instance with sticky IP load balancing.)
# scrgadm -a -j iplanet-secure -g iws-schost-1 \
-t SUNW.iws \
-x Confdir_list=/opt/iplanet/https-iplanet-secure \
-y Scalable=True -y Network_resources_used=schost-1 \
-y Port_list=443/tcp -y Load_balancing_policy=LB_STICKY \
-y Load_balancing_weights=40@1,60@2
 
(Bring the failover resource group online.)
# scswitch -Z -g sa-schost-1
 
(Bring the scalable resource group online.)
# scswitch -Z -g iws-schost-1

Example-Registering Failover Sun Cluster HA for iPlanet Web Server

The following example shows how to register a failover iPlanet service on a two-node cluster.


Cluster Information
Node names: phys-schost-1, phys-schost-2
Logical hostname: schost-1
Resource group: lh-schost-1 (for all resources)
Resources: schost-1 (logical hostname), iplanet-insecure (insecure iPlanet application resource),
  iplanet-secure (secure iPlanet application resource)
 
(Add the resource group to contain all resources.)
# scrgadm -a -g lh-schost-1
 
(Add the logical hostname resource to the resource group.)
# scrgadm -a -L -g lh-schost-1 -l schost-1 
 
(Register the iPlanet resource type.)
# scrgadm -a -t SUNW.iws
 
(Add an insecure iPlanet application resource instance.)
# scrgadm -a -j iplanet-insecure -g lh-schost-1 \
-t SUNW.iws -x Confdir_list=/opt/iplanet/conf \
-y Scalable=False -y Network_resources_used=schost-1 \
-y Port_list=80/tcp
 
(Add a secure iPlanet application resource instance.)
# scrgadm -a -j iplanet-secure -g lh-schost-1 \
-t SUNW.iws \
-x Confdir_list=/opt/iplanet/https-iplanet-secure \
-y Scalable=False -y Network_resources_used=schost-1 \
-y Port_list=443/tcp
 
(Bring the failover resource group online.)
# scswitch -Z -g lh-schost-1

Where to Go from Here

To set or modify resource extension properties, see "Configuring Sun Cluster HA for iPlanet Web Server Extension Properties".

How to Configure SUNW.HAStorage Resource Type

The SUNW.HAStorage resource type synchronizes actions between HA storage and the data service. Because Sun Cluster HA for iPlanet Web Server is scalable, we strongly recommend that you set up SUNW.HAStorage.

For details on the background, see the SUNW.HAStorage(5) man page and "Relationship Between Resource Groups and Disk Device Groups". For the procedure, see "How to Set Up SUNW.HAStorage Resource Type for New Resources".

Configuring Sun Cluster HA for iPlanet Web Server Extension Properties

For failover, the data service requires that Confdir_list contain exactly one entry. If you want multiple configuration files (instances), create multiple failover resources, each with one Confdir_list entry.
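For example, to run two iPlanet instances as failover resources, you would create one resource per instance, each with a single-entry Confdir_list. The resource names, paths, and ports below are hypothetical.

```shell
(Add one failover application resource per iPlanet instance.)
# scrgadm -a -j iws-instance-1 -g lh-schost-1 -t SUNW.iws \
-x Confdir_list=/global/iws/https-instance-1 \
-y Network_resources_used=schost-1 -y Port_list=80/tcp

# scrgadm -a -j iws-instance-2 -g lh-schost-1 -t SUNW.iws \
-x Confdir_list=/global/iws/https-instance-2 \
-y Network_resources_used=schost-1 -y Port_list=81/tcp
```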

For details on all Sun Cluster properties, see Appendix A, Standard Properties.

How to Configure Sun Cluster HA for iPlanet Web Server Extension Properties

Typically, you configure extension properties with the Cluster Module of Sun Management Center or on the command line with scrgadm -x parameter=value at the time you create the iPlanet Web Server resource. You can also configure them later by using the procedures described in Chapter 9, Administering Data Service Resources.

Some extension properties can be updated dynamically and others only when the resource is created. The only required extension property for creating an iPlanet server resource is Confdir_list. Table 3-2 describes extension properties you can configure for the iPlanet server. The Tunable column indicates when the property can be updated.
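For example, to change a tunable property such as Probe_timeout on an existing resource, you might use scrgadm with the -c (change) option; the resource name is hypothetical.

```shell
(Raise the fault monitor probe time-out to 60 seconds.)
# scrgadm -c -j iplanet-insecure -x Probe_timeout=60
```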

Table 3-2 Sun Cluster HA for iPlanet Web Server Extension Properties

Name/Data Type: Confdir_list (string array)
Default: None
Range: None
Tunable: At creation
Description: A pointer to the server root directory for a particular iPlanet Web server instance. If the Netscape Directory Server is in secure mode, the path name must contain a file named keypass, which contains the secure key password needed to start this instance.

Name/Data Type: Monitor_retry_count (integer)
Range: 0 - 2,147,483,641 (-1 indicates an infinite number of retry attempts)
Tunable: Any time
Description: The number of times the fault monitor is to be restarted by the process monitor facility during the time window specified by the Monitor_retry_interval property. Note that this property refers to restarts of the fault monitor itself rather than to the resource. Restarts of the resource are controlled by the system-defined properties Retry_interval and Retry_count.

Name/Data Type: Monitor_retry_interval (integer)
Range: 0 - 2,147,483,641 (-1 indicates an infinite retry interval)
Tunable: Any time
Description: The time (in minutes) over which failures of the fault monitor are counted. If the number of times the fault monitor fails exceeds the value specified in the extension property Monitor_retry_count within this period, the fault monitor is not restarted by the process monitor facility.

Name/Data Type: Probe_timeout (integer)
Default: 30
Range: 0 - 2,147,483,641
Tunable: Any time
Description: The time-out value (in seconds) used by the fault monitor to probe an iPlanet Web Server instance.