Sun Cluster 3.0 Release Notes

Known Documentation Problems

This section discusses errors you might encounter in the documentation and the steps to correct them.

Installation Guide

The Sun Cluster 3.0 Installation Guide contains the following documentation errors:

Hardware Guide

In the Sun Cluster 3.0 Hardware Guide, the following procedures are missing or documented incorrectly. Use the corrected procedures provided here:

How to Move a Disk Cable to a New Adapter

Use the following procedure to move a disk cable to a new adapter within a node.

  1. Quiesce all I/O to the affected disk(s).

  2. Unplug the cable from the old adapter.

  3. Run the cfgadm(1M) command on the local node to unconfigure all drives affected by the move.

    Or, reboot the node by using the following command.


    # reboot -- -r
    
  4. Run the devfsadm -C command on the local node to clean up the Solaris device link.

  5. Run the scdidadm -C command on the local node to clean up the DID device path.

  6. Connect the cable to the new adapter.

  7. Run the cfgadm command on the local node to configure the drives in the new location.

    Or, reboot the node by using the following command.


    # reboot -- -r
    
  8. Run the scgdevs command to add the new DID device path.
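
The transcript below is a minimal sketch of steps 3 through 8 without the optional reboots. The attachment point c1::dsk/c1t3d0 and controller c2 are assumed example names only; run cfgadm -al to identify the attachment points that actually apply to your drives.

    (c1::dsk/c1t3d0 and c2 are assumed examples; list the real attachment points with cfgadm -al)
    # cfgadm -c unconfigure c1::dsk/c1t3d0
    # devfsadm -C
    # scdidadm -C
    (move the disk cable to the new adapter, then configure the drives at the new location)
    # cfgadm -c configure c2
    # scgdevs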

How to Move a Disk Cable From One Node to Another

Use the following procedure to move a disk cable from one node to another node.

  1. Delete all references to the path you wish to remove from all volume manager and data service configurations.

  2. Quiesce all I/O to the affected disk(s).

  3. Unplug the cable from the old node.

  4. Run the cfgadm command on the old node to unconfigure all drives affected by the move.

    Or, reboot the node by using the following command.


    # reboot -- -r
    
  5. Run the devfsadm -C command on the old node to clean up the Solaris device link.

  6. Run the scdidadm -C command on the old node to clean up the DID device path.

  7. Connect the cable to the new node.

  8. Run the cfgadm command on the new node to configure the drives in the new location.

    Or, reboot the node by using the following command.


    # reboot -- -r
    
  9. Run the devfsadm command on the new node to create the new Solaris device links.

  10. Run the scgdevs command on the new node to add the new DID device path.

  11. Add the path on the new node to any required volume manager and data service configurations.

    When configuring data services, check that your node failover preferences are set to reflect the new configuration.
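
The node failover preferences mentioned in step 11 are controlled by the Nodelist property of the resource group that uses the moved path. The sketch below assumes a resource group named web-rg and nodes named phys-node-1 and phys-node-2 (all assumed names), with the node that now owns the cable listed first; scstat -g then shows the resulting resource group state.

    (web-rg, phys-node-1, and phys-node-2 are assumed names)
    # scrgadm -c -g web-rg -y Nodelist=phys-node-2,phys-node-1
    # scstat -g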

How to Update Cluster Software to Reflect Proper Device Configuration

If the preceding procedures are not followed correctly, an error might be logged the next time you run the scdidadm -r command or the scgdevs command. To update the cluster software to reflect the proper device configuration, perform the following steps.

  1. Make sure the cable configuration is as you want it to be, and that the cable is detached from the old node.

  2. Make sure the old node is removed from any required volume manager or data service configurations.

  3. Run the cfgadm command on the old node to unconfigure all drives affected by the move.

    Or, reboot the node by using the following command.


    # reboot -- -r
    
  4. Run the devfsadm -C command on the node from which you removed the cable.

  5. Run the scdidadm -C command on the node from which you removed the cable.

  6. Run the cfgadm command on the new node to configure the drives in the new location.

    Or, reboot the node by using the following command.


    # reboot -- -r
    
  7. Run the scgdevs command on the new node to add the new DID device path.

  8. Run the scdidadm -R device command on the new node to make sure that SCSI reservations are in the correct state.
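
As a sketch of steps 6 through 8, assume the moved drives are on controller c2 of the new node and end up as DID instance d5 (both assumed examples; scdidadm -l lists the actual instance numbers, and scdidadm(1M) describes the accepted forms of the device argument to -R).

    (c2 and d5 are assumed examples)
    # cfgadm -c configure c2
    # scgdevs
    # scdidadm -l
    # scdidadm -R d5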

Data Services Developers' Guide

The sample code in Appendix B of the Sun Cluster 3.0 Data Services Developers' Guide has two known problems:

Concepts Guide

The following points should be noted about Sun Cluster 3.0 Concepts:

Using the Cluster Interconnect for Application Traffic

A cluster must have multiple network connections between nodes, forming the cluster interconnect. The clustering software uses multiple interconnects both for high availability and to improve performance. For internal traffic (for example, file system data or scalable services data), messages are striped across all available interconnects in a round-robin fashion.

The cluster interconnect is also available to applications, for highly available communication between nodes. For example, a distributed application might have components running on different nodes that need to communicate. By using the cluster interconnect rather than the public network, these connections can withstand the failure of an individual link.

To use the cluster interconnect for communication between nodes, an application must use the private hostnames configured when the cluster was installed. For example, if the private hostname for node 1 is clusternode1-priv, use that name to communicate over the cluster interconnect to node 1. TCP sockets opened using this name are routed over the cluster interconnect and can be transparently re-routed in the event of network failure.

Note that because the private hostnames can be configured during installation, the cluster interconnect can use any name chosen at that time. The actual name can be obtained from scha_cluster_get(3HA) with the scha_privatelink_hostname_node argument.
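
As a sketch, the same information is available from the command line through scha_cluster_get(1HA). The optag shown is the command-line counterpart of the scha_privatelink_hostname_node argument, and the node name node1 is an assumed example; confirm the exact optag spelling in the scha_cluster_get(1HA) man page for your release.

    (node1 is an assumed node name; verify the optag in scha_cluster_get(1HA))
    # scha_cluster_get -O PRIVATELINK_HOSTNAME_NODE node1
    clusternode1-priv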

For application-level use of the cluster interconnect, a single interconnect is used between each pair of nodes, but separate interconnects are used for different node pairs if possible. For example, consider an application running on three nodes and communicating over the cluster interconnect. Communication between nodes 1 and 2 might take place on interface hme0, while communication between nodes 1 and 3 might take place on interface qfe1. That is, application communication between any two nodes is limited to a single interconnect, while internal clustering communication is striped over all interconnects.

Note that the application shares the interconnect with internal clustering traffic, so the bandwidth available to the application depends on the bandwidth used for other clustering traffic. In the event of a failure, internal traffic can round-robin over the remaining interconnects, while application connections on a failed interconnect can switch to a working interconnect.

Two types of addresses support the cluster interconnect, and gethostbyname(3N) on a private hostname normally returns two IP addresses. The first address is called the logical pairwise address, and the second address is called the logical pernode address.

A separate logical pairwise address is assigned to each pair of nodes. This small logical network supports failover of connections. Each node is also assigned a fixed pernode address. That is, the logical pairwise addresses for clusternode1-priv are different on each node, while the logical pernode address for clusternode1-priv is the same on each node. A node does not have a pairwise address to itself, however, so gethostbyname(clusternode1-priv) on node 1 returns only the logical pernode address.

Note that applications accepting connections over the cluster interconnect and then verifying the IP address for security reasons must check against all IP addresses returned from gethostbyname, not just the first IP address.
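
One quick way to observe this behavior is with getent(1M), which resolves names through the same hosts database that gethostbyname(3N) uses. The command below is only a sketch; the number and order of addresses returned depend on your private-network configuration, and on node 1 itself only the pernode address appears.

    # getent hosts clusternode1-priv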

If you need consistent IP addresses in your application at all points, configure the application to bind to the pernode address on both the client and the server side so that all connections appear to originate from and terminate at the pernode address.

Data Services Installation and Configuration Guide

Chapter 5, "Installing and Configuring Sun Cluster HA for Apache," in the Sun Cluster 3.0 Data Services Installation and Configuration Guide describes the procedure for installing the Apache Web Server from the Apache web site (http://www.apache.org). However, you can also install the Apache Web Server from the Solaris 8 operating environment CD-ROM.

The Apache binaries are included in three packages (SUNWapchr, SUNWapchu, and SUNWapchd) that together form the SUNWCapache package metacluster. You must install SUNWapchr before SUNWapchu.

Place the Web server binaries on the local file system on each of your cluster nodes or on a cluster file system.

Installing Apache from the Solaris 8 CD-ROM

This procedure documents the steps required to use the Sun Cluster HA for Apache data service with the version of the Apache Web Server that is on the Solaris 8 operating environment CD-ROM.

  1. Install the Apache packages SUNWapchr, SUNWapchu, and SUNWapchd if they are not already installed.

    Use pkginfo(1) to determine if the packages are already installed; a sample check appears after this procedure.


    # pkgadd -d <Solaris 8 Product directory> SUNWapchr SUNWapchu SUNWapchd
    ...
    Installing Apache Web Server (root) as SUNWapchr
    ...
    [ verifying class initd ]
    /etc/rc0.d/K16apache linked pathname
    /etc/rc1.d/K16apache linked pathname
    /etc/rc2.d/K16apache linked pathname
    /etc/rc3.d/S50apache linked pathname
    /etc/rcS.d/K16apache linked pathname
    ...
  2. Disable the start and stop run control scripts that were just installed as part of the SUNWapchr package.

    Disabling these scripts is necessary because the Sun Cluster HA for Apache data service will start and stop the Apache application after the data service has been configured. Perform the following steps:

    1. List the Apache run control scripts.

    2. Rename the Apache run control scripts.

    3. Verify that all the Apache-related scripts have been renamed.


    Note -

    The following example changes the first letter in the name of the run control script from uppercase to lowercase. However, you can rename the scripts in any way that is consistent with your normal administration practices.



    # ls -1 /etc/rc?.d/*apache
    /etc/rc0.d/K16apache
    /etc/rc1.d/K16apache
    /etc/rc2.d/K16apache
    /etc/rc3.d/S50apache
    /etc/rcS.d/K16apache
    
    # mv /etc/rc0.d/K16apache /etc/rc0.d/k16apache
    # mv /etc/rc1.d/K16apache /etc/rc1.d/k16apache
    # mv /etc/rc2.d/K16apache /etc/rc2.d/k16apache
    # mv /etc/rc3.d/S50apache /etc/rc3.d/s50apache
    # mv /etc/rcS.d/K16apache /etc/rcS.d/k16apache

    # ls -1 /etc/rc?.d/*apache
    /etc/rc0.d/k16apache
    /etc/rc1.d/k16apache
    /etc/rc2.d/k16apache
    /etc/rc3.d/s50apache
    /etc/rcS.d/k16apache
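
As noted in step 1, pkginfo(1) can confirm whether the Apache packages are already present before you run pkgadd; packages that are not installed are reported as errors. This check is a sketch only.

    # pkginfo SUNWapchr SUNWapchu SUNWapchd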

Man Pages

New man pages are included for each data service supplied with Sun Cluster 3.0 software. The data service man pages include SUNW.apache(5), SUNW.dns(5), SUNW.iws(5), SUNW.nfs(5), SUNW.nsldap(5), SUNW.oracle_listener(5), SUNW.oracle_server(5), SUNW.HAStorage(5), and scalable_service(5). These man pages describe the standard and extension properties that these data services use.