Sun Cluster 3.0 12/01 Release Notes Supplement

Chapter 1 Sun Cluster 3.0 12/01 Release Notes Supplement

This document supplements the standard user documentation, including the Sun Cluster 3.0 12/01 Release Notes shipped with the Sun™ Cluster 3.0 product. These "online release notes" provide the most current information on the Sun Cluster 3.0 product, including new features, restrictions and requirements, known problems, and known documentation problems.

Revision Record

The following table lists the information contained in this document and provides the revision date for this information.

Table 1-1 Sun Cluster 3.0 12/01 Release Notes Supplement

Revision Date: April 2002

"Bug ID 4511699", Sun Cluster HA for NFS will not be able to fail over correctly in the presence of public network failures.

Documentation references added to support dynamic reconfiguration operations on Sun Fire™ 15K systems. See "Dynamic Reconfiguration Operations For Sun Cluster Nodes".

Correction to the Solaris 8 upgrade instructions in the Sun Cluster update-release upgrade procedure. See "Upgrading to a Sun Cluster 3.0 Software Update Release".

Documentation bugs in Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide. See "Data Services Installation and Configuration Guide".

Documentation bugs in Sun Cluster 3.0 12/01 Release Notes. See "Release Notes".

Updated procedures to support Sun Cluster HA for SAP on SAP 6.10. See Appendix B, Installing and Configuring Sun Cluster HA for SAP. The updated procedures are: "How to Install SAP for Scalable Application Server" and "How to Enable Failover SAP Instances to Run in the Cluster".

Updated procedures to support Sun StorEdge Traffic Manager software and campus cluster configurations with Sun StorEdge 9910 and Sun StorEdge 9960 systems. See Appendix D, Installing and Maintaining a Sun StorEdge 9910 or StorEdge 9960 Array.

Updated information to support Sun StorEdge 9910 and Sun StorEdge 9960 systems in a campus cluster with Sun Cluster software. See Appendix E, Campus Clustering with Sun Cluster 3.0 Software - Concepts.

Information and procedures for using the new scalable cluster topology. See Appendix G, Scalable Cluster Topology.

Revision Date: March 2002

Documentation references added and information rewritten to support dynamic reconfiguration operations on Sun Fire™ 6800, 4810, 4800, and 3800 systems (in addition to Sun Enterprise 10000 systems). See "Dynamic Reconfiguration Operations For Sun Cluster Nodes".

Correction to maximum number of disksets and metadevice names. See "Maximum Number of Metadevice Names Per Diskset" and "Maximum Number of Disksets Per Cluster".

Notice that the Sun Cluster 3.0 12/01 multipathing restriction has been removed for EMC storage devices using EMC PowerPath software. See "Release Notes".

New procedures to support Sun StorEdge™ 6900 Series systems. See Appendix C, Installing and Maintaining a Sun StorEdge 3900 or 6900 Series System.

Addition to procedures for the Sun StorEdge A1000 array to note that the procedures are valid for the Sun Netra st A1000 array also. See Appendix F, Installing and Maintaining Sun StorEdge A1000 and Netra st A1000 Arrays.

Revision Date: February 2002

Upgrade restriction for OPS. See "Upgrade Restrictions and Requirements".

Dynamic reconfiguration procedure to supplement Hardware Guide procedures. See "Dynamic Reconfiguration Operations For Sun Cluster Nodes".

Notice that the restriction against using Sun StorEdge T3/T3+ partner-group arrays as quorum devices has been removed. See "Sun StorEdge T3/T3+ (Partner-Group Configuration)".

Corrections to the procedures that support Sun Cluster HA for SAP with Application Server as a scalable data service. See Appendix B, Installing and Configuring Sun Cluster HA for SAP.

Information to support campus clustering on Sun Cluster software, including configuration requirements and guidelines. See Appendix E, Campus Clustering with Sun Cluster 3.0 Software - Concepts.

Procedures to support Sun StorEdge A1000 systems. See Appendix F, Installing and Maintaining Sun StorEdge A1000 and Netra st A1000 Arrays.

Revision Date: January 2002

"Bug ID 4362925", problem during Sun Cluster shutdown when scshutdown tries to unmount a cluster file system that the process nsrmmd is still referencing.

"Bug ID 4626010", description of known problem and clustering fix regarding the default hardware configuration of Sun StorEdgeTM 3900 Series systems.

Known documentation problems with the Sun Cluster 3.0 12/01 Hardware Guide. See "Hardware Guide".

Procedures to support Sun Cluster HA for SAP with Application Server as a scalable data service. See Appendix B, Installing and Configuring Sun Cluster HA for SAP.

Procedures to support Sun StorEdge 3910 and Sun StorEdge 3960 systems. See Appendix C, Installing and Maintaining a Sun StorEdge 3900 or 6900 Series System.

Procedures to support Sun StorEdge 9910 and Sun StorEdge 9960 systems. See Appendix D, Installing and Maintaining a Sun StorEdge 9910 or StorEdge 9960 Array.

Revision Date: December 2001

New Patch 110651-07 replaces 110651-04 to provide Oracle 9i support on Sun Cluster software. This patch will be available with Sun Cluster 3.0 5/02. See "Data Service Restrictions and Requirements".

"Bug ID 4525403", required Sun Cluster setting in the /etc/system file overwritten when VxFS is installed after Sun Cluster software.

A generic data service (GDS) overview and instructions on creating a service that uses GDS are provided in Appendix A, Generic Data Services.

New Features

In addition to features documented in Sun Cluster 3.0 12/01 Release Notes, this release now includes support for the following features.

Generic Data Service

Appendix A, Generic Data Services provides information on the generic data service (GDS) and shows you how to create a service that uses GDS, through either the command-line interface or the SunPlex Agent Builder.
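
For orientation only, the following is a minimal command-line sketch of creating a GDS-based failover service. The resource name, the resource group name, and the application start command are hypothetical, and required properties (such as the port list) are omitted; Appendix A documents the actual options and the SunPlex Agent Builder alternative.


# Names below (app-rg, app-rs, /opt/app/bin/start-app) are hypothetical.
scrgadm -a -t SUNW.gds                        # register the GDS resource type
scrgadm -a -g app-rg                          # create a failover resource group
scrgadm -a -j app-rs -g app-rg -t SUNW.gds \
    -x Start_command="/opt/app/bin/start-app" # GDS resource that starts the application
scswitch -Z -g app-rg                         # bring the resource group online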

Scalable Cluster Topology

Appendix G, Scalable Cluster Topology provides information and procedures for using the scalable cluster topology. With this topology and Oracle Parallel Server/Real Application Clusters (OPS/RAC) software installed on all nodes, up to four cluster nodes can now be connected to one storage array.

Restrictions and Requirements

The following restrictions and requirements have been added or updated since the Sun Cluster 3.0 7/01 release.

Data Service Restrictions and Requirements

Upgrade Restrictions and Requirements

Known Problems

In addition to known problems documented in Sun Cluster 3.0 12/01 Release Notes, the following known problems affect the operation of the Sun Cluster 3.0 12/01 release.

Bug ID 4362925

Problem Summary: The scshutdown(1M) command fails to shut down the cluster because it cannot unmount a cluster file system that is still in use:


nodeA# scshutdown -g0 -y
scshutdown: Unmount of /dev/md/sc/dsk/d30 failed: Device busy.
scshutdown: Could not unmount all PxFS filesystems.

The Networker packages were bundled and installed during the Oracle installation. Therefore, the nsrmmd daemon is running and accessing the /global/oracle directory, which prevents the unmounting of all cluster file systems.


nodeA# umount /global/oracle
umount: global/oracle busy
nodeA# fuser -c /global/oracle
/global/oracle: 335co 317co 302co 273co 272co
nodeA# ps -ef|grep 335
 root 335 273 0 17:17:41 ?       0:00 /usr/sbin/nsrmmd -n 1
 root 448 397 0 17:19:37 console 0:00 grep 335

This problem occurs during Sun Cluster shutdown when the shutdown tries to unmount a cluster file system that the process nsrmmd is still referencing.

Workaround: Run the fuser(1M) command on each node to establish a list of all processes still using the cluster file systems that cannot be unmounted. Check that no Resource Group Manager resources have been restarted since the failed scshutdown(1M) command was first run. Kill all these processes with the kill -9 command. This kill list should not include any processes under the control of the Resource Group Manager. After all such processes have terminated, rerun the scshutdown command, and the shutdown should run to successful completion.
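
For example, after confirming that none of the reported processes is under Resource Group Manager control, the recovery on one node might look like the following transcript (the PIDs shown are illustrative).


nodeA# fuser -c /global/oracle
/global/oracle: 335co 317co 302co
nodeA# kill -9 335 317 302
nodeA# scshutdown -g0 -y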

Bug ID 4525403

Problem Summary: VxFS installation adds the following line to the /etc/system file:


* vxfs_START -- do not remove the following lines:
*
* VxFS requires a stack size greater than the default 8K.
* The following values allow the kernel stack size
* for nfs threads to be increased to 16K.
*
set rpcmod:svc_default_stksize=0x4000
* vxfs_END

If you install VxFS after Sun Cluster software has been installed, the VxFS installation overrides the following required Sun Cluster 3.0 12/01 setting in the /etc/system file:


* Start of lines added by SUNWscr
set rpcmod:svc_default_stksize=0x6000

Workaround: Change the svc_default_stksize value in the /etc/system file from 0x4000 to 0x6000, so that the setting reads as follows.


set rpcmod:svc_default_stksize=0x6000
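
After making the change, a quick check such as the following confirms that no 0x4000 setting remains (the output shown assumes that only the corrected line matches). Changes to the /etc/system file take effect at the next boot.


# grep svc_default_stksize /etc/system
set rpcmod:svc_default_stksize=0x6000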

Bug ID 4511699

Problem Summary: Sun Cluster HA for NFS requires that the hosts lookup entry in the /etc/nsswitch.conf file specify files followed by [SUCCESS=return], and that all cluster private IP addresses be present in the /etc/inet/hosts file on all cluster nodes.

Otherwise, Sun Cluster HA for NFS will not be able to fail over correctly in the presence of public network failures.

Workaround: Perform the following steps on each node of the cluster.

  1. Modify the hosts entry in the /etc/nsswitch.conf file so that, upon success in resolving a name locally, it returns success immediately and does not contact NIS or DNS.


    hosts: cluster files [SUCCESS=return] nis dns

  2. Add entries for all cluster private IP addresses to the /etc/inet/hosts file.

Only the IP addresses that are plumbed on the physical private interfaces need to be added to the /etc/inet/hosts file. The logical IP addresses are already resolvable through the cluster nsswitch library.

To list the physical private IP addresses, run the following command on any cluster node.


% grep ip_address /etc/cluster/ccr/infrastructure

Each IP address in this list must be assigned a unique hostname that does not conflict with any other hostname in the domain.
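
As an illustration only, the resulting /etc/inet/hosts additions on a two-node cluster might look like the following. Use the addresses reported by the command above and hostnames that are unique in your domain.


# Cluster private network, physical interfaces (example addresses and names)
172.16.0.129    node1-priv-physical1
172.16.1.1      node1-priv-physical2
172.16.0.130    node2-priv-physical1
172.16.1.2      node2-priv-physical2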


Note -

Sun Cluster software already requires that any HA IP addresses (LogicalHostname/SharedAddress) be present in /etc/inet/hosts on all cluster nodes and that files is listed before nis or dns. The additional requirements mandated by this bug are to also list [SUCCESS=return] after files and to list all cluster private IP addresses in the /etc/inet/hosts file.


Bug ID 4626010

Problem Summary: In a Sun StorEdge 3900 Series system, the preconfigured, default hard zones that are used on the Sun StorEdge Network FC Switch-8 and Switch-16 switches are incompatible for use with host-based mirroring in a cluster. The hard zones prevent the cluster nodes from seeing all attached storage because there are only two ports per zone.

Workaround: Remove the default hard zones from the Sun StorEdge Network FC Switch-8 and Switch-16 switches (this is documented as a necessary step in the procedures in Appendix C, Installing and Maintaining a Sun StorEdge 3900 or 6900 Series System). See the SANbox-8/16 Switch Management User's Manual for instructions on removing the preconfigured hard zones on all Sun StorEdge Network FC Switch-8 and Switch-16 switches.

Known Documentation Problems

This section discusses documentation errors you might encounter and steps to correct these problems. This information is in addition to known documentation problems documented in the Sun Cluster 3.0 12/01 Release Notes.

Hardware Guide

The following subsections describe omissions or new information that will be added to the next publishing of the Hardware Guide.

Dynamic Reconfiguration Operations For Sun Cluster Nodes

The Sun Cluster 3.0 12/01 release supports Solaris 8 dynamic reconfiguration (DR) operations on qualified servers. The current Sun Cluster 3.0 12/01 Hardware Guide does not specifically consider scenarios in which the cluster nodes have been enabled with the DR feature.

In the Sun Cluster 3.0 12/01 Hardware Guide, some procedures require that the user add or remove host adapters or public network adapters in a cluster node. Contact your service provider for a list of storage arrays that are qualified for use with DR-enabled servers.


Note -

Review the documentation for the Solaris 8 DR feature on your hardware platform before using the DR feature with Sun Cluster software. All of the requirements, procedures, and restrictions that are documented for the Solaris 8 DR feature also apply to Sun Cluster 3.0 12/01 DR support (except for the operating environment quiescence operation).


Documentation for DR on the currently qualified server platforms is listed here.

DR Operations in a Cluster With DR-Enabled Servers

Some procedures in the current Sun Cluster 3.0 12/01 Hardware Guide instruct the user to shut down and power off a cluster node before adding, removing, or replacing a host adapter or a public network adapter (PNA).

However, if the node is a server that is enabled with the DR feature, the user does not have to power off the node before adding, removing, or replacing the host adapter or PNA. Instead, do the following:

  1. Follow the procedure steps in the Hardware Guide, including any steps for disabling and removing the host adapter or PNA from the active cluster interconnect.

    See the Sun Cluster 3.0 12/01 System Administration Guide for instructions on removing cluster host adapters (transport adapters) or PNAs from the cluster configuration.

  2. Skip any step that instructs you to power off the node, where the purpose of the power-off is to add, remove, or replace a host adapter or PNA.

  3. Perform the DR operation (add, remove, or replace) on the host adapter or PNA. An illustrative cfgadm(1M) sketch follows this procedure.

  4. Continue with the next step of the procedure in the Hardware Guide.
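
The following is a minimal sketch of what the DR operation in Step 3 might involve for a PCI host adapter, using cfgadm(1M). The attachment-point name is hypothetical and platform-specific; consult your platform DR documentation for the supported procedure.


cfgadm -al                               # list attachment points and their states
cfgadm -c unconfigure pci1:hpc1_slot4    # unconfigure the adapter (hypothetical ap_id)
cfgadm -c disconnect pci1:hpc1_slot4     # power off the slot before removal
# ...physically replace the host adapter or PNA...
cfgadm -c configure pci1:hpc1_slot4      # power on and configure the replacement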

For conceptual information about Sun Cluster 3.0 12/01 support of the DR feature, see the Sun Cluster 3.0 12/01 Concepts document.

Sun Netra st D1000

The Netra™ st D1000 has been qualified for use with Sun Cluster 3.0, but maintenance procedures do not yet exist in the Sun Cluster 3.0 12/01 Hardware Guide specifically for the Netra st D1000. However, you can use the existing Sun Cluster 3.0 12/01 Hardware Guide procedures for the StorEdge D1000 to maintain a Netra st D1000 in a cluster environment. Substitute the following document references in place of the StorEdge D1000 document references if you are maintaining a Netra st D1000.

Sun StorEdge T3/T3+ (Partner-Group Configuration)

Quorum Device Restriction Removed

When the Sun Cluster 3.0 12/01 Hardware Guide was published, it noted a restriction against using Sun StorEdge T3/T3+ arrays as quorum devices when the arrays are used in a partner-group configuration.

That restriction has since been removed. That is, you can use Sun StorEdge T3/T3+ arrays as quorum devices, whether the arrays are used in a partner-group configuration or a single-controller configuration.
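
For example, a LUN in a partner group can be configured as a quorum device with the scsetup(1M) utility or with scconf(1M); the DID device name below is illustrative.


scconf -a -q globaldev=d12    # add DID device d12 (illustrative) as a quorum device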

Sun StorEdge T3/T3+ (Single-Controller Configuration)

The Sun Cluster 3.0 12/01 Hardware Guide chapter on "Installing and Maintaining a Sun StorEdge T3 or T3+ Array Single-Controller Configuration" incorrectly indicates that the procedure for upgrading a Sun StorEdge T3 array controller to a Sun StorEdge T3+ array controller does not require any cluster-specific steps. The correct procedure should be as follows.

How To Upgrade a StorEdge T3 Controller to a StorEdge T3+ Controller


Caution -

Perform this procedure on one array at a time. This procedure requires that you take the array in which you are upgrading the controller offline. If the arrays are submirrors of each other and you take more than one array offline, your cluster loses access to data.


  1. On one node attached to the StorEdge T3 array in which you are upgrading the controller, detach that array's submirrors.

    For more information, see your Solstice DiskSuite™ or VERITAS Volume Manager documentation. An illustrative Solstice DiskSuite sketch follows this procedure.

  2. Upgrade the StorEdge T3 array controller to a StorEdge T3+ array controller.

    See the Sun StorEdge T3 Array Controller Upgrade Manual for instructions.

  3. Reattach the submirrors to resynchronize them.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
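
As a sketch only, the detach and reattach steps with Solstice DiskSuite might look like the following. The mirror and submirror names are hypothetical; VERITAS Volume Manager users would use the equivalent plex operations described in their documentation.


metadetach d30 d32    # Step 1: detach submirror d32 (on the array being upgraded) from mirror d30
# ...Step 2: upgrade the StorEdge T3 controller to a StorEdge T3+ controller...
metattach d30 d32     # Step 3: reattach the submirror; resynchronization starts automatically
metastat d30          # optionally monitor the resynchronization progress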

Software Installation Guide

The following subsections describe omissions or new information that will be added to the next publishing of the Software Installation Guide.

Maximum Number of Metadevice Names Per Diskset

The Sun Cluster 3.0 12/01 Software Installation Guide incorrectly states that the maximum number of metadevice names per diskset is 8192. The correct maximum is 1024 metadevice names per diskset.

Maximum Number of Disksets Per Cluster

The Sun Cluster 3.0 12/01 Software Installation Guide statement, "The cluster can have a maximum of 32 disksets" is misleading. The maximum number of disksets per cluster is 31, not including the diskset for private disk management.
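
Both limits are governed on each node by the nmd and md_nsets fields in the /kernel/drv/md.conf file. The excerpt below is illustrative only; keep the file identical on all cluster nodes, follow the Solstice DiskSuite and Sun Cluster installation documentation before changing it, and perform a reconfiguration reboot for changes to take effect.


# /kernel/drv/md.conf (illustrative excerpt)
# nmd      - metadevice names per diskset (do not exceed 1024)
# md_nsets - disksets (do not exceed 32: 31 disksets plus the set
#            reserved for private disk management)
name="md" parent="pseudo" nmd=1024 md_nsets=32;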

Upgrading to a Sun Cluster 3.0 Software Update Release

The following is a correction to Step 8 of "How to Upgrade to a Sun Cluster 3.0 Software Update Release" in the Sun Cluster 3.0 12/01 Software Installation Guide.

    (Optional) Upgrade Solaris 8 software.

    1. Temporarily comment out all global device entries in the /etc/vfstab file (an illustrative commented-out entry is shown after this procedure).

      Do this to prevent the Solaris upgrade from attempting to mount the global devices.

    2. Shut down the node to upgrade.


      # shutdown -y -g0
      ok

    3. Follow instructions in the installation guide for the Solaris 8 Maintenance Update version you want to upgrade to.


      Note -

      Do not reboot the node when prompted to reboot.


    4. Uncomment all global device entries that you commented out in Step 1 in the /a/etc/vfstab file.

    5. Install any Solaris software patches and hardware-related patches, and download any needed firmware contained in the hardware patches.

      If any patches require rebooting, reboot the node in non-cluster mode as described in Step 6.

    6. Reboot the node in non-cluster mode.

      Include the double dashes (--) and two quotation marks (") in the command.


      # reboot -- "-x"
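
      For reference, a commented-out global device entry in the /etc/vfstab file (Step 1) might look like the following single line; the device names and mount point are illustrative.


      #/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle ufs 2 yes global,logging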
      

Data Services Installation and Configuration Guide

The "How to Configure an iPlanet Web Server" procedure in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide is missing the following step, which is not dependent on any other step in the procedure.

Create a file that contains the secure key password you need to start this instance, and place this file under the server root directory. Name this file keypass.


Note -

Because this file contains the key database password, protect the file with the appropriate permissions.
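
A minimal sketch of this step follows. The server root path is an assumption for illustration; substitute your actual iPlanet server root and key database password.


cd /global/iws/https-instance            # hypothetical server root directory
echo "key-database-password" > keypass   # create the keypass file containing the password
chmod 400 keypass                        # restrict permissions on the file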


Release Notes