Sun Cluster 3.2 Release Notes for Solaris OS

This document provides the following information for Sun™ Cluster 3.2 software.

What's New in the Sun Cluster 3.2 Software

This section provides information related to new features, functionality, and supported products in the Sun Cluster 3.2 software. This section also provides information on any restrictions that are introduced in this release.

New Features and Functionality

This section describes each of the following new features provided in the Sun Cluster 3.2 software.

New Sun Cluster Object-Oriented Command Set

The new Sun Cluster command-line interface includes a separate command for each cluster object type and uses consistent subcommand names and option letters. The new Sun Cluster command set also supports short and long command names. The command output provides improved help and error messages as well as more readable status and configuration reports. In addition, some commands include export and import options with the use of portable XML-based configuration files. These options allow you to replicate a portion of, or the entire, cluster configuration, which speeds up partial or full configuration cloning. See the Intro(1CL) man page for more information.
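
For example, the following commands show a long command form, its short equivalent, and an export of the cluster configuration to an XML file. This is a hedged sketch rather than an excerpt from the command documentation, and the output file name is arbitrary; see the Intro(1CL) and cluster(1CL) man pages for the exact syntax.

# clresourcegroup status
# clrg status
# cluster export -o /var/tmp/cluster-config.xml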

Oracle RAC 10g Improved Integration and Administration

Installation and configuration of the Sun Cluster Oracle RAC packages are now integrated into the Sun Cluster procedures. New Oracle RAC-specific resource types and properties can be used for finer-grained control.

Oracle RAC extended manageability, which is provided by the ScalDeviceGroup and ScalMountPoint resource types, leads to easier setup of Oracle RAC within Sun Cluster configurations, as well as improved diagnosability and availability. See Sun Cluster Data Service for Oracle RAC Guide for Solaris OS for more information.

Data Service Configuration Wizards

Sun Cluster provides new data service configuration wizards that simplify configuration of popular applications through automatic discovery of parameter choices and immediate validation. The Sun Cluster data service configuration wizards are provided in the following two formats:

The following data services are supported in the Sun Cluster Manager GUI format:

The clsetup command-line interface format supports all applications that are supported by Sun Cluster Manager.

See the Sun Cluster documentation for each of the supported data services for more information.

Flexible IP Address Scheme

Sun Cluster software now allows a reduced range of IP addresses for its private interconnect. In addition, you can now customize the IP base address and its range during or after installation.

These changes to the IP address scheme facilitate integration of Sun Cluster environments in existing networks with limited or regulated address spaces. See How to Change the Private Network Address or Address Range of an Existing Cluster in Sun Cluster System Administration Guide for Solaris OS for more information.
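
For example, the following command, which is a hedged sketch, sets a custom private-network base address and netmask. The private_netaddr and private_netmask property names are recalled from the cluster(1CL) man page; verify them there, and follow the referenced procedure because the change typically must be made while the nodes are in noncluster mode.

# cluster set-netprops -p private_netaddr=172.16.0.0 \
-p private_netmask=255.255.248.0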

Sun Cluster Support for Service Management Facility Services

Sun Cluster software now integrates tightly with the Solaris 10 OS Service Management Facility (SMF) and enables the encapsulation of SMF-controlled applications in the Sun Cluster resource management model. Local service-level life-cycle management continues to be operated by SMF, while cluster-wide, resource-level failure handling (node or storage failures) is carried out by Sun Cluster software.

Moving applications from a single-node Solaris 10 OS environment to a multi-node Sun Cluster environment enables increased availability while requiring minimal effort. See Enabling Solaris SMF Services to Run With Sun Cluster in Sun Cluster Data Services Planning and Administration Guide for Solaris OS for more information.
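
As a hedged illustration, an SMF service can be placed under Sun Cluster control through an SMF proxy resource type. The SUNW.Proxy_SMF_failover resource type and the Proxied_service_instances property are recalled from the data-services documentation rather than taken from this document, and the resource group, resource, and file names below are hypothetical; in this sketch, the file that the property points to is assumed to list the SMF service instances to encapsulate and the paths to their manifests.

# clresourcetype register SUNW.Proxy_SMF_failover
# clresource create -g app-rg -t SUNW.Proxy_SMF_failover \
-p Proxied_service_instances=/var/cluster/app-smf-services.txt app-smf-rs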

Extended Flexibility for Fencing Protocol

This new functionality allows the customization of the default fencing protocol. Choices include SCSI-3, SCSI-2, or per-device discovery.

This flexibility enables the default use of SCSI-3, a more recent protocol, for better support of multipathing, easier integration with non-Sun storage, and shorter recovery times on newer storage, while still supporting the Sun Cluster 3.0 or 3.1 behavior and SCSI-2 for older devices. See Administering the SCSI Protocol Settings for Storage Devices in Sun Cluster System Administration Guide for Solaris OS for more information.
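
For example, the default fencing protocol might be set cluster-wide or per device as follows. The property names and values are recalled from the referenced administration procedure and should be verified against the cluster(1CL) and cldevice(1CL) man pages; d5 is a hypothetical device name.

# cluster set -p global_fencing=prefer3
# cldevice set -p default_fencing=scsi3 d5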

Sun Cluster Quorum Server

A new quorum device option is now available in the Sun Cluster software. Instead of using a shared disk and SCSI reservation protocols, it is now possible to use a Solaris server outside of the cluster to run a quorum-server module, which supports an atomic reservation protocol over TCP/IP. This support enables faster failover time and also lowers deployment costs: it removes the need for a shared quorum disk in any scenario where quorum is required (such as a two-node cluster) or desired. See Sun Cluster Quorum Server User’s Guide for more information.
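
For example, after the quorum-server software is installed and started on the external Solaris host, a quorum-server quorum device might be added to the cluster as follows. The device type and property names are recalled from the clquorum(1CL) man page; the host address, port, and device name shown are placeholders.

# clquorum add -t quorum_server -p qshost=192.168.10.50 -p port=9000 qs1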

Disk-Path Failure Handling

Sun Cluster software can now be configured to automatically reboot a node if all its paths to shared disks have failed. Faster reaction in case of severe disk-path failure enables improved availability. See Administering Disk-Path Monitoring in Sun Cluster System Administration Guide for Solaris OS for more information.
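
As a hedged sketch, the following command enables this behavior on all nodes; the reboot_on_path_failure node property is recalled from the clnode(1CL) man page and should be verified there.

# clnode set -p reboot_on_path_failure=enabled +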

HAStoragePlus Availability Improvements

HAStoragePlus mount points are now created automatically in case of mount failure. This feature eliminates cases in which a resource fails to fail over because a mount point is missing, thus improving availability of the environment.

Solaris Zones Expanded Support

Sun Cluster software now supports the following data services in Solaris non–global zones.

This support allows the combination of the benefits of application containment that is offered by Solaris zones and the increased availability that is provided by Sun Cluster software. See the Sun Cluster documentation for the appropriate data services for more information.

ZFS

ZFS is supported as a highly available local file system in the Sun Cluster 3.2 release. ZFS with Sun Cluster software offers a best-in-class file system solution that combines high availability, data integrity, performance, and scalability, covering the needs of the most demanding environments.

Continuous enhancements are being added to ZFS for optimizing performance with all workloads, especially database transactions. Ensure that you have the latest ZFS patches installed and that your configuration is optimized for your specific type of workload.
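
For example, assuming the SUNW.HAStoragePlus resource type is already registered, a ZFS storage pool can be managed as a failover resource through the Zpools extension property. The property name is recalled from the SUNW.HAStoragePlus(5) man page, and the pool, resource-group, and resource names are hypothetical.

# clresource create -g ha-rg -t SUNW.HAStoragePlus -p Zpools=hapool hasp-rs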

HDS TrueCopy Campus Cluster

Sun Cluster-based campus clusters now support HDS TrueCopy controller-based replication, allowing for automated management of TrueCopy configurations. Sun Cluster software automatically and transparently handles the switch to the secondary campus site in case of failover, making this procedure less error-prone and improving the overall availability of the solution. This new remote data-replication infrastructure allows Sun Cluster software to support new configurations for customers who have standardized on a specific replication infrastructure like TrueCopy, and for sites where host-based replication is not a viable solution because of distance or application incompatibility.

This new combination brings improved availability and less complexity while lowering cost. Sun Cluster software can make use of existing TrueCopy customer replication infrastructure, limiting the need for additional replication solutions.

Specifications-Based Campus Cluster

Specifications-based campus clusters now support a wider range of distance configurations. These clusters support such configurations by requiring compliance with latency and error-rate limits, rather than with a rigid set of distances and components.

See Chapter 7, Campus Clustering With Sun Cluster Software, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS for more information.

Multi-Terabyte Disk and Extensible Firmware Interface (EFI) Label Support

Sun Cluster configurations now support disks with a capacity of more than 1 TB, which use the new Extensible Firmware Interface (EFI) disk format. This format is required for multi-terabyte disks but can also be used with smaller-capacity disks. This new feature extends the supported Sun Cluster configurations to environments with high-end storage requirements.

Extended Support for VERITAS Software Components

VERITAS Volume Manager and VERITAS File System, which are part of VERITAS Storage Foundation 5.0, are now supported on SPARC platforms. VERITAS Volume Manager 4.1 is also supported with the Solaris 10 OS on x86/x64 platforms.

VERITAS Volume Replicator (VVR) 5.0 and VERITAS Fast Mirror Resynchronization (FMR) 4.1 and 5.0, part of VERITAS FlashSnap, can now be used in Sun Cluster environments on SPARC platforms.

Quota Support

Quota management can now be used with HAStoragePlus on local UFS file systems for better control of resource consumption.
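
As a brief sketch using the standard Solaris UFS quota commands (the mount point and user name are hypothetical, and the file system's /etc/vfstab entry is assumed to include the quota mount option):

# touch /local/data/quotas
# edquota dbuser
# quotacheck /local/data
# quotaon /local/data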

Oracle DataGuard Support

Sun Cluster software now offers improved usability for Oracle deployments that include DataGuard data replication software. Customers can now specify an HA-Oracle database to be part of an Oracle DataGuard configuration as either a primary or a standby site. This standby database can be a logical or a physical standby. For more information, see Sun Cluster Data Service for Oracle Guide for Solaris OS.
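
For illustration only, a physical standby database might be indicated through HA-Oracle extension properties similar to the following. The Dataguard_role and Standby_mode property names are recalled from the HA for Oracle documentation rather than taken from this document, and ora-rs is a hypothetical resource name; see the referenced guide for the supported procedure.

# clresource set -p Dataguard_role=STANDBY -p Standby_mode=PHYSICAL ora-rs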


Note –

When the HA-Oracle agent is managing a standby database, the agent controls only the starting, stopping, and monitoring of that database. The agent does not re-initiate recovery of the standby database if the database fails over to another node.


Dual-Partition Upgrade

With this new software-swap feature, the upgrade process is greatly simplified. Any component of the software stack can be upgraded in one step along with Sun Cluster software: the Solaris operating system, Sun Cluster software, file systems, volume managers, applications, and data services. This automation lowers the risk of human error during cluster upgrade and minimizes the service outage that occurs for a standard cluster upgrade.

Live Upgrade

The Live Upgrade method can now be used with Sun Cluster software. This method reduces a node's downtime during upgrade, as well as unnecessary reboots, thereby shortening the maintenance window during which the service is at risk.

At the time of publication, Live Upgrade can be used only if your Sun Cluster installation uses Solaris Volume Manager for managing the storage or disk groups. Live Upgrade does not currently support VxVM. See Upgrade for more information.

Any Live Upgrade from Solaris 8 to Solaris 9 requires SVM patch 116669-18 to be applied before rebooting from the alternate root.

Optional Sun Cluster Manager Installation

Installation of Sun Cluster Manager, the Sun Cluster management GUI, is now optional. This change removes web-based access to the cluster, to comply with potential security rules. See How to Install Sun Cluster Framework and Data-Service Software Packages in Sun Cluster Software Installation Guide for Solaris OS for information about deselecting Sun Cluster Manager at installation time.

SNMP Event MIB

Sun Cluster software includes a new Sun Cluster SNMP event mechanism as well as a new SNMP MIB. These new features allow third-party SNMP management applications to directly register with Sun Cluster software and receive timely notifications of cluster events. Fine-grained event notification and direct integration with third-party enterprise-management framework through standard SNMP support allow proactive monitoring and increase availability. See Creating, Setting Up, and Managing the Sun Cluster SNMP Event MIB in Sun Cluster System Administration Guide for Solaris OS for more information.
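
For example, the event MIB might be enabled on a node and a management station registered as follows. The clsnmpmib and clsnmphost commands are part of this release, but the operands shown (MIB name, node name, community string, and management-station host name) are a hedged sketch; see the referenced procedure for the supported syntax.

# clsnmpmib enable -n phys-schost-1 event
# clsnmphost add -c public -n phys-schost-1 mgmt-station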

Command Logging

Command information can now be logged within Sun Cluster software. This ability facilitates diagnostics of cluster failures and provides history of the administration actions for archiving or replication. For more information, see How to View the Contents of Sun Cluster Command Logs in Sun Cluster System Administration Guide for Solaris OS.
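
The command log is a plain text file that can be viewed directly on a node; the path shown is the one documented in the referenced procedure, to the best of recollection.

# more /var/cluster/logs/commandlog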

Workload System-Resource Monitoring

Sun Cluster software offers new tools for measuring and visualizing system-resource utilization, including fine-grained measurement of consumption per node, resource, and resource group. These new tools provide historical data as well as threshold management and CPU reservation and control. This improved control allows for better management of service level and capacity.

Automatic Creation of Multiple-Adapter IPMP Groups by scinstall

The interactive scinstall utility now configures either a single-adapter or a multiple-adapter IPMP group for each set of public-network adapters, depending on the adapters that are available in each subnet. This functionality replaces the utility's previous behavior, which created one single-adapter IPMP group for each available adapter regardless of its subnet. For more information about this and other changes to IPMP group policies, see Public Networks in Sun Cluster Software Installation Guide for Solaris OS.
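
For reference, a multiple-adapter IPMP group on the Solaris OS is expressed through /etc/hostname.adapter files similar to the following hedged example; scinstall generates equivalent entries automatically, and the adapter name, host name, and group name shown are placeholders.

# cat /etc/hostname.qfe0
phys-schost-1 netmask + broadcast + group sc_ipmp0 up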

Secure Shell Support for Cluster Control Panel Software

Support for Secure Shell is added to the Cluster Control Panel (CCP) by the following new features:

For more information about preparing for and using the Secure Shell features of the CCP, see How to Install Cluster Control Panel Software on an Administrative Console in Sun Cluster Software Installation Guide for Solaris OS. For updates to the related man pages, see ccp(1M), cconsole(1M), crlogin(1M), cssh(1M), ctelnet(1M), and serialports(4).

New Minimum Requirement of One Cluster Interconnect

The minimum required number of cluster interconnects that a cluster must have is changed to one cluster interconnect between the nodes of the cluster. The interactive scinstall utility is revised to permit configuration of only one interconnect when you use the utility in Custom mode. To use the utility's Typical mode, you must still configure two interconnects. For more information, see Cluster Interconnect in Sun Cluster Software Installation Guide for Solaris OS.

IP Filter Support for Failover Services

Sun Cluster 3.2 software supports the Solaris IP Filter for failover services. Solaris IP Filter provides stateful packet filtering and network address translation (NAT). Solaris IP Filter also includes the ability to create and manage address pools. For more information on the Solaris IP Filter, see Part IV, IP Security, in System Administration Guide: IP Services. For information on how to set up IP filtering with Sun Cluster software, see Using Solaris IP Filtering with Sun Cluster.
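
As a minimal sketch using standard Solaris IP Filter administration (the filter rules themselves are site-specific and omitted here), rules are placed in /etc/ipf/ipf.conf, the service is enabled, and the loaded rules are then verified; review the Sun Cluster guidelines referenced above before filtering any cluster traffic.

# vi /etc/ipf/ipf.conf
# svcadm enable svc:/network/ipfilter:default
# ipfstat -io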

Restrictions

NetApp NAS Fencing Restriction

The fencing feature requires that each cluster node always use the same source IP address when accessing the NetApp NAS unit. Multi-homed systems use multiple source IP addresses. The administrator for a multi-homed system must ensure that one source IP address is always used when accessing the NetApp NAS unit. This can be achieved by setting up an appropriate network configuration.

Compatibility Issues

This section contains information about Sun Cluster compatibility issues, such as features nearing end of life.

Features Nearing End of Life

The following features are nearing end of life in Sun Cluster 3.2 software.

Sun Cluster 3.0

As of the Sun Cluster 3.2 release, Sun Cluster 3.0 is being discontinued. The Sun Cluster 3.0 part number will no longer be available.

Solaris 8

As of the Sun Cluster 3.2 release, Sun Cluster software no longer supports Solaris 8.

Rolling Upgrade

The rolling upgrade functionality might not be available for upgrading Sun Cluster to the next minor release. In that case, other procedures will be provided that are designed to limit cluster outage during those software upgrades.

sccheck

The sccheck command might not be included in a future release. However, the corresponding functionality will be provided by the cluster check command.

Solaris 10 11/06 Operating System

The following known issues might affect the operation of the Sun Cluster 3.2 release with the Solaris 10 11/06 operating system. Contact your Sun representative to obtain the necessary Solaris patches to fix these issues. For more information, refer to Infodoc 87995.


Caution –

You must upgrade your operating system to Solaris 10 11/06 before applying the Solaris patches.


6252216

metaset command fails after the rpcbind server is restarted.

6331216

disksets: devid information not written to a newly created diskset.

6345158

svm exited with error 1 in step cmmstep5, nodes panic.

6367777

fsck: svc:/system/filesystem/usr fails to start from milestone none.

6401357

Solaris Volume Manager (SVM) does not show metaset after cluster upgrade in x86.

6402556

commd timeout should be a percentage of metaclust timeout value.

6474029

metaset -s diskset -t should take ownership of a cluster node after reboot.

6496941

SVM still removes the diskset if the Sun Cluster nodeid file is missing.

6367948

New fsck_ufs(1M) has nits when dealing with already mounted file.

6425930

Node panics with CMM:cluster lost operational quorum in amd64.

6361537

create_ramdisk: cannot seek to offset -1.

6393691

Add etc/cluster/nodeid entry to filelist.ramdisk.

6344611

create_ramdisk needs to react less poorly to missing files or directories.

6462748

devfsadm link removal does not provide full interpose support.

fssnap Support

Sun Cluster does not support fssnap, which is a feature of UFS. You can use fssnap on local systems that are not controlled by Sun Cluster. The following restrictions apply to fssnap support:

Solaris Volume Manager GUI

The Enhanced Storage module of Solaris Management Console (Solaris Volume Manager) is not compatible with Sun Cluster software. Use the command-line interface or Sun Cluster utilities to configure Solaris Volume Manager software.

Loopback File System (LOFS)

Sun Cluster 3.2 software does not support the use of LOFS under certain conditions. If you must enable LOFS on a cluster node, such as when you configure non-global zones, first determine whether the LOFS restrictions apply to your configuration. See the guidelines in Solaris OS Feature Restrictions in Sun Cluster Software Installation Guide for Solaris OS for more information about the restrictions and workarounds that permit the use of LOFS when restricting conditions exist.

Accessibility Features for People With Disabilities

To obtain accessibility features that have been released since this media was published, consult Section 508 product assessments, which are available from Sun upon request, to determine which versions are best suited for deploying accessible solutions.

Commands Modified in This Release

This section describes changes to the Sun Cluster command interfaces that might cause user scripts to fail.

Object-Oriented Command Line Interface

Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster software still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For more information about the object-oriented command set, see the Intro(1CL) man page. For a list of object-oriented commands for common Sun Cluster procedures, see the Sun Cluster Quick Reference.

scinstall Command

The following options to the scinstall command have changed in the Sun Cluster 3.2 release:

scconf Command

The -q option of the scconf command has been modified to distinguish between shared local quorum devices (SCSI) and other types of quorum devices (including NetApp NAS devices). Use the name suboption to specify the name of the attached shared-storage device when adding or removing a shared quorum device to or from the cluster. This suboption can also be used with the change form of the command to change the state of a quorum device. The globaldev suboption can still be used for SCSI shared-storage devices, but the name suboption must be used for all other types of shared storage devices. For more information about this change to scconf and working with quorum devices, see scconf(1M), scconf_quorum_dev_netapp_nas(1M), and scconf_quorum_dev_scsi(1M).

Resource Properties

It is no longer necessary to modify the Network_resources_used resource property directly. Instead, use the Resource_dependencies property. The RGM automatically updates the Network_resources_used property based on the settings of the Resource_dependencies property. For more information about the current uses of these two resource properties, see r_properties(5).
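
For example, rather than listing a logical-hostname resource in Network_resources_used, the dependency is now expressed directly; the resource names below are hypothetical.

# clresource set -p Resource_dependencies=lh-rs app-rs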

Product Name Changes

This section provides information about product name changes for applications that Sun Cluster software supports. Depending on the Sun Cluster software release that you are running, your Sun Cluster documentation might not reflect the following product name changes.


Note –

Sun Cluster 3.2 software is distributed under Solaris Cluster 3.2 and Sun Java Availability Suite.


Current Product Name                              Former Product Name
Sun Cluster Manager                               SunPlex Manager
Sun Cluster Agent Builder                         SunPlex Agent Builder
Sun Java System Application Server                Sun ONE Application Server
Sun Java System Application Server EE (HADB)      Sun Java System HADB
Sun Java System Message Queue                     Sun ONE Message Queue
Sun Java System Web Server                        Sun ONE Web Server; iPlanet Web Server; Netscape™ HTTP

Supported Products

This section describes the supported software and memory requirements for Sun Cluster 3.2 software.

Platform 

Operating System 

Volume Manager 

Cluster Feature 

SPARC 

Solaris 9 

Solaris Volume Manager. 

Solaris Volume Manager for Sun Cluster. 

VERITAS Volume Manager 4.1. This support requires VxVM 4.1 MP2. 

VERITAS Volume Manager 4.1 cluster feature. 

VERITAS Volume Manager components that are delivered as part of VERITAS Storage Foundation 4.1. This support requires VxVM 4.1 MP2. 

VERITAS Volume Manager 4.1 cluster feature. 

VERITAS Volume Manager components that are delivered as part of VERITAS Storage Foundation 5.0. This support requires VxVM 5.0 MP1. 

VERITAS Volume Manager 5.0 cluster feature. 

Solaris 10 

Solaris Volume Manager. 

Solaris Volume Manager for Sun Cluster. 

VERITAS Volume Manager 4.1. This support requires VxVM 4.1 MP2. 

VERITAS Volume Manager 4.1 with cluster feature. 

VERITAS Volume Manager components that are delivered as part of VERITAS Storage Foundation 5.0. This support requires VxVM 5.0 MP1. 

VERITAS Volume Manager 5.0 cluster feature. 

x86 

Solaris 10 

Solaris Volume Manager. 

Solaris Volume Manager for Sun Cluster. 

VERITAS Volume Manager components that are delivered as part of VERITAS Storage Foundation 4.1. 

N/A - Sun Cluster 3.2 does not support the VxVM cluster feature on the x86 platform. 

Platform 

Operating System 

File System 

Features and External Volume Management 

SPARC 

Solaris 9 

Solaris UFS. 

N/A 

Sun StorEdge QFS: 

N/A 

QFS 4.5 Standalone Filesystem. 

Features: 

  • HA-NFS

  • HA-Oracle

External Volume Management: 

  • SVM

  • VxVM

QFS 4.5 — Shared QFS File System. 

Feature: 

  • Oracle RAC

External Volume Management: 

  • SVM Cluster File Manager

QFS 4.6. 

Features: 

  • COTC-Shared QFS clients outside the cluster

  • HA-SAM Failover

VERITAS File System 4.1. 

N/A 

VERITAS File System components that are delivered as part of VERITAS Storage Foundation 4.1 and 5.0. 

N/A 

SPARC 

Solaris 10 

Solaris ZFS. 

N/A 

Sun StorEdge QFS: 

N/A 

QFS 4.5 Standalone Filesystem. 

Features: 

  • HA-NFS

  • HA-Oracle

External Volume Management: 

  • SVM

  • VxVM

QFS 4.5 — Shared QFS File System. 

Feature: 

  • Oracle RAC

External Volume Management: 

  • SVM Cluster File Manager

QFS 4.6. 

Features: 

  • COTC-Shared QFS clients outside the cluster

  • HA-SAM Failover

VERITAS File System 4.1. 

N/A 

VERITAS File System components that are delivered as part of VERITAS Storage Foundation 4.1 and 5.0. 

N/A 

x86 

Solaris 10 

Solaris UFS. 

N/A 

Solaris ZFS. 

N/A 

Sun StorEdge QFS: 

N/A 

QFS 4.5 Standalone Filesystem 

Features: 

  • HA-NFS

  • HA-Oracle

External Volume Management: 

  • SVM

  • VxVM

QFS 4.5 — Shared QFS File System. 

Feature: 

  • Oracle RAC

External Volume Management: 

  • SVM Cluster File Manager

QFS 4.6. 

Features: 

  • COTC-Shared QFS clients outside the cluster

  • HA-SAM Failover

Sun Cluster Security Hardening

Sun Cluster Security Hardening uses the Solaris operating system hardening techniques recommended by the Sun BluePrints™ program to achieve basic security hardening for clusters. The Solaris Security Toolkit automates the implementation of Sun Cluster Security Hardening.

The Sun Cluster Security Hardening documentation is available at http://www.sun.com/blueprints/0203/817-1079.pdf. You can also access the article from http://www.sun.com/software/security/blueprints. From this URL, scroll down to the Architecture heading to locate the article “Securing the Sun Cluster 3.x Software.” The documentation describes how to secure Sun Cluster 3.x deployments in a Solaris environment. The description includes the use of the Solaris Security Toolkit and other best-practice security techniques recommended by Sun security experts. The following data services are supported by Sun Cluster Security Hardening:

Known Issues and Bugs

The following known issues and bugs affect the operation of the Sun Cluster 3.2 release. Bugs and issues are grouped into the following categories:

Administration

The clnode remove -f Option Fails to Remove the Node with the Solaris Volume Manager Device Group (6471834)

Problem Summary: The clnode remove --force command should remove nodes from the metasets. The Sun Cluster System Administration Guide for Solaris OS provides procedures for removing a node from the cluster. These procedures instruct the user to run the metaset command for the Solaris Volume Manager disk set removal prior to running clnode remove.

Workaround: If the procedures were not followed, it might be necessary to clear the stale node data from the CCR in the usual way: From an active cluster node, use the metaset command to clear the node from the Solaris Volume Manager disk sets. Then run clnode clear --force obsolete_nodename.

scsnapshot is Nonfunctional With Solaris 10 SUNWCluster Meta Cluster (6477905)

Problem Summary: On a cluster installed with the Solaris 10 End User software group (SUNWCuser), running the scsnapshot command might fail with the following error:


# scsnapshot -o
…
/usr/cluster/bin/scsnapshot[228]: /usr/perl5/5.6.1/bin/perl:  not found

Workaround: Do either of the following:

Entries in the Auxnodelist Property Causes SEGV During Scalable Resource Creation (6494243)

Problem Summary: The Auxnodelist property of the shared-address resource cannot be used during shared-address resource creation. This limitation causes validation errors and a SEGV when the scalable resource that depends on this shared-address network resource is created. The scalable resource's validate error message is in the following format:


Method methodname (scalable svc) on resource resourcename stopped or terminated 
due to receipt of signal 11

Also, the core file is generated from ssm_wrapper. Users will not be able to set the Auxnodelist property and thus cannot identify the cluster nodes that can host the shared address but never serve as primary.

Workaround: On one node, re-create the shared-address resource without specifying the Auxnodelist property. Then rerun the scalable-resource creation command and use the shared-address resource that you re-created as the network resource.

clquorumserver Start and Stop Commands Should Set the Startup State Properly for Next Boot (6496008)

Problem Summary: The Quorum Server command clquorumserver does not set the state for the startup mechanism correctly for the next reboot.

Workaround: Perform the following tasks to start or stop Quorum Server software.

How to Start Quorum Server Software on the Solaris 10 OS

  1. Display the status of the quorumserver service.


    # svcs -a | grep quorumserver
    

    If the service is disabled, output appears similar to the following:


    disabled        3:33:45 svc:/system/cluster/quorumserver:default
  2. Start Quorum Server software.

    • If the quorumserver service is disabled, use the svcadm enable command.


      # svcadm enable svc:/system/cluster/quorumserver:default
      
    • If the quorumserver service is online, use the clquorumserver command.


      # clquorumserver start +
      

How to Stop Quorum Server Software on the Solaris 10 OS

    Disable the quorumserver service.


    # svcadm disable svc:/system/cluster/quorumserver:default
    

How to Start Quorum Server Software on the Solaris 9 OS

  1. Start Quorum Server software.


    # clquorumserver start +
    
  2. Rename the /etc/rc2.d/.S99quorumserver file as /etc/rc2.d/S99quorumserver.


    # mv /etc/rc2.d/.S99quorumserver /etc/rc2.d/S99quorumserver
    

How to Stop Quorum Server Software on the Solaris 9 OS

  1. Stop Quorum Server software.


    # clquorumserver stop +
    
  2. Rename the /etc/rc2.d/S99quorumserver file as /etc/rc2.d/.S99quorumserver so that the software does not restart at the next boot.


    # mv /etc/rc2.d/S99quorumserver /etc/rc2.d/.S99quorumserver
    

Data Services

Creation of Node Agent Resource for Sun Cluster HA for Sun Java System Application Server Succeeds Even if Resource Dependency is Not Set on Domain Administration Server (DAS) Resource (6262459)

Problem Summary: When creating the node agent (NA) resource in Sun Cluster HA for Application Server, the resource gets created even if there is no dependency set on the DAS resource. The command should error out if the dependency is not set, because a DAS resource must be online in order to start the NA resource.

Workaround: While creating the NA resource, make sure you set a resource dependency on the DAS resource.

New Variable in HA MySQL Patch Must be Configured for All New Instances (6516322)

Problem Summary: The HA MySQL patch adds a new variable called MYSQL_DATADIR in the mysql_config file. This new variable must point to the directory where the MySQL configuration file, my.conf, is stored. If this variable is not configured correctly, the database preparation with mysql_register will fail.

Workaround: Point the MYSQL_DATADIR variable to the directory where the MySQL configuration file, my.conf, is stored.

Installation

Autodiscovery With InfiniBand Configurations Can Sometimes Suggest Two Paths Using the Same Adapter (6299097)

Problem Summary: If InfiniBand is used as the cluster transport and there are two adapters on each node with two ports per adapter and a total of two switches, the scinstall utility's adapter autodiscovery could suggest two transport paths that use the same adapter.

Workaround: Manually specify the transport adapters on each node.

IPv6 Scalable Service Support is Not Enabled by Default (6332656)

Problem Summary: IPv6 plumbing on the interconnects, which is required for forwarding of IPv6 scalable service packets, will no longer be enabled by default. The IPv6 interfaces, as seen when using the ifconfig command, will no longer be plumbed on the interconnect adapters by default.

Workaround: Manually enable IPv6 scalable service support.

How to Manually Enable IPv6 Scalable Service Support

Before You Begin

Ensure that you have prepared all cluster nodes to run IPv6 services. These tasks include proper configuration of network interfaces, server/client application software, name services, and routing infrastructure. Failure to do so might result in unexpected failures of network applications. For more information, see your Solaris system-administration documentation for IPv6 services.

  1. On each node, add the following entry to the /etc/system file.


    set cl_comm:ifk_disable_v6=0
    
  2. On each node, enable IPv6 plumbing on the interconnect adapters.


    # /usr/cluster/lib/sc/config_ipv6
    

    The config_ipv6 utility brings up an IPv6 interface on all cluster interconnect adapters that have a link-local address. The utility enables proper forwarding of IPv6 scalable service packets over the interconnects.

    Alternately, you can reboot each cluster node to activate the configuration change.

clnode add Fails to Add a Node from an XML File if the File Contains Direct-Connect Transport Information (6485249)

Problem Summary: If the clnode add command is attempted using an XML file that is using direct-connect transport, the command misinterprets the cable information and adds the wrong configuration information. As a result, the joining node is not able to join the cluster.

Workaround: Use the scinstall command to add a node to the cluster when the cluster transport is directly connected.

The /etc/nsswitch.conf File is Not Updated with hosts and netmasks Database Information During Non-Global Zone Installation (6345227)

Problem Summary: The scinstall command updates the /etc/nsswitch.conf file to add the cluster entry for the hosts and netmasks databases. This change updates the /etc/nsswitch.conf file for the global zone. But when a non-global zone is created and installed, the non-global zone receives its own copy of the /etc/nsswitch.conf file. The /etc/nsswitch.conf files on the non-global zones will not have the cluster entry for the hosts and netmasks databases. Any attempt to resolve cluster-specific private hostnames and IP addresses from within a non-global zone by using getXbyY queries will fail.

Workaround: Manually update the /etc/nsswitch.conf file for non-global zones with the cluster entry for the hosts and netmasks databases. This update ensures that cluster-specific private-hostname and IP-address resolutions are available within non-global zones.

Localization

Translated Messages for Quorum Server are Delivered as Part of the Core Translation Packages (6482813)

Problem Summary: Translated messages for the Quorum Server administration programs, such as clquorumserver, are delivered as part of the core translation packages. As a result, Quorum Server messages appear only in English. The Quorum server translation packages must be separated from the core translation packages and installed on the quorum server system.

Workaround: Install the following packages on the host where Quorum Server software is installed:

If the Japanese man page is needed on the quorum server, install the SUNWjscman (Japanese man page) package.

Installer Displays Incorrect Swap Size for the Sun Cluster 3.2 Simplified Chinese Version (6495984)

Problem Summary: The Sun Cluster 3.2 installer displays a warning message about insufficient swap space when installing the Sun Cluster 3.2 Simplified Chinese version of the software. The installer reports an incorrect swap size of 0.0 KB on the system requirements check screen.

Workaround: If the swap size is larger than the system requirement, you can safely ignore this problem. The SC 3.2 installer in the C or English locale can be used for installation; that version checks the swap size correctly.

Runtime

SAP cleanipc Binary Needs User_env Parameter for LD_LIBRARY_PATH (4996643)

Problem Summary: The cleanipc binary fails if the runtime linking environment does not contain the /sapmnt/SAPSID/exe path.

Workaround: As the Solaris root user, add the /sapmnt/SAPSID/exe path to the default library path in the ld.config file.

To configure the runtime linking environment default library path for 32–bit applications, enter the following command:


# crle -u -l /sapmnt/SAPSID/exe

To configure the runtime linking environment default library path for 64–bit applications, enter the following command:


# crle -64 -u -l /sapmnt/SAPSID/exe

Node Panics Due to a metaclust Return Step Error: RPC: Program Not Registered (6256220)

Problem Summary: When a cluster shutdown is performed, the UCMMD can go into a reconfiguration on one or more of the nodes if one of the nodes leaves the cluster slightly ahead of the UCMMD. When this occurs, the shutdown stops the rpc.mdcommd command on the node while the UCMMD is trying to perform the return step. In the return step, the metaclust command gets an RPC timeout and exits the step with an error, due to the missing rpc.mdcommd process. This error causes the UCMMD to abort the node, which might cause the node to panic.

Workaround: You can safely ignore this problem. When the node boots back up, Sun Cluster software detects this condition and allows the UCMMD to start, despite the fact that an error occurred in the previous reconfiguration.

Sun Cluster Resource Validation Does Not Accept the Hostname for IPMP Groups for the netiflist Property (6383994)

Problem Summary: Sun Cluster resource validation does not accept the hostname for IPMP groups for the netiflist property during logical-hostname or shared-address resource creation.

Workaround: Use the node ID instead of the node name when you specify the IPMP group names during logical-hostname and shared-address resource creation.

Upgrade

The vxlufinish Script Returns an Error When the Root Disk is Encapsulated (6448341)

Problem Summary: This problem is seen when the original disk is root encapsulated and a live upgrade is attempted from VxVM 3.5 on Solaris 9 8/03 OS to VxVM 5.0 on Solaris 10 6/06 OS. The vxlufinish script fails with the following error.


# ./vxlufinish -u 5.10

    VERITAS Volume Manager VxVM 5.0
    Live Upgrade finish on the Solairs release <5.10>

    Enter the name of the alternate root diskgroup: altrootdg
ld.so.1: vxparms: fatal: libvxscsi.so: open failed: No such file or directory
ld.so.1: vxparms: fatal: libvxscsi.so: open failed: No such file or directory
Killed
ld.so.1: ugettxt: fatal: libvxscsi.so: open failed: No such file or directory
ERROR:vxlufinish Failed: /altroot.5.10/usr/lib/vxvm/bin/vxencap -d -C 10176
-c -p 5555 -g
    -g altrootdg rootdisk=c0t1d0s2
    Please install, if 5.0 or higher version of VxVM is not installed
    on alternate bootdisk.

Workaround: Use the standard upgrade or dual-partition upgrade method instead.

Contact Sun support or your Sun representative to learn whether Sun Cluster 3.2 Live Upgrade support for VxVM 5.0 becomes available at a later date.

Live Upgrade Should Support Mounting Global Devices From Boot Disk (6433728)

Problem Summary: During live upgrade, the lucreate and luupgrade commands fail to change the DID names in the alternate boot environment that corresponds to the /global/.devices/node@N entry.

Workaround: Before you start the live upgrade, perform the following steps on each cluster node.

  1. Become superuser.

  2. Back up the /etc/vfstab file.


    # cp /etc/vfstab /etc/vfstab.old
    
  3. Open the /etc/vfstab file for editing.

  4. Locate the line that corresponds to /global/.devices/node@N.

  5. Edit the global device entry.

    • Change the DID names to the physical names.

      Change /dev/did/{r}dsk/dYsZ to /dev/{r}dsk/cNtXdYsZ.

    • Remove global from the entry.

    The following example shows the name of DID device d3s3, which corresponds to /global/.devices/node@2, changed to its physical device names and with the global entry removed:


    Original:
    /dev/did/dsk/d3s3    /dev/did/rdsk/d3s3    /global/.devices/node@2   ufs   2   no   global
    
    Changed:
    /dev/dsk/c0t0d0s3    /dev/rdsk/c0t0d0s3    /global/.devices/node@2   ufs   2   no   -
  6. When the /etc/vfstab file is modified on all cluster nodes, perform live upgrade of the cluster, but stop before you reboot from the upgraded alternate boot environment.

  7. On each node, on the current, unupgraded, boot environment, restore the original /etc/vfstab file.


    # cp /etc/vfstab.old /etc/vfstab
    
  8. In the alternate boot environment, open the /etc/vfstab file for editing.

  9. Locate the line that corresponds to /global/.devices/node@N and replace the dash (-) at the end of the entry with the word global.


    /dev/dsk/cNtXdYsZ    /dev/rdsk/cNtXdYsZ    /global/.devices/node@N   ufs   2   no   global
    
  10. Reboot the node from the upgraded alternate boot environment.

    The DID names are substituted in the /etc/vfstab file automatically.

The vxlustart Script Fails to Create the Alternate Boot Environment During a Live Upgrade (6445430)

Problem Summary: This problem is seen when upgrading VERITAS Volume Manager (VxVM) during a Sun Cluster live upgrade. The vxlustart script is used to upgrade the Solaris OS and VxVM from the previous version. The script fails with error messages similar to the following:


# ./vxlustart -u 5.10 -d c0t1d0 -s OSimage

   VERITAS Volume Manager VxVM 5.0.
   Live Upgrade is now upgrading from 5.9 to <5.10>
…
ERROR: Unable to copy file systems from boot environment <sorce.8876> to BE <dest.8876>.
ERROR: Unable to populate file systems on boot environment <dest.8876>.
ERROR: Cannot make file systems for boot environment <dest.8876>.
ERROR: vxlustart: Failed: lucreate -c sorce.8876 -C /dev/dsk/c0t0d0s2 
-m -:/dev/dsk/c0t1d0s1:swap -m /:/dev/dsk/c0t1d0s0:ufs 
-m /globaldevices:/dev/dsk/c0t1d0s3:ufs -m /mc_metadb:/dev/dsk/c0t1d0s7:ufs 
-m /space:/dev/dsk/c0t1d0s4:ufs -n dest.8876

Workaround: Use the standard upgrade or dual-partition upgrade method if you are upgrading the cluster to VxVM 5.0.

Contact Sun support or your Sun representative to learn whether Sun Cluster 3.2 Live Upgrade support for VxVM 5.0 becomes available at a later date.

vxio Major Numbers Different Across the Nodes When the Root Disk is Encapsulated (6445917)

Problem Summary: For clusters that run VERITAS Volume Manager (VxVM), a standard upgrade or dual-partition upgrade of any of the following software fails if the root disk is encapsulated:

The cluster node panics and fails to boot after upgrade. This is due to the major-number or minor-number changes made by VxVM during the upgrade.

Workaround: Unencapsulate the root disk before you begin the upgrade.


Caution –

If the above procedure is not followed correctly, you might experience serious unexpected problems on all nodes being upgraded. Also, unencapsulating and re-encapsulating the root disk each cause an additional automatic reboot of the node, which increases the number of reboots that are required during the upgrade.


Cannot Use Zones Following Live Upgrade From Sun Cluster Version 3.1 on Solaris 9 to Version 3.2 on Solaris 10 (6509958)

Problem Summary: Following a live upgrade from Sun Cluster version 3.1 on Solaris 9 to version 3.2 on Solaris 10, zones cannot be used properly with the cluster software. The problem is that the pspool data is not created for the Sun Cluster packages. So those packages that must be propagated to the non-global zones, such as SUNWsczu, are not propagated correctly.

Workaround: After the Sun Cluster packages have been upgraded by using the scinstall -R command but before the cluster has booted into cluster mode, run the following script twice:

Instructions for Using the Script

Before You Begin

Prepare and run this script in one of the following ways:

  1. Become superuser.

  2. Create a script with the following content.

    #!/bin/ksh
    
    # Platform, package source, and alternate root path. Each value can be
    # overridden by setting the corresponding environment variable before
    # running the script (see the variable descriptions in the next step).
    typeset PLATFORM=${PLATFORM:-`uname -p`}
    typeset PATHNAME=${PATHNAME:-/cdrom/cdrom0/Solaris_${PLATFORM}/Product/sun_cluster/Solaris_10/Packages}
    typeset BASEDIR=${BASEDIR:-/}
    
    # For each package on the media that is already installed under BASEDIR,
    # create the package's pspool directory and spool a copy of the package
    # into it with pkgadd -s.
    cd $PATHNAME
    for i in *
    do
        if pkginfo -R ${BASEDIR} $i >/dev/null 2>&1
        then
            mkdir -p ${BASEDIR}/var/sadm/pkg/$i/save/pspool
            pkgadd -d . -R ${BASEDIR} -s ${BASEDIR}/var/sadm/pkg/$i/save/pspool $i
        fi
    done
  3. Set the variables PLATFORM, PATHNAME, and BASEDIR.

    Either set these variables as environment variables or modify the values in the script directly.

    PLATFORM

    The name of the platform. For example, it could be sparc or x86. By default, the PLATFORM variable is set to the output of the uname -p command.

    PATHNAME

    A path to the device from where the Sun Cluster framework or data-service packages can be installed. This value corresponds to the -d option in the pkgadd command.

    As an example, for Sun Cluster framework packages, this value would be of the following form:


    /cdrom/cdrom0/Solaris_${PLATFORM}/Product/sun_cluster/Solaris_10/Packages

    For the data services packages, this value would be of the following form:


    /cdrom/cdrom0/Solaris_${PLATFORM}/Product/sun_cluster_agents/Solaris_10/Packages
    BASEDIR

    The full path name of a directory to use as the root path, which corresponds to the -R option in the pkgadd command. For live upgrade, set this value to the root path that is used with the -R option in the scinstall command. By default, the BASEDIR variable is set to the root (/) file system.

  4. Run the script, once for the Sun Cluster framework packages and once for the data-service packages.

    After the script is run, you should see the following message at the command prompt for each package:


    Transferring pkgname package instance

    Note –

    If the pspool directory already exists for a package or if the script is run twice for the same set of packages, the following error is displayed at the command prompt:


    Transferring pkgname package instance
    pkgadd: ERROR: unable to complete package transfer
        - identical version of pkgname already exists on destination device

    This is a harmless message and can be safely ignored.


  5. After you run the script for both framework packages and data-service packages, boot your nodes into cluster mode.

Can't Add Node to an Existing Sun Cluster 3.2–Patched Cluster Without Adding the Sun Cluster 3.2 Core Patch to the Node (6554107)

Problem Summary: Adding a new cluster node without ensuring that the node has the same patches as the existing cluster nodes might cause the cluster nodes to panic.

Workaround: Before adding nodes to the cluster, ensure that the new node is first patched to the same level as the existing cluster nodes. Failure to do this might cause the cluster nodes to panic.

Patches and Required Firmware Levels

This section provides information about patches for Sun Cluster configurations. If you are upgrading to Sun Cluster 3.2 software, see Chapter 8, Upgrading Sun Cluster Software, in Sun Cluster Software Installation Guide for Solaris OS. Applying a Sun Cluster 3.2 Core patch does not provide the same result as upgrading the software to the Sun Cluster 3.2 release.


Note –

Read the patch README before applying or removing any patch.


If you are using the rebooting patch (node) method to install the Sun Cluster core patch, 125510 (S9/SPARC), 125511 (S10/SPARC), or 125512 (S10/x64), you must have the -02 version of the patch installed before you can install higher versions of the patch. If you do not have the -02 patch installed and wish to install -03 or higher (when available), you must use the rebooting cluster method.

See the following list for examples of patching scenarios:


Note –

You must be a registered SunSolve™ user to view and download the required patches for the Sun Cluster product. If you do not have a SunSolve account, contact your Sun service representative or sales engineer, or register online at http://sunsolve.sun.com.


Applying the Sun Cluster 3.2 Core Patch

Complete the following procedure to apply the Sun Cluster 3.2 core patch.

How to Apply the Sun Cluster 3.2 Core Patch

  1. Install the patch using the usual rebooting patch procedure for a core patch.

  2. Verify that the patch has been installed correctly on all nodes and is functioning properly.

  3. Register the new versions of resource types SUNW.HAStoragePlus, SUNW.ScalDeviceGroup, and SUNW.ScalMountPoint that are updated in this patch. Upgrade any existing resources of these types to the new versions.

    For information about registering a resource type, see Registering a Resource Type in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.


    Caution – Caution –

    If the Sun Cluster 3.2 core patch is removed, any resources that were upgraded in step 3 must be downgraded to the earlier resource type versions. The procedure for downgrading will require planned downtime of these services. Therefore, do not perform step 3 until you are ready to commit the Sun Cluster 3.2 core patch permanently to your cluster.


Removing the Sun Cluster 3.2 Core Patch

Complete the following procedure to remove the Sun Cluster 3.2 core patch.

How to Remove the Sun Cluster 3.2 Core Patch

  1. List the resource types on the cluster.


    # clrt list
    
  2. If the list returns SUNW.HAStoragePlus:5, SUNW.ScalDeviceGroup:2, or SUNW.ScalMountPoint:2, you must remove these resource types. For instructions on removing a resource type, see How to Remove a Resource Type in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

  3. Reboot all nodes of the cluster into noncluster, single user mode.

    For instructions on rebooting cluster nodes into noncluster, single user mode, see How to Boot a Cluster Node in Noncluster Mode in Sun Cluster System Administration Guide for Solaris OS.

  4. Remove the Sun Cluster 3.2 core patch from each node on which you installed the patch.


    # patchrm patch-id
    
  5. Reboot into cluster mode all of the nodes from which you removed the Sun Cluster 3.2 core patch.

    Rebooting all of the nodes from which you removed the Sun Cluster 3.2 core patch before rebooting any unaffected nodes ensures that the cluster is formed with the correct information in the CCR. If all nodes on the cluster were patched with the core patch, you can reboot the nodes into cluster mode in any order.

    For instructions on rebooting nodes into cluster mode, see How to Reboot a Cluster Node in Sun Cluster System Administration Guide for Solaris OS.

  6. Reboot any remaining nodes into cluster mode.

Patch Management Tools

The PatchPro patch management technology is now available as Patch Manager 2.0 for Solaris 9 OS and as Sun Update Connection 1.0 for Solaris 10 OS.

If some patches must be applied when the node is in noncluster mode, you can apply them in a rolling fashion, one node at a time, unless a patch's instructions require that you shut down the entire cluster. Follow procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and boot it into noncluster mode. For ease of installation, consider applying all patches at once to a node that you place in noncluster mode.

SunSolve Online

The SunSolve Online Web site provides 24-hour access to the most up-to-date information regarding patches, software, and firmware for Sun products. Access the SunSolve Online site at http://sunsolve.sun.com for the most current matrixes of supported software, firmware, and patch revisions.

Sun Cluster 3.2 third-party patch information is provided through a SunSolve Info Doc. This Info Doc page provides any third-party patch information for specific hardware that you intend to use in a Sun Cluster 3.2 environment. To locate this Info Doc, log on to SunSolve. From the SunSolve home page, type Sun Cluster 3.x Third-Party Patches in the search criteria box.

Before you install Sun Cluster 3.2 software and apply patches to a cluster component (Solaris OS, Sun Cluster software, volume manager software, data services software, or disk hardware), review each README file that accompanies the patches that you retrieved. All cluster nodes must have the same patch level for proper cluster operation.

For specific patch procedures and tips on administering patches, see Chapter 10, Patching Sun Cluster Software and Firmware, in Sun Cluster System Administration Guide for Solaris OS.

Sun Cluster 3.2 Documentation

The Sun Cluster 3.2 user documentation set consists of the following collections:

The Sun Cluster 3.2 user documentation is available in PDF and HTML format at the following web site:

http://docs.sun.com/app/docs/prod/sun.cluster32


Note –

Beginning with Sun Cluster 3.2, documentation for individual data services will not be translated. Documentation for individual data services will be available only in English.


Searching Sun Product Documentation

Besides searching for Sun product documentation from the docs.sun.com web site, you can use a search engine of your choice by typing the following syntax in the search field:


search-term site:docs.sun.com

For example, to search for “broker,” type the following:


broker site:docs.sun.com

To include other Sun web sites in your search (for example, java.sun.com, www.sun.com, developers.sun.com), use “sun.com” in place of “docs.sun.com” in the search field.

Sun Cluster 3.2 Software Manuals for Solaris OS

Table 1 Sun Cluster 3.2 Software Collection for Solaris OS Software Manuals

Part Number 

Book Title 

820–0335 

Sun Cluster 3.2 Documentation Center

819-2969 

Sun Cluster Concepts Guide for Solaris OS

819-2972 

Sun Cluster Data Services Developer’s Guide for Solaris OS

819-2974 

Sun Cluster Data Services Planning and Administration Guide for Solaris OS

819-2973 

Sun Cluster Error Messages Guide for Solaris OS

819-2968 

Sun Cluster Overview for Solaris OS

819–6811 

Sun Cluster Quick Reference

819-3055 

Sun Cluster Reference Manual for Solaris OS

819-2970 

Sun Cluster Software Installation Guide for Solaris OS

819–0912 

Sun Cluster Quick Start Guide for Solaris OS

819-2971 

Sun Cluster System Administration Guide for Solaris OS

Sun Cluster 3.2 Data Service Manuals for Solaris OS (SPARC Platform Edition)

Table 2 Sun Cluster 3.2 Software Collection for Solaris OS (SPARC Platform Edition): Individual Data Service Manuals

Part Number    Book Title

819-3056       Sun Cluster Data Service for Agfa IMPAX Guide for Solaris OS
819-2975       Sun Cluster Data Service for Apache Guide for Solaris OS
819-3057       Sun Cluster Data Service for Apache Tomcat Guide for Solaris OS
819-3058       Sun Cluster Data Service for DHCP Guide for Solaris OS
819-2977       Sun Cluster Data Service for DNS Guide for Solaris OS
819-5415       Sun Cluster Data Service for Kerberos Guide for Solaris OS
819-2982       Sun Cluster Data Service for MaxDB Guide for Solaris OS
819-3059       Sun Cluster Data Service for MySQL Guide for Solaris OS
819-3060       Sun Cluster Data Service for N1 Grid Service Provisioning System for Solaris OS
819-0693       Sun Cluster Data Service for NetBackup Guide for Solaris OS
819-2979       Sun Cluster Data Service for NFS Guide for Solaris OS
819-3061       Sun Cluster Data Service for Oracle Application Server Guide for Solaris OS
819-3062       Sun Cluster Data Service for Oracle E-Business Suite Guide for Solaris OS
819-2980       Sun Cluster Data Service for Oracle Guide for Solaris OS
819-2981       Sun Cluster Data Service for Oracle RAC Guide for Solaris OS
819-5578       Sun Cluster Data Service for PostgreSQL Guide for Solaris OS
819-3063       Sun Cluster Data Service for Samba Guide for Solaris OS
819-2983       Sun Cluster Data Service for SAP Guide for Solaris OS
819-2984       Sun Cluster Data Service for SAP liveCache Guide for Solaris OS
819-2985       Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS
819-2986       Sun Cluster Data Service for Siebel Guide for Solaris OS
819-3069       Sun Cluster Data Service for Solaris Containers Guide
819-3064       Sun Cluster Data Service for Sun Grid Engine Guide for Solaris OS
819-2988       Sun Cluster Data Service for Sun Java System Application Server Guide for Solaris OS
819-2987       Sun Cluster Data Service for Sun Java System Application Server EE (HADB) Guide for Solaris OS
819-2989       Sun Cluster Data Service for Sun Java System Message Queue Guide for Solaris OS
819-2990       Sun Cluster Data Service for Sun Java System Web Server Guide for Solaris OS
819-3065       Sun Cluster Data Service for SWIFTAlliance Access Guide for Solaris OS
819-3066       Sun Cluster Data Service for SWIFTAlliance Gateway Guide for Solaris OS
819-2991       Sun Cluster Data Service for Sybase ASE Guide for Solaris OS
819-2992       Sun Cluster Data Service for WebLogic Server Guide for Solaris OS
819-3068       Sun Cluster Data Service for WebSphere Message Broker Guide for Solaris OS
819-3067       Sun Cluster Data Service for WebSphere MQ Guide for Solaris OS

Sun Cluster 3.2 Data Service Manuals for Solaris OS (x86 Platform Edition)

Table 3 Sun Cluster 3.2 Software Collection for Solaris OS (x86 Platform Edition): Individual Data Service Manuals

Part Number    Book Title

819-2975       Sun Cluster Data Service for Apache Guide for Solaris OS
819-3057       Sun Cluster Data Service for Apache Tomcat Guide for Solaris OS
819-3058       Sun Cluster Data Service for DHCP Guide for Solaris OS
819-2977       Sun Cluster Data Service for DNS Guide for Solaris OS
819-5415       Sun Cluster Data Service for Kerberos Guide for Solaris OS
819-2982       Sun Cluster Data Service for MaxDB Guide for Solaris OS
819-3059       Sun Cluster Data Service for MySQL Guide for Solaris OS
819-3060       Sun Cluster Data Service for N1 Grid Service Provisioning System for Solaris OS
819-2979       Sun Cluster Data Service for NFS Guide for Solaris OS
819-3061       Sun Cluster Data Service for Oracle Application Server Guide for Solaris OS
819-2980       Sun Cluster Data Service for Oracle Guide for Solaris OS
819-2981       Sun Cluster Data Service for Oracle RAC Guide for Solaris OS
819-5578       Sun Cluster Data Service for PostgreSQL Guide for Solaris OS
819-3063       Sun Cluster Data Service for Samba Guide for Solaris OS
819-2983       Sun Cluster Data Service for SAP Guide for Solaris OS
819-2985       Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS
819-3069       Sun Cluster Data Service for Solaris Containers Guide
819-3064       Sun Cluster Data Service for Sun Grid Engine Guide for Solaris OS
819-2987       Sun Cluster Data Service for Sun Java System Application Server EE (HADB) Guide for Solaris OS
819-2988       Sun Cluster Data Service for Sun Java System Application Server Guide for Solaris OS
819-2989       Sun Cluster Data Service for Sun Java System Message Queue Guide for Solaris OS
819-2990       Sun Cluster Data Service for Sun Java System Web Server Guide for Solaris OS
819-2992       Sun Cluster Data Service for WebLogic Server Guide for Solaris OS
819-3067       Sun Cluster Data Service for WebSphere MQ Guide for Solaris OS
819-3068       Sun Cluster Data Service for WebSphere Message Broker Guide for Solaris OS

Sun Cluster 3.1 - 3.2 Hardware Collection for Solaris OS (SPARC Platform Edition)

Table 4 Sun Cluster 3.1 - 3.2 Hardware Collection for Solaris OS (SPARC Platform Edition)

Part Number    Book Title

819-2993       Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS
819-2995       Sun Cluster 3.1 - 3.2 With SCSI JBOD Storage Device Manual for Solaris OS
819-3015       Sun Cluster 3.1 - 3.2 With Sun StorEdge 3310 or 3320 SCSI RAID Array Manual for Solaris OS
819-3016       Sun Cluster 3.1 - 3.2 With Sun StorEdge 3510 or 3511 FC RAID Array Manual for Solaris OS
819-3017       Sun Cluster 3.1 - 3.2 With Sun StorEdge 3900 Series or Sun StorEdge 6900 Series System Manual
819-3018       Sun Cluster 3.1 - 3.2 With Sun StorEdge 6120 Array Manual for Solaris OS
819-3020       Sun Cluster 3.1 - 3.2 With Sun StorEdge 6320 System Manual for Solaris OS
819-3021       Sun Cluster 3.1 - 3.2 With Sun StorEdge 9900 Series Storage Device Manual for Solaris OS
819-2996       Sun Cluster 3.1 - 3.2 With StorEdge A1000 Array, Netra st A1000 Array, or StorEdge A3500 System Manual
819-3022       Sun Cluster 3.1 - 3.2 With Sun StorEdge A3500FC System Manual for Solaris OS
819-2994       Sun Cluster 3.1 - 3.2 With Fibre Channel JBOD Storage Device Manual
817-5681       Sun Cluster 3.1 - 3.2 With SCSI JBOD Storage Device Manual for Solaris OS
819-3023       Sun Cluster 3.1 - 3.2 With Sun StorEdge T3 or T3+ Array Manual for Solaris OS
819-3019       Sun Cluster 3.1 - 3.2 With Sun StorEdge 6130 Array Manual
819-3024       Sun Cluster 3.1 - 3.2 With Network-Attached Storage Devices Manual for Solaris OS

Sun Cluster 3.1 - 3.2 Hardware Collection for Solaris OS (x86 Platform Edition)

Table 5 Sun Cluster 3.1 - 3.2 Hardware Collection for Solaris OS (x86 Platform Edition)

Part Number    Book Title

819-2993       Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS
817-0180       Sun Cluster 3.1 - 3.2 With Sun StorEdge 3310 or 3320 SCSI RAID Array Manual for Solaris OS
819-3024       Sun Cluster 3.1 - 3.2 With Network-Attached Storage Devices Manual for Solaris OS
819-3021       Sun Cluster 3.1 - 3.2 With Sun StorEdge 9900 Series Storage Device Manual for Solaris OS
819-3020       Sun Cluster 3.1 - 3.2 With Sun StorEdge 6320 System Manual for Solaris OS
819-3019       Sun Cluster 3.1 - 3.2 With Sun StorEdge 6130 Array Manual
819-3018       Sun Cluster 3.1 - 3.2 With Sun StorEdge 6120 Array Manual for Solaris OS
819-3016       Sun Cluster 3.1 - 3.2 With Sun StorEdge 3510 or 3511 FC RAID Array Manual for Solaris OS
819-2995       Sun Cluster 3.1 - 3.2 With SCSI JBOD Storage Device Manual for Solaris OS

Documentation Issues

This section discusses errors or omissions for documentation, online help, or man pages in the Sun Cluster 3.2 release.

Concepts Guide

This section discusses errors and omissions in the Sun Cluster Concepts Guide for Solaris OS.

x86: Sun Cluster Topologies for x86

In the section Sun Cluster Topologies for x86 in Sun Cluster Concepts Guide for Solaris OS, the following statement is out of date for the Sun Cluster 3.2 release: "Sun Cluster that is composed of x86 based systems supports two nodes in a cluster."

The statement should instead read as follows: "A Sun Cluster configuration that is composed of x86 based systems supports up to eight nodes in a cluster that runs Oracle RAC, or supports up to four nodes in a cluster that does not run Oracle RAC."

Software Installation Guide

This section discusses errors or omissions in the Sun Cluster Software Installation Guide for Solaris OS.

Missing Upgrade Preparation for Clusters that Run Sun Cluster Geographic Edition Software

If you upgrade a cluster that also runs Sun Cluster Geographic Edition software, you must perform additional preparation steps before you begin the Sun Cluster software upgrade. These steps include shutting down the Sun Cluster Geographic Edition infrastructure. Go instead to Chapter 4, Upgrading the Sun Cluster Geographic Edition Software, in Sun Cluster Geographic Edition Installation Guide. These procedures document when to return to the Sun Cluster Software Installation Guide to perform the Sun Cluster software upgrade.

Sun Cluster Data Services Planning and Administration Guide

This section discusses errors and omissions in the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Support of Scalable Services on Non-Global Zones

In Resource Type Properties in Sun Cluster Data Services Planning and Administration Guide for Solaris OS, the description of the Failover resource property is missing a statement concerning support of scalable services on non-global zones. This support applies to resources for which the Failover property of the resource type is set to FALSE and the Scalable property of the resource is set to TRUE. This combination of property settings indicates a scalable service that uses a SharedAddress resource to do network load balancing. In the Sun Cluster 3.2 release, you can configure a scalable service of this type in a resource group that runs in a non-global zone. But you cannot configure a scalable service to run in multiple non-global zones on the same node.
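The following sketch illustrates this combination of settings. All names are hypothetical (phys-schost-1, phys-schost-2, zone1, sa-rg, scal-rg, sa-res, scal-res), and resource-type stands for a data-service resource type whose Failover resource-type property is FALSE; adjust properties to your data service.


    # clresourcegroup create -n phys-schost-1:zone1,phys-schost-2:zone1 sa-rg
    # clressharedaddress create -g sa-rg -h logical-hostname sa-res
    # clresourcegroup create -p Maximum_primaries=2 -p Desired_primaries=2 \
        -n phys-schost-1:zone1,phys-schost-2:zone1 scal-rg
    # clresource create -g scal-rg -t resource-type -p Scalable=true \
        -p Resource_dependencies=sa-res scal-res
    # clresourcegroup online -eM sa-rg scal-rg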

Sun Cluster Data Service for MaxDB Guide

This section discusses errors and omissions in the Sun Cluster Data Service for MaxDB Guide for Solaris OS.

Changes to Sun Cluster Data Service for MaxDB Support on Non-Global Zones on SPARC and x86 Based Systems

The Sun Cluster Data Service for MaxDB supports non-global zones on SPARC and x86 based systems. The following changes should be made to the Sun Cluster Data Service for MaxDB Guide to reflect this support. The following steps can be performed on a cluster that has been configured to run in global zones. If you are installing your cluster to run in non-global zones, some of these steps might not be necessary, as indicated below.

Sun Cluster Data Service for SAP Guide

This section discusses errors and omissions in the Sun Cluster Data Service for SAP Guide for Solaris OS.

Changes to SAP Support on Non-Global Zones on SPARC and x86 Based Systems

The Sun Cluster Data Service for SAP supports non-global zones on SPARC and x86 based systems. The following changes should be made to the Sun Cluster Data Service for SAP Guide to reflect this support. The following steps can be performed on a cluster that has been configured to run in global zones. If you are installing your cluster to run in non-global zones, some of these steps might not be necessary, as indicated below.

Sun Cluster Data Service for SAP liveCache Guide

This section discusses errors and omissions in the Sun Cluster Data Service for SAP liveCache Guide for Solaris OS.

Changes to SAP liveCache Support on Non-Global Zones on SPARC and x86 Based Systems

The Sun Cluster Data Service for SAP liveCache supports non-global zones on SPARC and x86 based systems. The following changes should be made to the Sun Cluster Data Service for SAP liveCache Guide to reflect this support. The following steps can be performed on a cluster that has been configured to run in global zones. If you are installing your cluster to run in non-global zones, some of these steps might not be necessary, as indicated below.

Sun Cluster Data Service for SAP Web Application Server Guide

This section discusses errors and omissions in the Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS.

Support for SAP 7.0 for Sun Cluster HA for SAP Web Application Server (6461002)

In SAP 7.0 and NW2004SR1, when an SAP instance is started, the sapstartsrv process is started by default. The sapstartsrv process is not under the control of Sun Cluster HA for SAP Web Application Server. Therefore, when an SAP instance is stopped or failed over by Sun Cluster HA for SAP Web Application Server, the sapstartsrv process is not stopped.

To avoid starting the sapstartsrv process when an SAP instance is started by Sun Cluster HA for SAP Web Application Server, you must modify the startsap script. In addition, rename the /etc/rc3.d/S90sapinit file to /etc/rc3.d/xxS90sapinit on all the Sun Cluster nodes.
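For example, the rename can be performed as follows on each node. The startsap modification itself depends on your SAP installation and is not shown here.


    # mv /etc/rc3.d/S90sapinit /etc/rc3.d/xxS90sapinit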

Changes to SAP Web Application Server Support on Non-Global Zones on SPARC and x86 Based Systems

The Sun Cluster Data Service for SAP Web Application Server supports non-global zones on SPARC and x86 based systems. The following changes should be made to the Sun Cluster Data Service for SAP Web Application Server Guide to reflect this support. The following steps can be performed on a cluster that has been configured to run in global zones. If you are installing your cluster to run in non-global zones, some of these steps might not be necessary, as indicated below.

Setting Up the SAP Web Application Server on Non-Global Zones for HASP Configuration (6530281)

Use the following procedure to configure an HAStoragePlus resource for non-global zones.



How to Set Up the SAP Web Application Server on Non-Global Zones for HAStoragePlus Configuration

  1. On any node in the cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Create the scalable resource group with the non-global zones that will contain the HAStoragePlus resource.


       # clresourcegroup create \
         -p Maximum_primaries=m \
         -p Desired_primaries=n \
         [-n node-zone-list] hasp-resource-group
    
    -p Maximum_primaries=m

    Specifies the maximum number of active primaries for the resource group.

    -p Desired_primaries=n

    Specifies the number of active primaries on which the resource group should attempt to start.

    -n node-zone-list

    Specifies the list of nodename:zonename pairs as the node list of the HAStoragePlus resource group. The SAP instances can come online in these zones.

    hasp-resource-group

    Specifies the name of the scalable resource group to be added. This name must begin with an ASCII character.

  3. Register the resource type for the HAStoragePlus resource.


    # clresourcetype register HAStoragePlus
  4. Create the HAStoragePlus resource hasp-resource and define the SAP file system mount points and global device paths.


     # clresource create -g hasp-resource-group -t SUNW.HAStoragePlus \
         -p GlobalDevicePaths=/dev/global/dsk/d5s2,dsk/d6 \
         -p affinityon=false \
         -p FilesystemMountPoints=/sapmnt/JSC,/usr/sap/trans,/usr/sap/JSC hasp-resource
    
    -g hasp-resource-group

    Specifies the resource group name.

    GlobalDevicePaths

    Contains the following values:

    • Global device group names, such as sap-dg, dsk/d5

    • Paths to global devices, such as /dev/global/dsk/d5s2, /dev/md/sap-dg/dsk/d6

    FilesystemMountPoints

    Contains the following values:

    • Mount points of local or cluster file systems, such as /local/mirrlogA,/local/mirrlogB,/sapmnt/JSC,/usr/sap/JSC

    The HAStoragePlus resource is created in the enabled state.

  5. Register the resource type for the SAP application.


    # clresourcetype register resource-type
    
    resource-type

    Specifies the name of the resource type to be added. For more information, see Supported Products.

  6. Create a SAP resource group.


      # clresourcegroup create [-n node-zone-list] \
        -p RG_affinities=++hasp-resource-group resource-group-1
    
    resource-group-1

    Specifies the SAP services resource group.

  7. Add the SAP application resource to resource-group-1 and set the dependency to hasp-resource, the HAStoragePlus resource that you created in Step 4.


       # clresource create -g resource-group-1 -t SUNW.application \
         [-p "extension-property[{node-specifier}]"=value, …] \
         -p Resource_dependencies=hasp-resource resource
    
  8. Bring the SAP resource group online.


    # clresourcegroup online resource-group-1
    

System Administration Guide

This section discusses errors and omissions in the Sun Cluster System Administration Guide for Solaris OS.

Taking a Solaris Volume Manager Metaset From Nodes Booted in Non-Cluster Mode

How to Take a Solaris Volume Manager Metaset From Nodes Booted in Non-Cluster Mode

Use this procedure to run an application outside the cluster for testing purposes.

  1. Determine whether the quorum device is used in the Solaris Volume Manager metaset and, if so, whether the quorum device uses scsi2 or scsi3 reservations.


    # clquorum show
    
    1. If the quorum device is in the Solaris Volume Manager metaset, add a new quorum device that is not part of the metaset that you will take later in non-cluster mode.


      # clquorum add did
      
    2. Remove the old quorum device.


      # clquorum remove did
      
    3. If the quorum device uses a scsi2 reservation, scrub the scsi2 reservation from the old quorum device and verify that no scsi2 reservations remain, as shown in the combined example after this step.


      # /usr/cluster/lib/sc/pgre -c pgre_scrub -d /dev/did/rdsk/dids2
      # /usr/cluster/lib/sc/pgre -c pgre_inkeys -d /dev/did/rdsk/dids2
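
    For illustration only, with hypothetical DID devices d20 (the new quorum device) and d10 (the old quorum device that is in the metaset), the exchange and scrub might look like the following:


      # clquorum add d20
      # clquorum remove d10
      # /usr/cluster/lib/sc/pgre -c pgre_scrub -d /dev/did/rdsk/d10s2
      # /usr/cluster/lib/sc/pgre -c pgre_inkeys -d /dev/did/rdsk/d10s2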
      
  2. Evacuate the node you want to boot in non-cluster mode.


    # clresourcegroup evacuate -n targetnode
    
  3. Take offline any resource groups that contain HAStorage or HAStoragePlus resources and that contain devices or file systems affected by the metaset that you want to take later in non-cluster mode.


    # clresourcegroup offline resourcegroupname
    
  4. Disable all the resources in the resource groups you took offline.


    # clresource disable resourcename
    
  5. Unmanage the resource groups.


    # clresourcegroup unmanage resourcegroupname
    
  6. Take offline the corresponding device group or device groups.


    # cldevicegroup offline devicegroupname
    
  7. Disable the device group or device groups.


    # cldevicegroup disable devicegroupname
    
  8. Boot the passive node into non-cluster mode.


    # reboot -x
    
  9. Verify that the boot process has completed on the passive node before proceeding.

    • Solaris 9

      The login prompt will only appear after the boot process has completed, so no action is required.

    • Solaris 10


      # svcs -x
      
  10. Determine if there are any scsi3 reservations on the disks in the metaset or metasets. Perform the following commands on all disks in the metasets.


    # /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/dids2
    
  11. If there are any scsi3 reservations on the disks, scrub them.


    # /usr/cluster/lib/sc/scsi -c scrub -d /dev/did/rdsk/dids2
    
  12. Take the metaset on the evacuated node.


    # metaset -s name -C take -f
    
  13. Mount the file system or file systems that contain the devices that are defined in the metaset.


    # mount device mountpoint
    
  14. Start the application and perform the desired test. After finishing the test, stop the application.

  15. Reboot the node and wait until the boot process has finished.


    # reboot
    
  16. Bring online the device group or device groups.


    # cldevicegroup online -e devicegroupname
    
  17. Start the resource group or resource groups.


    # clresourcegroup online -eM resourcegroupname
    

Using Solaris IP Filtering with Sun Cluster

Sun Cluster supports Solaris IP Filtering with the following restrictions:

How to Set Up Solaris IP Filtering

  1. In the /etc/iu.ap file, modify the public NIC entries to list clhbsndr pfil as the module list.

    The pfil module must be the last module in the list; see the example entry after the note below.


    Note –

    If you have the same type of adapter for the private and the public network, your edits to the /etc/iu.ap file also push pfil onto the private network streams. However, the cluster transport module automatically removes all unwanted modules at stream creation, so pfil is removed from the private network streams.
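
    As an illustration for this step, a hypothetical /etc/iu.ap entry for a bge public adapter might look like the following. The file uses the autopush(1M) format (major name, minor, last minor, module list); bge is an assumed adapter type, not a requirement.


      # /etc/iu.ap entry for a hypothetical bge public adapter
      bge -1 0 clhbsndr pfil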


  2. To ensure that IP filtering works in non-cluster mode, update the /etc/ipf/pfil.ap file, as in the example that follows this step.

    Updates to this file differ slightly from the updates to the /etc/iu.ap file. See the IP Filter documentation for more information.
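
    For the same hypothetical bge adapter, an /etc/ipf/pfil.ap entry might look like this; only pfil is listed, matching the commented examples in the stock file.


      # /etc/ipf/pfil.ap entry for a hypothetical bge public adapter
      bge -1 0 pfil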

  3. Reboot all affected nodes.

    You can boot the nodes in a rolling fashion.

  4. Add filter rules to the /etc/ipf/ipf.conf file on all affected nodes. For information about IP filter rule syntax, see ipf(4).

    Keep in mind the following guidelines and requirements when you add filter rules to Sun Cluster nodes. An example rule fragment follows this list.

    • Sun Cluster fails over network addresses from node to node. No special procedure or code is needed at the time of failover.

    • All filtering rules that reference IP addresses of logical hostname and shared address resources must be identical on all cluster nodes.

    • Rules on a standby node reference an IP address that is not currently configured on that node. These rules are still part of the IP filter's active rule set and become effective when the node receives the address after a failover.

    • All filtering rules must be the same for all NICs in the same IPMP group. In other words, if a rule is interface-specific, the same rule must also exist for all other interfaces in the same IPMP group.
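
    For example, a hypothetical /etc/ipf/ipf.conf fragment, kept identical on every node, where 192.168.10.50 stands in for the IP address of a logical hostname or shared address resource:


      # Pass HTTP traffic to the highly available address, block all other inbound traffic to it.
      pass in quick proto tcp from any to 192.168.10.50 port = 80 keep state
      block in quick from any to 192.168.10.50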

  5. Enable the ipfilter SMF service.


    # svcadm enable /network/ipfilter:default
    

Data Services Developer's Guide

This section discusses errors and omissions in the Sun Cluster Data Services Developer’s Guide for Solaris OS.

Support of Certain Scalable Services on Non-Global Zones

In Resource Type Properties in Sun Cluster Data Services Developer’s Guide for Solaris OS, the description of the Failover resource property is missing a statement concerning support of scalable services on non-global zones. This support applies to resources for which the Failover property of the resource type is set to FALSE and the Scalable property of the resource is set to TRUE. This combination of property settings indicates a scalable service that uses a SharedAddress resource to do network load balancing. In the Sun Cluster 3.2 release, you can configure a scalable service of this type in a resource group that runs in a non-global zone. But you cannot configure a scalable service to run in multiple non-global zones on the same node.

Method Timeout Behavior is Changed

A description of the change in the behavior of method timeouts in the Sun Cluster 3.2 release is missing. If an RGM method callback times out, the process is now killed by using the SIGABRT signal instead of the SIGTERM signal. This causes all members of the process group to generate a core file.


Note –

Avoid writing a data-service method that creates a new process group. If your data service method does need to create a new process group, also write a signal handler for the SIGTERM and SIGABRT signals. Write the signal handlers to forward the SIGTERM or SIGABRT signal to the child process group before the signal handler terminates the parent process. This increases the likelihood that all processes that are spawned by the method are properly terminated.
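
If a method must create its own process group, the following minimal ksh sketch shows one way to do the forwarding. It is not shipped agent code: the application path is hypothetical, and the sketch assumes that the started command makes itself a process-group leader, so that its process group ID equals its PID.


    #!/bin/ksh
    # Hypothetical method fragment: start the application in its own process
    # group and forward SIGTERM/SIGABRT to that group before exiting.
    /opt/myapp/bin/start_myapp &     # assumed to create its own process group
    CHILD=$!

    forward() {
        # A negative PID addresses the entire child process group.
        kill -$1 -- -${CHILD} 2>/dev/null
        exit 1
    }
    trap 'forward TERM' TERM
    trap 'forward ABRT' ABRT

    wait ${CHILD}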


CRNP Runs Only in the Global Zone

Chapter 12, Cluster Reconfiguration Notification Protocol, in Sun Cluster Data Services Developer’s Guide for Solaris OS is missing the statement that, on the Solaris 10 OS, the Cluster Reconfiguration Notification Protocol (CRNP) runs only in the global zone.

Required Solaris Software Group Statement is Unclear

In Setting Up the Development Environment for Writing a Data Service in Sun Cluster Data Services Developer’s Guide for Solaris OS, a Note states that the Solaris software group Developer or Entire Distribution is required. This statement applies to the development machine. But because it is positioned after a statement about testing the data service on a cluster, it might be misread as a requirement for the cluster on which the data service runs.

Quorum Server User's Guide

This section discusses errors and omissions in the Sun Cluster Quorum Server User’s Guide.

Supported Software and Hardware Platforms

The following installation requirements and guidelines are missing or unclear:

Man Pages

This section discusses errors, omissions, and additions in the Sun Cluster man pages.

ccp(1M)

The following revised Synopsis and added Options sections of the ccp(1M) man page document the addition of Secure Shell support to the Cluster Control Panel (CCP) utilities:

SYNOPSIS


$CLUSTER_HOME/bin/ccp [-s] [-l username] [-p ssh-port] {clustername | nodename}

OPTIONS

The following options are supported:

-l username

Specifies the user name for the ssh connection. This option is passed to the cconsole, crlogin, or cssh utility when the utility is launched from the CCP. The ctelnet utility ignores this option.

If the -l option is not specified, the user name that launched the CCP is effective.

-p ssh-port

Specifies the Secure Shell port number to use. This option is passed to the cssh utility when the utility is launched from the CCP. The cconsole, crlogin, and ctelnet utilities ignore this option.

If the -p option is not specified, the default port number 22 is used for secure connections.

-s

Specifies using Secure Shell connections to node consoles instead of telnet connections. This option is passed to the cconsole utility when the utility is launched from the CCP. The crlogin, cssh, and ctelnet utilities ignore this option.

If the -s option is not specified, the cconsole utility uses telnet connections to the consoles.

To override the -s option, deselect the Use SSH checkbox in the Options menu of the cconsole graphical user interface (GUI).
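
For example, the following invocation starts the CCP and directs cconsole to use Secure Shell connections; the cluster name schost and the user name admin are hypothetical.


    # $CLUSTER_HOME/bin/ccp -s -l admin schost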

cconsole(1M), crlogin(1M), cssh(1M), and ctelnet(1M)

The following revised Synopsis and added Options sections of the combined cconsole, crlogin, cssh, and ctelnet man page document the addition of Secure Shell support to the Cluster Control Panel utilities:

SYNOPSIS


$CLUSTER_HOME/bin/cconsole [-s] [-l username] [clustername… | nodename…]
$CLUSTER_HOME/bin/crlogin [-l username] [clustername… | nodename…]
$CLUSTER_HOME/bin/cssh [-l username] [-p ssh-port] [clustername… | nodename…]
$CLUSTER_HOME/bin/ctelnet [clustername… | nodename…]

DESCRIPTION

cssh

This utility establishes Secure Shell connections directly to the cluster nodes.

OPTIONS

-l username

Specifies the ssh user name for the remote connections. This option is valid with the cconsole, crlogin, and cssh commands.

The argument value is remembered so that clusters and nodes that are specified later use the same user name when making connections.

If the -l option is not specified, the user name that launched the command is effective.

-p ssh-port

Specifies the Secure Shell port number to use. This option is valid with the cssh command.

If the -p option is not specified, the default port number 22 is used for secure connections.

-s

Specifies using Secure Shell connections instead of telnet connections to node consoles. This option is valid with the cconsole command.

If the -s option is not specified, the utility uses telnet connections to the consoles.

To override the -s option from the cconsole graphical user interface (GUI), deselect the Use SSH checkbox in the Options menu.
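
For example, the following invocations use hypothetical values (cluster name schost, user name admin, Secure Shell port 2222):


    # $CLUSTER_HOME/bin/cconsole -s -l admin schost
    # $CLUSTER_HOME/bin/cssh -l admin -p 2222 schost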

clnode(1CL)

clresource(1CL)

clresourcegroup(1CL)

r_properties(5)

rt_properties(5)

The description of the Failover resource-type property contains an incorrect statement concerning support of scalable services on non-global zones in the Sun Cluster 3.2 release. This applies to resources for which the Failover property of the resource type is set to FALSE and the Scalable property of the resource is set to TRUE. In the Sun Cluster 3.2 release, you can configure such a scalable service in a resource group that runs in a non-global zone, but you cannot configure the scalable service to run in multiple non-global zones on the same node.

serialports(4)

The following information is an addition to the Description section of the serialports(4) man page:

To support Secure Shell connections to node consoles, specify in the /etc/serialports file the name of the console-access device and the Secure Shell port number for each node. If you use the default Secure Shell configuration on the console-access device, specify port number 22.
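
As an illustration, /etc/serialports entries for two nodes that reach their consoles through a console-access device over the default Secure Shell port might look like the following; the node names and the console-access device name schost-cad are hypothetical.


    phys-schost-1 schost-cad 22
    phys-schost-2 schost-cad 22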

SUNW.Event(5)

The SUNW.Event(5) man page is missing the statement that, on the Solaris 10 OS, the Cluster Reconfiguration Notification Protocol (CRNP) runs only in the global zone.