Sun Cluster 3.0 12/01 Release Notes

Chapter 1 Sun Cluster 3.0 12/01 Release Notes

This document provides the following information for Sun™ Cluster 3.0 12/01 (Update 2) software.

The appendices to this document include installation planning worksheets and examples for planning the Sun Cluster 3.0 12/01 software and data services installation.

New Features and Functionality

The following table lists new features and functionality that require updates to the Sun Cluster documentation. The second column identifies the documentation that was updated. Contact your Sun sales representative for the complete list of supported hardware and software.

Table 1-1 New Features and Functionality

Feature or Functionality 

Documentation Updates 

Enhancements to installation 

The Sun Cluster 3.0 12/01 Software Installation Guide was updated to include new functionality added to the scinstall(1M) and scsetup(1M) commands.

  • During Sun Cluster software installation, sccheck checks and validates the node to ensure that it meets the basic configuration requirements for functioning in a Sun Cluster configuration. The sccheck(1M) man page has also been updated to reflect this new functionality.

  • The interactive scinstall installation method now also provides optional autodiscovery of installed cluster transport adapters.

Support for the Remote Shared Memory Application Programming Interface (RSMAPI) 

The Sun Cluster 3.0 12/01 Software Installation Guide was updated with steps to install the software packages required to support the RSMAPI in a Sun Cluster configuration.

Dynamic reconfiguration support 

A new section was added to the Sun Cluster 3.0 12/01 Concepts that describes the initial phase of Sun Cluster 3.0 support for the dynamic reconfiguration feature. Considerations and manual actions required by the user for this phase are described.

PCI-SCI interconnect support 

The Sun Cluster 3.0 12/01 Hardware Guide chapter on interconnect hardware was updated to include sample cabling diagrams, considerations, and troubleshooting to support the use of PCI-SCI interconnect hardware in a cluster. The Sun Cluster 3.0 12/01 Software Installation Guide was also updated with steps to install PCI-SCI software packages.

Storage Area Network (SAN) support 

The Sun Cluster 3.0 12/01 Hardware Guide was updated with SAN information, including sample cabling diagrams, supported SAN features, and considerations in each of the four chapters for the storage arrays on which the SAN functionality is supported. The arrays that support SANs are the Sun StorEdge A5200 array, the Sun StorEdge A3500FC array, and the Sun StorEdge T3/T3+ arrays in single-controller configuration and in partner-group configuration.

Sun StorEdge T3+ qualification 

The two Sun Cluster 3.0 12/01 Hardware Guide chapters for Sun StorEdge T3 arrays in single-controller configuration and in partner-group configuration were updated to accommodate differences for the Sun StorEdge T3+ arrays. Some procedures in both chapters were also modified to incorporate improvements identified during testing.

Sun Netra D130 and Sun StorEdge S1 qualification 

The Sun Cluster 3.0 12/01 Hardware Guide was updated with a new chapter that describes procedures for the Sun Netra D130 and Sun StorEdge S1 storage enclosures.

Support for VERITAS File System (VxFS) 

The Sun Cluster 3.0 12/01 Software Installation Guide and the Sun Cluster 3.0 12/01 System Administration Guide were updated to include instructions to create VxFS cluster file systems. See Guidelines to Support VxFS for more information.

Sun Cluster HA for BroadVision One-To-One Enterprise qualification 

The Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide was updated with a new chapter required to support Sun Cluster HA for BroadVision One-To-One Enterprise. This data service uses fault monitoring and automatic failover to eliminate single points of failure in a BroadVision One-To-One Enterprise installation.

Support for Sun Cluster HA for Oracle on Oracle 9i 

The Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide was updated with new procedures required to support Sun Cluster HA for Oracle on Oracle 9i.

Support for Sun Cluster Security Hardening 

The Sun Cluster Security Hardening documentation is available at http://www.sun.com/security/blueprints. From this URL, scroll down to the Architecture heading to locate the article on Sun Cluster Security Hardening. See Sun Cluster Security Hardening for more information.

Notes on New Features and Functionality

This section includes additional information on new features and functionality.

Sun Cluster Security Hardening

Sun Cluster Security Hardening uses the Solaris Operating Environment hardening techniques recommended by the Sun BluePrints program to achieve basic security hardening for clusters. The Solaris Security Toolkit automates the implementation of Sun Cluster Security Hardening. Sun Cluster Security Hardening supports the following three agents.

The Sun Cluster Security Hardening documentation is available at http://www.sun.com/security/blueprints. From this URL, scroll down to the Architecture heading to locate the article on Sun Cluster Security Hardening.

Guidelines to Support VxFS

The following VxFS features are not supported in a Sun Cluster 3.0 configuration.

All other VxFS features and options that are supported in a cluster configuration are supported by Sun Cluster 3.0 software. See VxFS documentation and man pages for details about VxFS options that are or are not supported in a cluster configuration.

The following guidelines for how to use VxFS to create highly available cluster file systems are specific to a Sun Cluster 3.0 configuration.

The following guidelines for how to administer VxFS cluster file systems are not specific to Sun Cluster 3.0 software. However, they are different from the way you administer UFS cluster file systems.

Supported Products

This section describes the supported software and memory requirements for Sun Cluster 3.0 12/01 software.

Features Nearing End of Life

Public Network Management (PNM) will not be supported in the next Sun Cluster feature release. Network adapter monitoring and failover for Sun Cluster will instead be performed by Solaris IP Multipathing.

Public Network Management (PNM)

Use PNM to configure and administer network interface card monitoring and failover. However, the user interfaces to the PNM daemon and the PNM administration commands are obsolete and will be removed in the next Sun Cluster feature release. Users are strongly discouraged from developing tools that rely on these interfaces. The following interfaces are officially supported in the current release, but are expected to be removed in the next Sun Cluster feature release.
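For example, pnmstat(1M), one of these administration commands, reports the status of configured NAFO groups. A typical invocation (shown as a sketch; see the pnmstat(1M) man page for options):


# pnmstat -l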

To prepare for the transition to IP Multipathing in the next Sun Cluster feature release, consider the following issues.

Sun Cluster AnswerBooks Installation

The Sun Cluster 3.0 12/01 user documentation is available online in AnswerBook2™ format for use with AnswerBook2 documentation servers. The Sun Cluster 3.0 12/01 AnswerBook2 documentation set consists of the following collections.

Setting Up the AnswerBook2 Documentation Server

The Solaris operating environment release includes AnswerBook2 documentation server software. The Solaris documentation CD-ROM, which is separate from the Solaris operating environment CD-ROM, includes the documentation server software. You need the Solaris documentation CD-ROM to install an AnswerBook2 documentation server.

If you have already installed an AnswerBook2 documentation server at your site, you can use that server for the Sun Cluster 3.0 12/01 AnswerBooks. Otherwise, install a documentation server on a machine at your site. We recommend that you install the documentation server on the administrative console, the machine you use as the administrative interface to your cluster. Do not use a cluster node as your AnswerBook2 documentation server.

For information on installing an AnswerBook2 documentation server, load the Solaris documentation CD-ROM on a server, and view the README files.

Viewing Sun Cluster AnswerBooks

Install the Sun Cluster AnswerBook2 documents on a file system on the same server on which you install the documentation server. The Sun Cluster 3.0 12/01 AnswerBooks include a post-installation script that automatically adds the documents to your existing AnswerBook library.

To set up your AnswerBook2 server, use the following procedure.

How to Install the Sun Cluster AnswerBooks

Use this procedure to install the Sun Cluster AnswerBook packages for the Sun Cluster 3.0 12/01 Collection and Sun Cluster 3.0 12/01 Data Services Collection.

  1. Become superuser on the server that has an AnswerBook2 documentation server.

  2. If you have previously installed the Sun Cluster AnswerBooks, remove the old packages.


    # pkgrm SUNWscfab SUNWscdab
    

    If you have never installed Sun Cluster AnswerBooks, ignore this step.

  3. Insert the Sun Cluster 3.0 12/01 CD-ROM or Sun Cluster 3.0 Agents 12/01 CD-ROM into a CD-ROM drive attached to your documentation server.

    The Volume Management daemon, vold(1M), mounts the CD-ROM automatically.

  4. Change directory to the CD-ROM location that contains the Sun Cluster AnswerBook package.

    The AnswerBook packages reside at the following locations.

    • Sun Cluster 3.0 12/01 CD-ROM

      /cdrom/suncluster_3_0_u2/SunCluster_3.0/Packages

    • Sun Cluster 3.0 Agents 12/01 CD-ROM

      /cdrom/scdataservices_3_0_u2/components/SunCluster_Data_Service_Answer_Book_3.0/Packages

  5. Use the pkgadd(1M) command to install the package.


    # pkgadd -d .
    
  6. Select the Sun Cluster 3.0 12/01 Collection (SUNWscfab) and the Sun Cluster 3.0 12/01 Data Services Collection (SUNWscdab) packages to install.

  7. From the pkgadd installation options menu, choose heavy to add the complete package to the system and update the AnswerBook2 catalog.

    Make this choice for each collection that you selected in Step 6, the Sun Cluster 3.0 12/01 Collection (SUNWscfab) and the Sun Cluster 3.0 12/01 Data Services Collection (SUNWscdab).

The document collection package on each CD-ROM includes a post-installation script that adds the collection to the documentation server's database and restarts the server. You can now view the Sun Cluster AnswerBooks from your documentation server.

PDF Files

The Sun Cluster CD-ROMs include a PDF file for each book in the Sun Cluster documentation set.

As with the Sun Cluster AnswerBooks, six PDF files reside on the Sun Cluster CD-ROM and one PDF file resides on the Agents CD-ROM. The PDF file names are abbreviations of the book titles (see Table 1-3).

The PDF files reside at the following locations.

Table 1-3 Mapping of PDF Abbreviations to Book Titles

Sun Cluster 3.0 12/01 CD-ROM:

  CLUSTINSTALL     Sun Cluster 3.0 12/01 Software Installation Guide
  CLUSTNETHW       Sun Cluster 3.0 12/01 Hardware Guide
  CLUSTAPIPG       Sun Cluster 3.0 12/01 Data Services Developer's Guide
  CLUSTSYSADMIN    Sun Cluster 3.0 12/01 System Administration Guide
  CLUSTCONCEPTS    Sun Cluster 3.0 12/01 Concepts
  CLUSTERRMSG      Sun Cluster 3.0 12/01 Error Messages Manual

Sun Cluster 3.0 Agents 12/01 CD-ROM:

  CLUSTDATASVC     Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide

Restrictions

The following restrictions apply to the Sun Cluster 3.0 12/01 release:

Patches and Required Firmware Levels

This section provides information about patches for Sun Cluster configurations.

PatchPro

Sun Cluster software is an early adopter of PatchPro, a state-of-the-art patch-management solution from Sun. This new tool is intended to dramatically ease the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides a Sun Cluster-specific Interactive Mode tool to make the installation of patches easier and an Expert Mode tool to maintain your configuration with the latest set of patches. Expert Mode is especially useful for those who want to get all of the latest patches, not just the high availability and security patches.


Note -

You must have a registered SunSolve account to view and download the required patches for the Sun Cluster product. If you do not have a registered account, contact your Sun service representative or sales engineer, or register through the SunSolve Online Web site.


To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click on "Sun Cluster," then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

SunSolve Online

The SunSolve Online℠ Web site provides 24-hour access to the most up-to-date information regarding patches, software, and firmware for Sun products. Access the SunSolve Online site at http://sunsolve.sun.com for the most current matrixes of supported software, firmware, and patch revisions.


Note -

You must have a registered SunSolve account to view and download the required patches for the Sun Cluster product. If you do not have a registered account, contact your Sun service representative or sales engineer, or register through the SunSolve Online Web site.


You can find Sun Cluster 3.0 patch information by using the SunSolve EarlyNotifier℠ Service. To view the EarlyNotifier information, log in to SunSolve and access the Simple Search selection from the top of the main page. From the Simple Search page, click the EarlyNotifier box and type Sun Cluster 3.0 in the search criteria box. This brings up the EarlyNotifier page for Sun Cluster 3.0 software.

Before you install Sun Cluster 3.0 software and apply patches to a cluster component (Solaris operating system, Sun Cluster software, volume manager or data services software, or disk hardware), review the EarlyNotifier information and any README files that accompany the patches. All cluster nodes must have the same patch level for proper cluster operation.
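One simple way to verify that all nodes have the same patch level is to capture each node's patch list with showrev(1M) and compare the lists between nodes. A minimal sketch (the file names and node names are illustrative):


# showrev -p > /var/tmp/patches.`uname -n`
# diff /var/tmp/patches.phys-node1 /var/tmp/patches.phys-node2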

For specific patch procedures and tips on administering patches, see the Sun Cluster 3.0 12/01 System Administration Guide.

Required SAP Patches for Sun Cluster HA for SAP

The latest patch for the executable sapstart (see OSS note 0396321) protects you from multiple startups of SAP instances when an instance is already active on one node. The patch is important because duplicate SAP instances crash the instance that was already active. Furthermore, the crash prevents the SAP shutdown scripts from cleanly shutting down the SAP instances, which might cause data corruption.

To overcome this problem, install the latest patch for the sapstart executable, and configure the new parameter in the SAP startup profile for the application server and central instance.

For example, edit the profile SC3_DVEBMGS00 (the profile for the central instance) to add the new SAP parameter, sapstart/lockfile.


sapstart/lockfile = /usr/sap/SC3/DVEBMGS00/work/startup_lockfile

sapstart/lockfile

New parameter name.

/usr/sap/SC3/DVEBMGS00/work

Work directory for the central instance.

startup_lockfile

Lock file name that Sun Cluster HA for SAP uses.


Note -

You must locate the lock file path on the cluster file system. If you locate the lock file path locally on the nodes, startups of the same instance from different nodes cannot be prevented.


Even if you configure the lock file in the SAP profile, you do not need to create the lock file manually. The Sun Cluster HA for SAP data service creates it for you.

With this configuration, when you start the SAP instance, the SAP software locks the file startup_lockfile. If you start up the SAP instance outside of the Sun Cluster environment and then try to bring up SAP under the Sun Cluster environment, the Sun Cluster HA for SAP data service will attempt to start up the instance. However, because of the file-locking mechanism, this attempt will fail. The data service will log appropriate error messages in syslog.

SunPlex Agent Builder License Terms

SunPlex Agent Builder includes the following license terms.

Redistributables: The files in the directory /usr/cluster/lib/scdsbuilder/src are redistributable and subject to the terms and conditions of the Binary Code License Agreement and Supplemental Terms.

For more information on license terms, see the Binary Code License Agreement and Supplemental Terms that accompanies the Sun Cluster 3.0 media kit.

Sun Management Center Software Upgrade

This section describes how to upgrade from Sun Management Center 2.1.1 to Sun Management Center 3.0 software on a Sun Cluster 3.0 12/01 configuration.

How to Upgrade Sun Management Center Software

Perform this procedure to upgrade from Sun Management Center 2.1.1 to Sun Management Center 3.0 software on a Sun Cluster 3.0 12/01 configuration.

  1. Have available the following items.

    • Sun Cluster 3.0 12/01 CD-ROM or the path to the CD-ROM image. You will use the CD-ROM to reinstall the Sun Cluster module packages after you upgrade Sun Management Center software.

    • Sun Management Center 3.0 documentation.

    • Sun Management Center 3.0 patches and Sun Cluster module patches, if any. See Patches and Required Firmware Levels for the location of patches and installation instructions.

  2. Stop any Sun Management Center processes.

    1. If the Sun Management Center console is running, exit the console.

      In the console window, select File>Exit from the menu bar.

    2. On each Sun Management Center agent machine (cluster node), stop the Sun Management Center agent process.


      # /opt/SUNWsymon/sbin/es-stop -a
      

    3. On the Sun Management Center server machine, stop the Sun Management Center server process.


      # /opt/SUNWsymon/sbin/es-stop -S
      

  3. As superuser, remove Sun Cluster module packages from the locations listed in Table 1-4.

    You must remove all Sun Cluster module packages from all locations. Otherwise, the Sun Management Center software upgrade might fail because of package dependency problems. After you upgrade Sun Management Center software, you will reinstall these packages in Step 5.


    # pkgrm module-package
    

    Table 1-4 Locations to Remove Sun Cluster Module Packages

    Location                                     Package to Remove
    Each cluster node                            SUNWscsam, SUNWscsal
    Sun Management Center console machine        SUNWscscn
    Sun Management Center server machine         SUNWscssv
    Sun Management Center help server machine    SUNWscshl
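    For example, on each cluster node (the same form applies to the other locations in Table 1-4):


    # pkgrm SUNWscsam SUNWscsal
    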

  4. Upgrade to Sun Management Center 3.0 software.

    Follow the upgrade procedures in your Sun Management Center 3.0 documentation.

  5. As superuser, reinstall Sun Cluster module packages to the locations listed in Table 1-5.

    For Sun Management Center 3.0 software, you install the help server package SUNWscshl on the console machine as well as on the help server machine.


    # cd /cdrom/suncluster_3_0_u2/SunCluster_3.0/Packages
    # pkgadd module-package
    

    Table 1-5 Locations to Install Sun Cluster Module Packages

    Location                                     Package to Install
    Each cluster node                            SUNWscsam, SUNWscsal
    Sun Management Center console machine        SUNWscscn, SUNWscshl
    Sun Management Center server machine         SUNWscssv
    Sun Management Center help server machine    SUNWscshl

  6. Apply any Sun Management Center patches and any Sun Cluster module patches to each node of the cluster.

  7. Restart Sun Management Center agent, server, and console processes on all involved machines.

    Follow procedures in "How to Start Sun Management Center" in the Sun Cluster 3.0 12/01 Software Installation Guide.
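    If that guide is not at hand, the start commands generally take the following form, run on the Sun Management Center server machine, on each cluster node (agent), and on the console machine, respectively. Verify the exact options against your Sun Management Center documentation.


    # /opt/SUNWsymon/sbin/es-start -S
    # /opt/SUNWsymon/sbin/es-start -a
    # /opt/SUNWsymon/sbin/es-start -c
    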

  8. Load the Sun Cluster module.

    Follow procedures in "How to Load the Sun Cluster Module" in the Sun Cluster 3.0 12/01 Software Installation Guide.

    If the Sun Cluster module was previously loaded, unload the module and then reload it to clear all cached alarm definitions on the server. To unload the module, from the console's Details window select Module>Unload Module.

Known Problems

The following known problems affect the operation of the Sun Cluster 3.0 12/01 release. For the most current information, see the online Sun Cluster 3.0 12/01 Release Notes Supplement at http://docs.sun.com.

BugId 4419214

Problem Summary: The /etc/mnttab file does not show the most current largefile status of a globally mounted VxFS file system.

Workaround: Use the fsadm command, rather than using the /etc/mnttab entry, to verify file system largefile status.
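For example, for a VxFS file system mounted at /global/fs1 (the mount point is illustrative), the following invocation reports the current largefiles setting:


# fsadm -F vxfs /global/fs1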

BugId 4449437

Problem Summary: Global VxFS appears to allocate more disk blocks for the same file size than local VxFS. You can observe this by using the ls -ls command.

Workaround: Unmount and remount the file system, which eliminates the extra disk blocks reported as allocated.

BugId 4490386

Problem Summary: Sun Enterprise 10000 servers in a cluster have been observed to panic when a certain configuration of I/O cards is used.

Workaround: Do not install UDWIS I/O cards in slot 0 of an SBus I/O board in Sun Enterprise 10000 servers in a cluster.

BugId 4492010

Problem Summary: In an N-node cluster configured with N Interaction Managers, if you bring down or halt the cluster node that runs an Interaction Manager (IM) that serves a client, the client loses its session. Subsequent retries by the same client to reconnect to a different IM take a long time. This is an issue with the BroadVision product, and BroadVision engineers are working to resolve it. BroadVision does not support IM session failover.

Workaround: From a Netscape browser, click the Stop/Reload button, and then click the Start Broadway Application button. The connection to the BroadVision server should respond immediately. This workaround usually works for new connections after the IM node is halted. It is less likely to work if you perform it before halting the IM node. If this workaround does not work, clear the disk cache and memory cache in Netscape.

BugId 4493025

Problem Summary: In a two-node cluster, if you switch oracle-rg from Node 1 to Node 2, BroadVision One-To-One tries three times before it successfully registers a new user. The first try displays Fail to create new user. The second try displays copyright information. The third try succeeds. This problem occurs in any N-node cluster that runs a failover Oracle database, either within the cluster or outside the cluster, and in a two-node cluster where Node 1 is the primary for http, oracle, roothost, backend, and backend2 and where the Interaction Manager (IM) runs as a scalable service.

The problem is that the new user's name is not displayed on the welcome page after login. This is a known issue with BroadVision One-To-One; a bug is filed against BroadVision One-To-One to fix this problem: BVNqa20753.

Workaround: There is no workaround. The user will be created after three attempts.

BugId 4494165

Problem Summary: VERITAS File System patch 110435-05 changes the default logging option for mount_vxfs from the log option to the delaylog option. Logging is necessary for VxFS support on Sun Cluster.

Workaround: Manually add the log option to the VxFS options list in the vfstab file.
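The resulting vfstab entry for a global VxFS file system might look similar to the following (the device and mount-point names are illustrative):


/dev/vx/dsk/datadg/vol01 /dev/vx/rdsk/datadg/vol01 /global/fs1 vxfs 2 yes global,log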

BugId 4499573

Problem Summary: When data services that are I/O intensive are configured on a large number of disks in the cluster, the data services might time out because of retries within the I/O subsystem during disk failures.

Workaround: Increase the value of the Probe_timeout resource extension property for your data service. If you need help determining the timeout value, contact your service representative.


# scrgadm -c -j resource -x Probe_timeout=timeout_value

BugId 4501655

Problem Summary: Record locking does not work correctly when the device to be locked is a global device (/dev/global/rdsk/d4s0). Record locking works correctly when the program runs multiple times in the background on one specified node: after the first copy of the program locks a portion of the device, other copies of the program block, waiting for the device to be unlocked. However, when the program runs from a node other than the specified node, the program locks the device again when it should instead block, waiting for the device to be unlocked.

Workaround: There is no workaround.

BugId 4504311

Problem Summary: When a Sun Cluster configuration is upgraded to Solaris 8 10/01 software (required for the Sun Cluster 3.0 12/01 upgrade), the Apache start and stop scripts are restored. If an Apache data service is already present on the cluster and configured in its default configuration (the /etc/apache/httpd.conf file exists and the /etc/rc3.d/S50apache file does not exist), Apache starts on its own. This prevents the Apache data service from starting, because Apache is already running.

Workaround: Do the following for each node.

  1. Before you shut down a node to upgrade it, determine whether the following links already exist, and if so, whether the file names contain an uppercase K or S.


    /etc/rc0.d/K16apache
    /etc/rc1.d/K16apache
    /etc/rc2.d/K16apache
    /etc/rc3.d/S50apache
    /etc/rcS.d/K16apache

    If these links already exist and contain an uppercase K or S in the file name, no further action is necessary. Otherwise, perform the action in the next step after you upgrade the node to Solaris 8 10/01 software.
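    One quick way to list any such links, for example:


    # ls -l /etc/rc?.d/*apache
    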

  2. After the node is upgraded to Solaris 8 10/01 software, but before you reboot the node, move aside the restored Apache links by renaming the files with a lowercase k or s.


    # mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache
    # mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
    # mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
    # mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
    # mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache
    

BugId 4504385

Problem Summary: If you use interactive scinstall(1M), which provides the cluster transport autodiscovery features, you might see the following error message during the probe:


scrconf:  /dev/clone: No such file or directory

This error message might result in the probe aborting and autodiscovery failing. The device in question might not be a network adapter; for example, it might be /dev/llc20. If you encounter this problem, ask your service representative to update the bug report with additional information that might be useful in reproducing the problem.

Workaround: Reboot the node, and then retry scinstall. If this does not solve the problem, select the non-autodiscovery options of scinstall.

BugId 4505391

Problem Summary: When you upgrade the Sun Cluster software from Sun Cluster 2.2 to Sun Cluster 3.0 12/01 using the scinstall -u begin -F command, the scinstall command fails to remove patches with dependencies and aborts with the following messages:


scinstall:  Failed to remove patch-id.rev
scinstall:  scinstall did NOT complete successfully!

A patch dependency is the cause of this failure.

Workaround: Manually back out the patch dependencies, then restart the upgrade process. Use the log file to identify the patch dependencies that caused the script to fail. You can also use the showrev command to identify patch dependencies.


showrev -p | grep patch-id

BugId 4509832

Problem Summary: If a Cluster Configuration Repository (CCR) table is invalid in a cluster, it is neither readable nor writable. Running the ccradm -r -f command on the invalid CCR table should make it both readable and writable. However, after you run the ccradm -r -f command, the CCR table is still not writable.

Workaround: Reboot the entire cluster.

BugId 4511478

Problem Summary: When interactive scinstall(1M) runs a second time against the same JumpStart directory to set up a JumpStart server for installing a cluster, the cluster name and the JumpStart directory name might disappear. In the scinstall command line that this process invokes, both of these names are missing.

Workaround: From your JumpStart directory, remove the .interactive.log.3 file, and then rerun scinstall.

BugId 4515780

Problem Summary: NLS files for Oracle 9.0.1 are not backward compatible with Oracle 8.1.6 and 8.1.7 software. Patch 110651-04 has been declared bad.

Workaround: Back out Patch 110651-04 and replace it with 110651-02.

BugId 4517304

Problem Summary: If syslogd dies and you cannot restart it on a cluster node (for example, as a result of BugId 4477565), rgmd can hang on one or more nodes. This in turn causes other commands such as scstat(1M) -g, scswitch(1M) -g, scrgadm(1M), and scha_*_get(1HA,3HA) to hang, and prevents resource group failovers from succeeding.

Workaround: Edit the /etc/init.d/syslog script to insert a line that removes the symbolic link /etc/.syslog_door before the command that starts /usr/sbin/syslogd. The inserted line:


rm -f /etc/.syslog_door
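After the edit, the relevant portion of the script might look similar to the following. The exact line that starts syslogd varies by Solaris revision, so the start line shown here is illustrative.


rm -f /etc/.syslog_door
/usr/sbin/syslogd >/dev/msglog 2>&1 &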

BugId 4517875

Problem Summary: After you install the RSM (Remote Shared Memory) packages and the SUNWscrif package (the RSMAPI Path Manager package), some of the paths that RSMAPI uses fail to come up to the RSM_CONNECTION_ACTIVE state. If you dump the topology structure by using rsm_get_interconnect_topology(3RSM), the state of each path is shown; the state constants are defined in rsmapi.h.


Caution -

Perform the following workaround on each path one at a time so that you do not isolate the node from the cluster.


Workaround: Run the following commands on any node of the cluster to bring up the paths that are in a state other than RSM_CONNECTION_ACTIVE (3).


# scconf -c -m endpoint=node:adpname,state=disabled
# scconf -c -m endpoint=node:adpname,state=enabled

node:adpname

An endpoint on the path that is experiencing this problem.

BugId 4522648

Problem Summary: As of the VxVM 3.1.1 release, the man-page path has changed to /opt/VRTS/man. In previous releases the man-page path was /opt/VRTSvxvm/man. This new path is not documented in the Sun Cluster 3.0 12/01 Software Installation Guide.

Workaround: For VxVM 3.1.1 and later, add /opt/VRTS/man to the MANPATH on each node of the cluster.
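For example, in the Bourne shell (add the equivalent line to each user's shell startup file as appropriate):


# MANPATH=$MANPATH:/opt/VRTS/man; export MANPATH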

Known Documentation Problems

This section discusses known errors or omissions for documentation and online help and steps to correct these problems.

Sun Cluster HA for Oracle Packages

The introductory paragraph to "Installing Sun Cluster HA for Oracle Packages" in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide does not discuss the additional package needed by clusters that run Sun Cluster HA for Oracle with 64-bit Oracle. The following section corrects that introductory paragraph.

Installing Sun Cluster HA for Oracle Packages

Depending on your configuration, use the scinstall(1M) utility to install one or both of the following packages on your cluster. Do not use the -s option to non-interactive scinstall to install all of the data service packages.


Note -

SUNWscor is the prerequisite package for SUNWscorx.


If you installed the SUNWscor data service package as part of your initial Sun Cluster installation, proceed to "Registering and Configuring Sun Cluster HA for Oracle" on page 30. Otherwise, use the following procedure to install the SUNWscor and SUNWscorx packages.

Apache Packages Required for All Sun Cluster Software Installation Methods

SunPlex Manager software requires that Apache software packages already be installed on the node before you install SunPlex Manager. This is true whether you install SunPlex Manager manually or whether it is installed automatically by the interactive scinstall(1M) method or the scinstall JumpStart method. If the Apache software is not installed before SunPlex Manager is installed, you will see a message similar to the following.


NOTICE: To finish installing the SunPlex Manager, you must install the SUNWapchr and SUNWapchu Solaris packages and any associated patches. Then run '/etc/init.d/initspm start' to start the server.

The Sun Cluster 3.0 12/01 Software Installation Guide procedure "How to Install SunPlex Manager Software" includes a step to ensure that Apache software packages are first installed. However, the procedures "How to Install Sun Cluster Software on the First Cluster Node (scinstall)," "How to Install Sun Cluster Software on Additional Cluster Nodes (scinstall)," and "How to Install Solaris and Sun Cluster Software (JumpStart)" do not include this step.

If you intend to use SunPlex Manager and you use either the interactive scinstall(1M) method or the scinstall JumpStart method to install Sun Cluster software, ensure that Apache software packages are installed on a node before you begin Sun Cluster software installation. See Step 3 of "How to Install SunPlex Manager Software" in the Sun Cluster 3.0 12/01 Software Installation Guide for instructions.

New Man Page Path for VxVM

The Sun Cluster 3.0 12/01 Software Installation Guide omits the new man page path for later releases of VERITAS Volume Manager (VxVM). The MANPATH currently documented, /opt/VRTSvxvm/man, is valid for VxVM 3.0.4 and 3.1. For VxVM 3.1.1 and 3.2, use /opt/VRTS/man for the MANPATH.

Generic Data Service Package Installation During Upgrade

Instructions to install the Sun Cluster 3.0 generic data service package, SUNWscgds, are missing from the upgrade procedures in the Sun Cluster 3.0 12/01 Software Installation Guide. This package is not installed automatically by the scinstall(1M) upgrade options. After you upgrade Sun Cluster software, use the pkgadd(1M) command to install the SUNWscgds package from the Sun Cluster 3.0 12/01 CD-ROM. You do not need to reboot the node after you install this package.
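For example, assuming the CD-ROM is mounted at the location used elsewhere in these notes:


# cd /cdrom/suncluster_3_0_u2/SunCluster_3.0/Packages
# pkgadd -d . SUNWscgds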

VERITAS File System (VxFS) Commands to Create a VxFS File System

In the procedure "How to Add Cluster File Systems" in the Sun Cluster 3.0 12/01 Software Installation Guide and in the Sun Cluster 3.0 12/01 System Administration Guide, the step to use the newfs(1M) command to create a new file system is only valid for UFS file systems. To create a new VxFS file system, follow procedures provided in your VxFS documentation.

How to Create More Than Three Disksets in a Cluster

If you intend to create more than three disksets in the cluster, perform the following steps before you create the disksets. Follow these steps regardless of whether you are installing disksets for the first time or you are adding more disksets to a fully configured cluster.

  1. Ensure that the value of the md_nsets variable is set high enough to accommodate the total number of disksets you intend to create in the cluster.

    1. On any node of the cluster, check the value of the md_nsets variable in the /kernel/drv/md.conf file.

    2. If the total number of disksets that you intend to create in the cluster is greater than the existing value of md_nsets minus one, increase the value of md_nsets on each node to the desired value.

      The maximum permissible number of disksets is one less than the value of md_nsets. The maximum possible value of md_nsets is 32.
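      For example, an md.conf entry that permits up to seven disksets might look similar to the following (the nmd value shown is illustrative):


      name="md" parent="pseudo" nmd=128 md_nsets=8;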

    3. Ensure that the /kernel/drv/md.conf file is identical on each node of the cluster.


      Caution -

      Failure to follow this guideline can result in serious Solstice DiskSuite errors and possible loss of data.


    4. From one node, shut down the cluster.


      # scshutdown -g0 -y
      

    5. Reboot each node of the cluster.


      ok boot
      

  2. On each node in the cluster, run the devfsadm(1M) command.

    You can run this command on all nodes in the cluster at the same time.

  3. From one node of the cluster, run the scgdevs(1M) command.

  4. On each node, verify that the scgdevs command has completed before you attempt to create any disksets.

    The scgdevs command calls itself remotely on all nodes, even when the command is run from just one node. To determine whether the scgdevs command has completed processing, run the following command on each node of the cluster.


    % ps -ef | grep scgdevs
    

SunPlex Manager Online Help Correction

A note in the SunPlex Manager's online help is inaccurate. The note appears in the Oracle data service installation procedure. The correction is as follows.

Incorrect:

Note: If no entries exist for the shmsys and semsys variables in the /etc/system file when SunPlex Manager packages are installed, default values for these variables are automatically put in the /etc/system file. The system must then be rebooted. Check Oracle installation documentation to verify that these values are appropriate for your database.

Correct:

Note: If no entries exist for the shmsys and semsys variables in the /etc/system file when you install the Oracle data service, default values for these variables can be automatically put in the /etc/system file. The system must then be rebooted. Check Oracle installation documentation to verify that these values are appropriate for your database.