Sun Cluster 3.0 5/02 Release Notes

Chapter 1 Sun Cluster 3.0 5/02 Release Notes

This document provides the following information for Sun™ Cluster 3.0 5/02 software.

The appendices to this document include installation planning worksheets and examples for planning the Sun Cluster 3.0 5/02 software and data services installation.

New Features and Functionality

The following table lists new features and functionality that require updates to the Sun Cluster documentation. The second column identifies the documentation that was updated. Contact your Sun sales representative for the complete list of supported hardware and software.

Table 1–1 New Features and Functionality

Feature or Functionality 

Documentation Updates 

HAStoragePlus 

The Sun Cluster 3.0 5/02 Supplement contains updates to the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide and the Sun Cluster 3.0 12/01 Data Services Developer's Guide to support the HAStoragePlus resource type. The HAStoragePlus resource type can be used to make a local file system highly available within a Sun Cluster environment. The Sun Cluster 3.0 5/02 Error Messages Guide documents new HAStoragePlus error messages.

Prioritized Service Management (RGOffload) 

The Sun Cluster 3.0 5/02 Supplement contains new procedures and updates to the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide to support the RGOffload resource type. RGOffload allows your cluster to automatically free a node's resources for critical data services by off-loading resource groups containing non-critical data services. The Sun Cluster 3.0 5/02 Error Messages Guide documents new RGOffload error messages.

Sun Cluster Security Hardening support for additional data services 

The Sun Cluster Security Hardening documentation is available at http://www.sun.com/security/blueprints. From this URL, scroll down to the Architecture heading to locate the article on Sun Cluster Security Hardening. See Sun Cluster Security Hardening for more information.

SunPlex Agent Builder enhancements 

The Sun Cluster 3.0 5/02 Supplement contains updates to the Sun Cluster 3.0 12/01 Data Services Developer's Guide to support creation of a generic data service (GDS), a single pre-compiled data service, by using SunPlex Agent Builder.

Uninstalling Sun Cluster software 

The Sun Cluster 3.0 5/02 Supplement contains new cluster-software uninstallation procedures and updates to related procedures in the Sun Cluster 3.0 12/01 Software Installation Guide and the Sun Cluster 3.0 12/01 System Administration Guide. The new -r option to scinstall(1M) removes Sun Cluster software from a node.

Upgrade to Sun Cluster 3.0 5/02 software from any previous release of Sun Cluster 3.0 software 

Follow procedures in “Upgrading to a Sun Cluster 3.0 Software Update Release” in the Sun Cluster 3.0 12/01 Software Installation Guide to upgrade from any previous release of Sun Cluster 3.0 software. See Upgrading to a Sun Cluster 3.0 Software Update Release for corrections to the Solaris 8 upgrade instructions.

Notes on New Features and Functionality

This section includes additional information on new features and functionality.

Sun Cluster Security Hardening

Sun Cluster Security Hardening uses the Solaris Operating Environment hardening techniques recommended by the Sun BluePrints™ program to achieve basic security hardening for clusters. The Solaris Security Toolkit automates the implementation of Sun Cluster Security Hardening.

The Sun Cluster Security Hardening documentation is available at http://www.sun.com/security/blueprints. From this URL, scroll down to the Architecture heading to locate the article “Securing the Sun Cluster 3.0 Software.” This document describes how to secure Sun Cluster 3.0 deployments in a Solaris 8 environment. This description includes the use of the Solaris Security Toolkit and other best-practice security techniques recommended by Sun security experts.

Sun Cluster Security Hardening supports all Sun Cluster 3.0 5/02 data services listed in the table below, in a Solaris 8 environment.


Note –

Sun Cluster Security Hardening supports all Sun Cluster 3.0 5/02 data services on Solaris 8 only. Security hardening is not available for Sun Cluster 3.0 5/02 on Solaris 9.


Table 1–2 Data Services Supported by Sun Cluster Security Hardening

Data Service Agent                                                          Application Version: Failover    Application Version: Scalable
Sun Cluster HA for iPlanet Messaging Server                                 6.0                              4.1
Sun Cluster HA for iPlanet Web Server                                       6.0                              4.1
Sun Cluster HA for Apache                                                   1.3.9                            1.3.9
Sun Cluster HA for SAP                                                      4.6D (32 and 64 bit)             4.6D (32 and 64 bit)
Sun Cluster HA for iPlanet Directory Server                                 4.12                             N/A
Sun Cluster HA for NetBackup                                                3.4                              N/A
Sun Cluster HA for Oracle                                                   8.1.7 and 9i (32 and 64 bit)     N/A
Sun Cluster HA for Sybase ASE                                               12.0 (32 bit)                    N/A
Sun Cluster Support for Oracle Parallel Server/Real Application Clusters    8.1.7 and 9i (32 and 64 bit)     N/A
Sun Cluster HA for DNS                                                      with OS                          N/A
Sun Cluster HA for NFS                                                      with OS                          N/A

Supported Products

This section describes the supported software and memory requirements for Sun Cluster 3.0 5/02 software.

Features Nearing End of Life

Public Network Management (PNM) will not be supported in the next Sun Cluster feature release. Network adapter monitoring and failover for Sun Cluster will instead be performed by Solaris IP Multipathing.

Public Network Management (PNM)

Use PNM to configure and administer network interface card monitoring and failover. However, the user interfaces to the PNM daemon and the PNM administration commands are obsolete and will be removed in the next Sun Cluster feature release. Users are strongly discouraged from developing tools that rely on these interfaces. The following interfaces are officially supported in the current release, but are expected to be removed or changed in the next Sun Cluster feature release.
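
For example, pnmstat(1M), one of these administration commands, reports the status of the configured NAFO groups. A minimal sketch (output format varies by configuration):


# pnmstat -l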

To prepare for the transition to IP Multipathing in the next Sun Cluster feature release, consider the following issues.

Sun Cluster AnswerBooks Installation

The Sun Cluster 3.0 5/02 user documentation is available online in AnswerBook2™ format for use with AnswerBook2 documentation servers. The Sun Cluster 3.0 5/02 AnswerBook2 documentation set consists of the following collections.


Note –

The Sun Cluster 3.0 5/02 Supplement contains additions and changes to the Sun Cluster 3.0 12/01 documentation set. Use this supplement in conjunction with the Sun Cluster 3.0 12/01 manuals that are also provided in the Sun Cluster 3.0 5/02 Collection and with the Sun Cluster 3.0 12/01 Data Services Collection.


In addition, the docs.sun.com℠ web site enables you to access Sun Cluster documentation on the Web. You can browse the docs.sun.com archive or search for a specific book title or subject at the following Web site.

http://docs.sun.com

Setting Up the AnswerBook2 Documentation Server


Note –

AnswerBook2 documentation server software is not provided on the Solaris 9 documentation CD‐ROM. If you are using the Solaris 9 version of Sun Cluster 3.0 5/02 software and do not already have the AnswerBook2 server software, see http://www.sun.com/software/ab2 to download the AnswerBook2 software, installation instructions, and release notes. Alternately, use the PDF versions of the documentation, which are also provided on the Sun Cluster 3.0 5/02 CD‐ROMs. See PDF Files for more information.


The Solaris 8 operating environment release includes AnswerBook2 documentation server software. The Solaris 8 documentation CD‐ROM, which is separate from the Solaris operating environment CD‐ROM, includes the documentation server software. You need the Solaris 8 documentation CD‐ROM to install an AnswerBook2 documentation server.

If you have installed an AnswerBook2 documentation server at your site, you can use the same server for the Sun Cluster AnswerBooks. Otherwise, install a documentation server on a machine at your site. We recommend that you use the administrative console, the machine that serves as the administrative interface to your cluster, as the documentation server. Do not use a cluster node as your AnswerBook2 documentation server.

For information on installing an AnswerBook2 documentation server, load the Solaris 8 documentation CD‐ROM on a server and view the README files.

Viewing Sun Cluster AnswerBooks

Install the Sun Cluster AnswerBook2 documents on a file system on the same server on which you install the documentation server. The Sun Cluster AnswerBooks include a post‐install script that automatically adds the documents to your existing AnswerBook library.

Note the following requirements to set up your AnswerBook2 servers.

How to Install the Sun Cluster AnswerBooks

Use this procedure to install the Sun Cluster AnswerBook packages for the Sun Cluster 3.0 5/02 Collection and Sun Cluster 3.0 12/01 Data Services Collection.

  1. Become superuser on the server that has an AnswerBook2 documentation server.

  2. If you have previously installed the Sun Cluster AnswerBooks, remove the old packages.

    If you have never installed Sun Cluster AnswerBooks, skip this step.


    # pkgrm SUNWscfab SUNWscdab
    

  3. Insert the Sun Cluster 3.0 5/02 CD-ROM or Sun Cluster 3.0 Agents 5/02 CD-ROM into a CD‐ROM drive attached to your documentation server.

    The Volume Management daemon, vold(1M), mounts the CD‐ROM automatically.

  4. Change directory to the CD-ROM location that contains the Sun Cluster AnswerBook package.

    The AnswerBook packages reside at the following locations.

    • Sun Cluster 3.0 5/02 CD-ROM

      /cdrom/suncluster_3_0_u3/SunCluster_3.0/Packages

    • Sun Cluster 3.0 Agents 5/02 CD-ROM

      /cdrom/scdataservices_3_0_u3/components/SunCluster_Data_Service_Answer_Book_3.0/Packages

  5. Use the pkgadd(1) command to install the package.


    # pkgadd -d .
    

  6. Select the Sun Cluster 3.0 5/02 Collection (SUNWscfab) and the Sun Cluster 3.0 12/01 Data Services Collection (SUNWscdab) packages to install.

  7. From the pkgadd installation options menu, choose heavy to add the complete package to the system and update the AnswerBook2 catalog.

    Select either the Sun Cluster 3.0 5/02 Collection (SUNWscfab) or the Sun Cluster 3.0 12/01 Data Services Collection (SUNWscdab).

The document collection package on each CD‐ROM includes a post‐installation script that adds the collection to the documentation server's database and restarts the server. You can now view the Sun Cluster AnswerBooks from your documentation server.
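
To confirm that the collections were added, you can query the package database on the documentation server. A minimal sketch:


# pkginfo SUNWscfab SUNWscdab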

PDF Files

The Sun Cluster CD‐ROMs include a PDF file for each book in the Sun Cluster documentation set.

As with the Sun Cluster AnswerBooks, seven PDF files reside on the Sun Cluster CD-ROM and one PDF file resides on the Sun Cluster Agents CD-ROM. The PDF file names are abbreviations of the book titles (see Table 1–4).

The PDF files reside at the following locations.

Table 1–4 Mapping of PDF Abbreviations to Book Titles

Sun Cluster 3.0 5/02 CD-ROM:

  PDF Abbreviation   Book Title
  CLUSTSUPP          Sun Cluster 3.0 5/02 Supplement
  CLUSTINSTALL       Sun Cluster 3.0 12/01 Software Installation Guide
  CLUSTNETHW         Sun Cluster 3.0 12/01 Hardware Guide
  CLUSTAPIPG         Sun Cluster 3.0 12/01 Data Services Developer's Guide
  CLUSTSYSADMIN      Sun Cluster 3.0 12/01 System Administration Guide
  CLUSTCONCEPTS      Sun Cluster 3.0 12/01 Concepts
  CLUSTERRMSG        Sun Cluster 3.0 5/02 Error Messages Guide

Sun Cluster 3.0 Agents 5/02 CD-ROM:

  PDF Abbreviation   Book Title
  CLUSTDATASVC       Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide

Restrictions

The following restrictions apply to the Sun Cluster 3.0 5/02 release:

Service and Application Restrictions

Hardware Restrictions

Volume Manager Restrictions

Cluster File System Restrictions

VxFS Restrictions

Network Adapter Failover (NAFO) Restrictions

Data Service Restrictions

This section describes restrictions for specific data services. There are no restrictions that apply to all data services.


Note –

Future Sun Cluster Release Notes will not include data service restrictions that apply to specific data services. However, Sun Cluster Release Notes will document any data service restrictions that apply to all data services.


For additional data service restrictions that apply to specific data services, see the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.

Sun Cluster HA for Oracle

Sun Cluster and NetBackup Restrictions

Sun Cluster HA for NetBackup Restrictions

Sun Cluster 3.0 HA for NFS Restrictions

Guidelines

The following guidelines apply to the Sun Cluster 3.0 5/02 release.

Data Service Timeout-Period Guideline

The following guideline addresses the problem reported in Bug 4499573. It was determined that the related functionality works as expected. As such, the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide needs to reflect the following guideline.

When using data services that are I/O intensive and that have a large number of disks configured in the cluster, the application may experience delays due to retries within the I/O subsystem during disk failures. An I/O subsystem may take several minutes to retry and recover from a disk failure. This delay can result in Sun Cluster failing over the application to another node, even though the disk may have eventually recovered on its own. To avoid failover during these instances, consider increasing the default probe timeout of the data service. If you need more information or help with increasing data service timeouts, contact your local support engineer.
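
As an illustration only, the following sketch raises the probe timing of a hypothetical resource named oracle-rs. The exact property names that are available (for example, the standard Thorough_probe_interval property or a data service's Probe_timeout extension property) depend on the data service, so check the data service documentation before changing them.


# scrgadm -c -j oracle-rs -y Thorough_probe_interval=120
# scrgadm -c -j oracle-rs -x Probe_timeout=300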

Data Service Installation Guidelines

Identify the requirements for all data services before you begin Solaris and Sun Cluster installation. If you do not, you might perform the installation process incorrectly and need to completely reinstall the Solaris and Sun Cluster software.

For example, the Oracle Parallel Fail Safe/Real Application Clusters Guard option of Oracle Parallel Server/Real Application Clusters has special requirements for the hostnames/node names that you use in the cluster. You must accommodate these requirements before you install Sun Cluster software because you cannot change hostnames after you install Sun Cluster software. For more information on the special requirements for the hostnames/node names, see the Oracle Parallel Fail Safe/Real Application Clusters Guard documentation.

Patches and Required Firmware Levels

This section provides information about patches for Sun Cluster configurations.

PatchPro

Sun Cluster software is an early adopter of PatchPro, a patch-management solution from Sun. This new tool is intended to ease the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides a Sun Cluster-specific Interactive Mode tool to make the installation of patches easier and an Expert Mode tool to maintain your configuration with the latest set of patches. Expert Mode is especially useful for those who want to get all of the latest patches, not just the high availability and security patches.


Note –

You must have a SunSolve℠ account registered to view and download the required patches for the Sun Cluster product. If you don't have an account registered, contact your Sun service representative or sales engineer, or register through the SunSolve Online Web site.


To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click on “Sun Cluster,” then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

SunSolve Online

The SunSolve Online℠ Web site provides 24-hour access to the most up-to-date information regarding patches, software, and firmware for Sun products. Access the SunSolve Online site at http://sunsolve.sun.com for the most current matrixes of supported software, firmware, and patch revisions.


Note –

You must have a SunSolve account registered to view and download the required patches for the Sun Cluster product. If you don't have an account registered, contact your Sun service representative or sales engineer, or register through the SunSolve Online Web site.


You can find Sun Cluster 3.0 patch information by using the SunSolve EarlyNotifier℠ service. To view the EarlyNotifier information, log into SunSolve and access the Simple Search selection from the top of the main page. From the Simple Search page, click the EarlyNotifier box and type Sun Cluster 3.0 in the search criteria box. This brings up the EarlyNotifier page for Sun Cluster 3.0 software.

Before you install Sun Cluster 3.0 software and apply patches to a cluster component (Solaris operating system, Sun Cluster software, volume manager or data services software, or disk hardware), review the EarlyNotifier information and any README files that accompany the patches. All cluster nodes must have the same patch level for proper cluster operation.
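
One way to compare patch levels across nodes is to capture a sorted patch list on each node, copy the files to one machine, and compare them with diff(1). A minimal sketch (the file name is arbitrary):


# showrev -p | sort > /var/tmp/patchlist-`hostname`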

For specific patch procedures and tips on administering patches, see the Sun Cluster 3.0 12/01 System Administration Guide.

mod_ssl License Terms

To view license terms, attribution, and copyright statements for mod_ssl, refer to the Sun Cluster 3.0 README file on the Sun Cluster 3.0 5/02 CD-ROM.

Sun Management Center Software Upgrade

This section describes how to upgrade from Sun Management Center 2.1.1 to Sun Management Center 3.0 software on a Sun Cluster 3.0 5/02 configuration.

How to Upgrade Sun Management Center Software

Perform this procedure to upgrade from Sun Management Center 2.1.1 to Sun Management Center 3.0 software on a Sun Cluster 3.0 5/02 configuration.

  1. Have available the following items.

    • Sun Cluster 3.0 5/02 CD-ROM or the path to the CD‐ROM image. You will use the CD‐ROM to reinstall the Sun Cluster module packages after you upgrade Sun Management Center software.

    • Sun Management Center 3.0 documentation.

    • Sun Management Center 3.0 patches and Sun Cluster module patches, if any. See Patches and Required Firmware Levels for the location of patches and installation instructions.

  2. Stop any Sun Management Center processes.

    1. If the Sun Management Center console is running, exit the console.

      In the console window, select File>Exit from the menu bar.

    2. On each Sun Management Center agent machine (cluster node), stop the Sun Management Center agent process.


      # /opt/SUNWsymon/sbin/es-stop -a
      

    3. On the Sun Management Center server machine, stop the Sun Management Center server process.


      # /opt/SUNWsymon/sbin/es-stop -S
      

  3. As superuser, remove Sun Cluster module packages from the locations listed in Table 1–5.

    You must remove all Sun Cluster module packages from all locations. Otherwise, the Sun Management Center software upgrade might fail because of package dependency problems. After you upgrade Sun Management Center software, you will reinstall these packages in Step 5.


    # pkgrm module-package
    

    Table 1–5 Locations to Remove Sun Cluster Module Packages

    Location                                       Package to Remove
    Each cluster node                              SUNWscsam, SUNWscsal
    Sun Management Center console machine          SUNWscscn
    Sun Management Center server machine           SUNWscssv
    Sun Management Center help server machine      SUNWscshl

  4. Upgrade to Sun Management Center 3.0 software.

    Follow the upgrade procedures in your Sun Management Center 3.0 documentation.

  5. As superuser, reinstall Sun Cluster module packages to the locations listed in Table 1–6.

    For Sun Management Center 3.0 software, you install the help server package SUNWscshl on the console machine as well as on the help server machine.


    # cd /cdrom/suncluster_3_0_u3/SunCluster_3.0/Packages
    # pkgadd module-package
    

    Table 1–6 Locations to Install Sun Cluster Module Packages

    Location                                       Package to Install
    Each cluster node                              SUNWscsam, SUNWscsal
    Sun Management Center console machine          SUNWscscn, SUNWscshl
    Sun Management Center server machine           SUNWscssv
    Sun Management Center help server machine      SUNWscshl

  6. Apply any Sun Management Center patches and any Sun Cluster module patches to each node of the cluster.

  7. Restart Sun Management Center agent, server, and console processes on all involved machines.

    Follow procedures in “How to Start Sun Management Center” in the Sun Cluster 3.0 12/01 Software Installation Guide.

  8. Load the Sun Cluster module.

    Follow procedures in “How to Load the Sun Cluster Module” in the Sun Cluster 3.0 12/01 Software Installation Guide.

    If the Sun Cluster module was previously loaded, unload the module and then reload it to clear all cached alarm definitions on the server. To unload the module, from the console's Details window select Module>Unload Module.

Sun Cluster Module Resource and Resource Group Creation Wizards

This section describes previously undocumented information about the Sun Cluster 3.0 module for Sun Management Center 3.0 software. For information about upgrading to Sun Management Center 3.0, see Sun Management Center Software Upgrade.

From the Sun Cluster module console you can create, change the state of, or delete resources and resource groups. You can access these options by opening the Sun Cluster Details window and choosing them from the hierarchy (tree) or topology views.

Pop-Up Menu Items and Associated Tables

Access from the Resource Group Status table and Resource Group Properties table:

  • Bring Online

  • Take Offline

  • Delete Selected Resource Group

  • Create New Resource Group

  • Create New Resource

Access from the Resource Status table and Resource Configuration table:

  • Enable

  • Disable

  • Delete Resource

  • Create New Resource Group

  • Create New Resource

How to Access the Creation Wizards From the Tree View

Perform the following steps to access the wizards to create a resource or resource group.

  1. In either the hierarchy (tree) or topology view, double-click Operating System>Sun Cluster.

  2. Click the right mouse button on the Resource Groups item, or on any item within the Resource Groups subtree.

  3. Choose “Create New Resource Group” or “Create New Resource” from the pop-up menu.

How to Create Resources and Resource Groups

Perform the following procedure to use the creation wizard on the pop-up menus, accessible from the resource and resource group tables.

  1. Display either the resource table or the resource group table.

  2. Point to any cell entry in the table, excluding the header row.

  3. Click the right mouse button.

  4. Choose the action you want from the pop-up menu.

How to Delete or Modify Resources and Resource Groups

Perform the following steps to alter the state of a resource or to delete a resource or resource group. Use the pop-up menus from the resource and resource group tables to enable or disable a resource, or to bring a resource group online or take it offline.

  1. Display either the resource or resource group table.

  2. Select the item that you want to modify.

    • To delete an entry, select the resource or resource group to delete.

    • To change the state of an entry, select the state cell in the row of the resource or resource group to change.

  3. Click the right mouse button.

  4. Choose from the pop-up menu one of the following tasks to perform.

    • Bring Online

    • Take Offline

    • Enable

    • Disable

    • Delete Selected Resource Group

    • Delete Resource


Note –

When you delete a resource or resource group, or edit its status, the Sun Cluster module launches a Probe Viewer window. If the Sun Cluster module successfully performs the task that you choose, the Probe Viewer window displays the message Probe command returned no data. If the task does not complete successfully, the window displays an error message.


See Sun Management Center documentation and online help for more information about Sun Management Center.

Known Problems

The following known problems affect the operation of the Sun Cluster 3.0 5/02 release. For the most current information, see the online Sun Cluster 3.0 5/02 Release Notes Supplement at http://docs.sun.com.

BugId 4490386

Problem Summary: When using Sun Enterprise 10000 servers in a cluster, panics have been observed in these servers when a certain configuration of I/O cards is used.

Workaround: Do not install UDWIS I/O cards in slot 0 of an SBus I/O board in Sun Enterprise 10000 servers in a cluster.

BugId 4501655

Problem Summary: Record locking does not work across nodes when the device being locked is a global device, for example /dev/global/rdsk/d4s0.

Record locking appears to work well when the program is run multiple times in the background on any particular node. The expected behavior is that after the first copy of the program locks a portion of the device, other copies of the program block waiting for the device to be unlocked. However, when the program is run from a different node, the program succeeds in locking the device again when in fact it should block waiting for the device to be unlocked.

Workaround: There is no workaround.

BugId 4504311

Problem Summary: When a Sun Cluster configuration is upgraded to Solaris 8 10/01 software (required for Sun Cluster 3.0 12/01 upgrade), the Apache application start and stop scripts are restored. If an Apache data service (Sun Cluster HA for Apache) is already present on the cluster and configured in its default configuration (the /etc/apache/httpd.conf file exists and the /etc/rc3.d/S50apache file does not exist), the Apache application starts on its own, independent of the Sun Cluster HA for Apache data service. This prevents the data service from starting because the Apache application is already running.

Workaround: Do the following for each node.

  1. Before you shut down a node to upgrade it, determine whether the following links already exist, and if so, whether the file names contain an uppercase K or S.


    /etc/rc0.d/K16apache
    /etc/rc1.d/K16apache
    /etc/rc2.d/K16apache
    /etc/rc3.d/S50apache
    /etc/rcS.d/K16apache

    If these links already exist and contain an uppercase K or S in the file name, no further action is necessary. Otherwise, perform the action in the next step after you upgrade the node to Solaris 8 10/01 software.

  2. After the node is upgraded to Solaris 8 10/01 software, but before you reboot the node, move aside the restored Apache links by renaming the files with a lowercase k or s.


    # mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache
    # mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
    # mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
    # mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
    # mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache
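
    To confirm the renames before you reboot the node, you can list the Apache run-control entries on the upgraded boot environment. A minimal sketch (/a is the alternate root used in the commands above):


    # ls /a/etc/rc?.d/*apache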
    

BugId 4511699

Problem Summary: Sun Cluster HA for NFS requires files [SUCCESS=return] for the hosts lookup entry in the /etc/nsswitch.conf file, and requires that all cluster private IP addresses be present in the /etc/inet/hosts file on all cluster nodes.

Otherwise, Sun Cluster HA for NFS will not be able to fail over correctly in the presence of public network failures.

Workaround: Perform the following steps on each node of the cluster.

  1. Modify the hosts entry in the /etc/nsswitch.conf file so that, upon success in resolving a name locally, it returns success immediately and does not contact NIS or DNS.


    hosts: cluster files [SUCCESS=return] nis dns

  2. Add entries for all cluster private IP addresses to the /etc/inet/hosts file.

You only need to list the IP addresses plumbed on the physical private interfaces in the /etc/nsswitch.conf and /etc/inet/hosts files. The logical IP addresses are already resolvable through the cluster nsswitch library.

To list the physical private IP addresses, run the following command on any cluster node.


% grep ip_address /etc/cluster/ccr/infrastructure

Each IP address in this list must be assigned a unique hostname that does not conflict with any other hostname in the domain.
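
For example, the added /etc/inet/hosts entries might look like the following. The addresses and hostnames shown here are placeholders; use the addresses reported by the grep command above and hostnames of your own choosing.


172.16.0.129   clusternode1-privphys1
172.16.0.130   clusternode2-privphys1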


Note –

Sun Cluster software already requires that any HA IP addresses (LogicalHostname/SharedAddresses) be present in /etc/inet/hosts on all cluster nodes and that files is listed before nis or dns. The additional requirements mandated by this bug are to list [SUCCESS=return] after files and to list all cluster private IP addresses in the /etc/inet/hosts file.


BugId 4526883

Problem Summary: On rare occasions, private interconnect transport paths ending at a qfe adapter fail to come up.

Workaround: Perform the following steps.

  1. Identify the adapter that is at fault.

    scstat -W output should show all transport paths with that adapter as one of the path endpoints in the “faulted” or “waiting” state.

  2. Use scsetup(1M) to remove all cables connected to that adapter from the cluster configuration.

  3. Use scsetup again to remove that adapter from the cluster configuration.

  4. Add the adapter and the cables back to the cluster configuration.

  5. Verify whether these steps fixed the problem and whether the paths are able to come back up.

If removing the cables and the adapter and then adding them back does not work, repeat the procedure a few times. If that does not help, reboot the node that has the problem adapter. There is a good chance that the problem will be gone when the node boots up. Before you reboot the node, ensure that the remaining cluster has enough quorum votes to survive the node reboot.
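
After each attempt, you can recheck the state of the transport paths. A minimal sketch:


# scstat -W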

BugId 4620185

Problem Summary: If the rpc.pmfd daemon monitors a process that forks a new process as the result of handling a signal, then using pmfadm -k tag signal might result in an infinite loop. This might occur because pmfadm(1M) attempts to kill all processes in the tag's process tree while the newly forked processes are being added to the tree (each one being added as a result of killing a previous one).


Note –

This bug should not occur with pmfadm -s tag signal.


Workaround: Use pmfadm -s tag signal instead of pmfadm -k. The -s option to pmfadm does not suffer from the same race condition as the -k option.
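
For example, to send SIGTERM to the processes registered under a hypothetical nametag named mytag, a sketch of the -s form is:


# pmfadm -s mytag TERM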

BugId 4629536

Problem Summary: Using the forcedirectio mount option and the mmap(2) function concurrently might cause data corruption and system hangs or panics.

Workaround: Observe the following restrictions.

If there is a need to use directio, mount the whole file system with directio options.
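
A minimal sketch of mounting an entire UFS file system with directio enabled; the device and mount point shown here are placeholders:


# mount -o forcedirectio /dev/md/appdg/dsk/d30 /global/app-data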

BugId 4634409

Problem Summary: If an attempt is made to mount the same device on different mount points, the system will catch this error most of the time and cause the second mount to fail. However, under certain rare conditions, the system might not be able to catch this error and could allow both mounts to succeed. This happens only when all four of the following conditions hold true.

Workaround: System administrator should exercise caution while mounting file systems on the cluster.

BugId 4638586

Problem Summary: In some cases, the scconf(1M) command does not reminor VxVM disk groups and returns an error that says the device is already in use in another device group.

Workaround: Perform the following steps to assign a new minor number to the disk group.

  1. Find the minor numbers already in use.

    Observe the minor numbers in use along with the major number listed in the following output.


    % ls -l /dev/vx/rdsk/*/*
     
    crw-------   1 root     root     210,107000 Mar 11 18:18 /dev/vx/rdsk/fix/vol-01
    crw-------   1 root     root     210,88000 Mar 15 16:31 /dev/vx/rdsk/iidg/vol-01
    crw-------   1 root     root     210,88001 Mar 15 16:32 /dev/vx/rdsk/iidg/vol-02
    crw-------   1 root     root     210,88002 Mar 15 16:33 /dev/vx/rdsk/iidg/vol-03
    crw-------   1 root     root     210,88003 Mar 15 16:49 /dev/vx/rdsk/iidg/vol-04
    crw-------   1 root     root     210,13000 Mar 18 16:09 /dev/vx/rdsk/sndrdg/vol-01
    crw-------   1 root     root     210,13001 Mar 18 16:08 /dev/vx/rdsk/sndrdg/vol-02

  2. Choose any other multiple of 1000 that is not in use as the base minor number for the new disk group.

  3. Assign the unused minor number to the disk group in error.

    Use the vxdg command's reminor option, as shown in the sketch after this procedure.

  4. Retry the failed scconf command.
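
The following is a minimal sketch of Step 2 and Step 3, assuming the disk group in error is named baddg and that base minor number 94000 is not already in use:


# vxdg reminor baddg 94000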

BugId 4644289

Problem Summary: On Solaris 9, the Sun Cluster HA for Oracle data service's stop method can time out during a public network failure if external name services are not available. The Sun Cluster HA for Oracle data service uses the su(1M) user command to start and stop the database.

Workaround: On each node that can be a primary for the oracle_server or oracle_listener resource, modify the /etc/nsswitch.conf file to include the following entries for passwd, group, publickey and project databases.


passwd:       files
group:        files
publickey:    files
project:      files

These modifications ensure that the su(1M) command does not refer to the NIS/NIS+ name services, and that the data service starts and stops correctly during a network failure.

BugId 4648767

Problem Summary: Use of sendfile(3EXT) will panic the node.

Workaround: There is no workaround for this problem except not to use sendfile.

BugId 4651392

Problem Summary: On Solaris 9, a cluster node that is being shut down might panic with the following message on its way down.


CMM: Shutdown timer expired. Halting

Workaround: There is no workaround for this problem. The node panic has no other side effects and can be treated as relatively harmless.

BugId 4653151

Problem Summary: Creation of an HAStoragePlus resource fails if the order of the file-system mount points specified in the FilesystemMountPoints extension property is not the same as the order specified in the /etc/vfstab file.

Workaround: Ensure that the mount point list specified in the FilesystemMountPoints extension property matches the sequence specified in the /etc/vfstab file. For example, if the /etc/vfstab file specifies file system entries in the sequence /a, /b, and /c, the FilesystemMountPoints sequence can be “/a,/b,/c” or “/a,/b” or “/a,/c” but not “/a,/c,/b.”
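
As an illustration only, the following sketch creates an HAStoragePlus resource whose mount-point list follows the /etc/vfstab order /a, /b, /c. The resource and resource group names are placeholders.


# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -j hastp-rs -g app-rg -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/a,/b,/c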

BugId 4653788

Problem Summary: Setting the Failover_enabled extension property to FALSE is supposed to prevent the resource monitor from initiating a resource group failover.

However, if the monitor is attempting a resource restart and the START or STOP method fails or times out, the monitor attempts a giveover regardless of the setting of Failover_enabled.

Workaround: There is no workaround for this bug.

BugId 4655194

Problem Summary: Solstice DiskSuite soft partition-based device groups on locally mounted VxFS can trigger errors if device group switchover commands (scswitch -D device-group) are issued.

Solstice DiskSuite internally performs mirror resync operations, which can take a significant amount of time. Mirror resyncs degrade redundancy. At that point VxFS reports errors, causing fault monitor or application I/O failures that result in application restarts.

Workaround: For any Solstice DiskSuite device group configured with HAStoragePlus, do not switch over the device group manually. Instead, switch over the resource group, which in turn will cause error-free device switchovers.
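
For example, to switch over the resource group rather than the device group (resource-group and node here are placeholders):


# scswitch -z -g resource-group -h node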

Alternately, configure locally mounted VxFS file systems on VxVM disk groups.

BugId 4656367

Problem Summary: Some error messages were not included on the Sun Cluster 3.0 5/02 CD‐ROM.

Workaround: These error messages are documented in New Error Messages.

BugId 4656391

Problem Summary: fsck(1M) of a file system resident on a Sun Cluster global Solstice DiskSuite/VxVM device group fails if executed from a non-primary (secondary) node. This has been observed on Solaris 9, although it is possible that earlier Solaris releases could exhibit this behavior.

Workaround: Invoke the fsck command only on the primary node.

BugId 4656531

Problem Summary: A Sun Cluster HA for Oracle listener resource does not behave correctly if multiple listener resources are configured to start listeners with the same listener name.

Workaround: Do not use the same listener name for multiple listeners running on a cluster.

BugId 4657088

Problem Summary: Dissociating/detaching a plex from a VxVM disk group under Sun Cluster 3.0 might panic the cluster node with the following panic string.


  panic[cpu2]/thread=30002901460: BAD TRAP: type=31 rp=2a101b1d200 addr=40  
  mmu_fsr=0 occurred in module "vxfs" due to a NULL pointer dereference

Workaround: Before you dissociate/detach a plex, unmount the corresponding file system.
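
A minimal sketch of the sequence, with placeholder names for the mount point, disk group, and plex:


# umount /global/vxfs-mount-point
# vxplex -g diskgroup dis plex-name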

BugId 4657833

Problem Summary: Failover does not occur when the resource group property auto_start_on_new_cluster is set to false.

Workaround: Each time the whole cluster reboots, for resource groups that have the auto_start_on_new_cluster property set to false, set the auto_start_on_new_cluster property to true, then reset the auto_start_on_new_cluster property to false.


# scrgadm -c -g rgname -y auto_start_on_new_cluster=true
# scrgadm -c -g rgname -y auto_start_on_new_cluster=false

BugId 4659042

Problem Summary: For globally mounted VxFS file systems, the /etc/mnttab entry might not display the global mount option.

Workaround: The presence of an /etc/mnttab entry for the given file system on all nodes of the cluster confirms that the file system is globally mounted.
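
For example, you can run the following on each node (the mount point is a placeholder); an entry returned on every node confirms the global mount:


# grep /global/my-fs /etc/mnttab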

BugId 4659091

Problem Summary: On remounting a globally mounted file system, /etc/mnttab is not updated.

Workaround: There is no workaround.

BugId 4660479

Problem Summary: When using Sun Cluster HA for NFS with HAStoragePlus, blocking locks are not recovered during failovers and switchovers. As a result, lockd cannot be restarted by Sun Cluster HA for NFS, which leads to failure of the nfs_postnet_stop method, causing the cluster node to crash.

Workaround: Do not use Sun Cluster HA for NFS on HAStoragePlus. Cluster file systems do not suffer from this problem; therefore, configuring Sun Cluster HA for NFS on a cluster file system can be used as a workaround.

BugId 4660521

Problem Summary: When an HTTP server is killed on a node, it leaves a PID file on that node. The next time the HTTP server is started, it checks whether the PID file exists and whether any process with that PID is already running (kill -0). Because PIDs are recycled, some other process might have the same PID as the last HTTP server process. This causes the HTTP server startup to fail.

Workaround: If the HTTP server fails to start with an error like the following, manually remove the PID file for the HTTP server to restart correctly.


Mar 27 17:47:58 ppups4 uxwdog[939]: could not log PID to PidLog 
/app/iws/https-schost-5.example.com/logs/pid, server already running (No such file or directory)
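
For the error message shown above, for example, you would remove the PidLog file named in the message and then restart the HTTP server. A sketch (use the path from your own error message):


# rm /app/iws/https-schost-5.example.com/logs/pid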

BugId 4662264

Problem Summary: To avoid panics when using VERITAS products such as VxFS with Sun Cluster software, the default thread stack size needs to be increased.

Workaround: Increase the stack size by putting the following lines in the /etc/system file.


set lwp_default_stksize=0x6000
set svc_default_stksize=0x8000

The svc_default_stksize entry is needed for NFS operations.

After installing VERITAS packages, verify that VERITAS has not added similar statements to the /etc/system file. If it has, resolve them into a single statement that uses the higher value.

BugId 4663876

Problem Summary: In a device group with more than two nodes and an ordered node list, if the node being removed is not the last node in the ordered list, the scconf output shows only partial information about the node list.

Workaround:

BugId 4664510

Problem Summary: After you power off one of the Sun StorEdge T3 arrays and then run scshutdown, rebooting both nodes puts the cluster in a non-working state.

Workaround: If half the replicas are lost, perform the following steps:

  1. Ensure that the cluster is in cluster mode.

  2. Forcibly import the diskset.


    # metaset -s set-name -f -C take
    

  3. Delete the broken replicas.


    # metadb -s set-name -fd /dev/did/dsk/dNsX
    

  4. Release the diskset.


    # metaset -s set-name -C release
    

    Now the file system can be mounted and used. However, the redundancy in the replicas has not been restored. If the other half of replicas is lost, then there will be no way to restore the mirror to a sane state.

  5. Recreate the databases after the above repair procedure is applied.
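
    A sketch of this step, assuming the replicas are re-created on the slice that was deleted in Step 3 (adjust the diskset name and DID slice to your configuration):


    # metadb -s set-name -a /dev/did/dsk/dNsX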

Known Documentation Problems

This section discusses known errors or omissions for documentation, online help, or man pages and steps to correct these problems.

SunPlex Manager Online Help Correction

A note in SunPlex Manager's online help is inaccurate. The note appears in the Oracle data service installation procedure. The correction is as follows.

Incorrect:

Note: If no entries exist for the shmsys and semsys variables in the /etc/system file when SunPlex Manager packages are installed, default values for these variables are automatically put in the /etc/system file. The system must then be rebooted. Check Oracle installation documentation to verify that these values are appropriate for your database.

Correct:

Note: If no entries exist for the shmsys and semsys variables in the /etc/system file when you install the Oracle data service, default values for these variables can be automatically put in the /etc/system file. The system must then be rebooted. Check Oracle installation documentation to verify that these values are appropriate for your database.

Sun Cluster HA for Oracle Packages

The introductory paragraph to “Installing Sun Cluster HA for Oracle Packages” in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide does not discuss the additional package needed for users with clusters running Sun Cluster HA for Oracle with 64‐bit Oracle. The following section corrects the introductory paragraph to “Installing Sun Cluster HA for Oracle Packages” in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.

Installing Sun Cluster HA for Oracle Packages

Depending on your configuration, use the scinstall(1M) utility to install one or both of the following packages on your cluster. Do not use the -s option to non‐interactive scinstall to install all of the data service packages.


Note –

SUNWscor is the prerequisite package for SUNWscorx.


If you installed the SUNWscor data service package as part of your initial Sun Cluster installation, proceed to “Registering and Configuring Sun Cluster HA for Oracle” on page 30. Otherwise, use the following procedure to install the SUNWscor and SUNWscorx packages.

Simple Root Disk Groups With VERITAS Volume Manager

Simple root disk groups are not supported as disk types with VERITAS Volume Manager on Sun Cluster software. As a result, if you perform the procedure “How to Restore a Non-Encapsulated root (/) File System (VERITAS Volume Manager)” in the Sun Cluster 3.0 12/01 System Administration Guide, you should eliminate Step 9, which tells you to determine if the root disk group (rootdg) is on a single slice on the root disk. You would complete Step 1 through Step 8, skip Step 9, and proceed with Step 10 to the end of the procedure.

Upgrading to a Sun Cluster 3.0 Software Update Release

The following is a correction to Step 8 of “How to Upgrade to a Sun Cluster 3.0 Software Update Release” in the Sun Cluster 3.0 12/01 Software Installation Guide.

    (Optional) Upgrade Solaris 8 software.

    1. Temporarily comment out all global device entries in the /etc/vfstab file.

      Do this to prevent the Solaris upgrade from attempting to mount the global devices.

    2. Shut down the node to upgrade.


      # shutdown -y -g0
      ok

    3. Follow instructions in the installation guide for the Solaris 8 Maintenance Update version you want to upgrade to.


      Note –

      Do not reboot the node when prompted to reboot.


    4. In the /a/etc/vfstab file, uncomment all global device entries that you commented out in Step 1.

    5. Install any Solaris software patches and hardware-related patches, and download any needed firmware contained in the hardware patches.

      If any patches require rebooting, reboot the node in non-cluster mode as described in Step 6.

    6. Reboot the node in non-cluster mode.

      Include the double dashes (--) and two quotation marks (") in the command.


      # reboot -- "-x"
      

Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 Software

The following upgrade procedures contain changes and corrections to the procedures since release of the Sun Cluster 3.0 12/01 Software Installation Guide.

To upgrade from Sun Cluster 2.2 to Sun Cluster 3.0 5/02 software, perform the following procedures instead of the versions documented in the Sun Cluster 3.0 12/01 Software Installation Guide.

How to Upgrade Cluster Software Packages

  1. Become superuser on a cluster node.

  2. If you are installing from the CD‐ROM, insert the Sun Cluster 3.0 5/02 CD-ROM  into the CD‐ROM drive on a node.

    If the volume daemon vold(1M) is running and configured to manage CD‐ROM devices, it automatically mounts the CD‐ROM on the /cdrom/suncluster_3_0_u3 directory.

  3. Change to the /cdrom/suncluster_3_0_u3/SunCluster_3.0/Packages directory.


    # cd /cdrom/suncluster_3_0_u3/SunCluster_3.0/Packages
    

  4. If your volume manager is Solstice DiskSuite, install the latest Solstice DiskSuite mediator package (SUNWmdm) on each node.

    1. Add the SUNWmdm package.


      # pkgadd -d . SUNWmdm
      

    2. Reboot the node.


      # shutdown -g0 -y -i6
      

    3. Repeat on the other node.

  5. Reconfigure mediators.

    1. Determine which node has ownership of the diskset to which you will add the mediator hosts.


      # metaset -s setname
      
      -s setname

      Specifies the diskset name

    2. If no node has ownership, take ownership of the diskset.


      # metaset -s setname -t
      
      -t

      Takes ownership of the diskset

    3. Recreate the mediators.


      # metaset -s setname -a -m mediator-host-list
      
      -a

      Adds to the diskset

      -m mediator-host-list

      Specifies the names of the nodes to add as mediator hosts for the diskset

    4. Repeat for each diskset.

  6. On each node, shut down the rpc.pmfd daemon.


    # /etc/init.d/initpmf stop
    

  7. Upgrade the first node to Sun Cluster 3.0 5/02 software.

    These procedures will refer to this node as the first-installed node.

    1. On the first node to upgrade, change to the /cdrom/suncluster_3_0_u3/SunCluster_3.0/Tools directory.


      # cd /cdrom/suncluster_3_0_u3/SunCluster_3.0/Tools
      

    2. Upgrade the cluster software framework.


      # ./scinstall -u begin -F
      
      -F

      Specifies that this is the first-installed node in the cluster

      See the scinstall(1M) man page for more information.

    3. Install any Sun Cluster patches on the first node.

      See the Sun Cluster 3.0 5/02 Release Notes for the location of patches and installation instructions.

    4. Reboot the node.


      # shutdown -g0 -y -i6
      

      When the first node reboots into cluster mode, it establishes the cluster.

  8. Upgrade the second node to Sun Cluster 3.0 5/02 software.

    1. On the second node, change to the /cdrom/suncluster_3_0_u3/SunCluster_3.0/Tools directory.


      # cd /cdrom/suncluster_3_0_u3/SunCluster_3.0/Tools
      

    2. Upgrade the cluster software framework.


      # ./scinstall -u begin -N node1
      
      -N node1

      Specifies the name of the first-installed node in the cluster, not the name of the second node to be installed

      See the scinstall(1M) man page for more information.

    3. Install any Sun Cluster patches on the second node.

      See the Sun Cluster 3.0 5/02 Release Notes for the location of patches and installation instructions.

    4. Reboot the node.


      # shutdown -g0 -y -i6
      

  9. After both nodes are rebooted, verify from either node that both nodes are cluster members.


    # scstat -n
    
    -- Cluster Nodes --
                       Node name      Status
                       ---------      ------
      Cluster node:    phys-schost-1  Online
      Cluster node:    phys-schost-2  Online

    See the scstat(1M) man page for more information about displaying cluster status.

  10. Choose a shared disk to be the quorum device.

    You can use any disk shared by both nodes as a quorum device. From either node, use the scdidadm(1M) command to determine the shared disk's device ID (DID) name. You specify this device name in the -q globaldev=DIDname option to scinstall in How to Finish Upgrading Cluster Software.


    # scdidadm -L
    

  11. Configure the shared quorum device.

    1. Start the scsetup(1M) utility.


      # scsetup
      

      The Initial Cluster Setup screen is displayed.

      If the quorum setup process is interrupted or fails to complete successfully, rerun scsetup.

    2. At the prompt Do you want to add any quorum disks?, configure a shared quorum device.

      A two-node cluster remains in install mode until a shared quorum device is configured. After the scsetup utility configures the quorum device, the message Command completed successfully is displayed.

    3. At the prompt Is it okay to reset "installmode"?, answer Yes.

      After the scsetup utility sets quorum configurations and vote counts for the cluster, the message Cluster initialization is complete is displayed and the utility returns you to the Main Menu.

    4. Exit from the scsetup utility.

  12. From any node, verify the device and node quorum configurations.

    You do not need to be superuser to run this command.


    % scstat -q
    

  13. From any node, verify that cluster install mode is disabled.

    You do not need to be superuser to run this command.


    % scconf -p | grep "Cluster install mode:"
    Cluster install mode:                                  disabled

  14. Update the directory paths.

    Go to “How to Update the Root Environment” in the Sun Cluster 3.0 12/01 Software Installation Guide.

Example—Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 5/02 Software – Begin Process

The following example shows the beginning process of upgrading a two-node cluster from Sun Cluster 2.2 to Sun Cluster 3.0 5/02 software. The cluster node names are phys‐schost‐1, the first-installed node, and phys‐schost‐2, which joins the cluster that phys‐schost‐1 established. The volume manager is Solstice DiskSuite and both nodes are used as mediator hosts for the diskset schost‐1.


(Install the latest Solstice DiskSuite mediator package
on each node)
# cd /cdrom/suncluster_3_0_u3/SunCluster_3.0/Packages
# pkgadd -d . SUNWmdm
 
(Restore the mediators)
# metaset -s schost-1 -t
# metaset -s schost-1 -a -m phys-schost-1 phys-schost-2
 
(Shut down the rpc.pmfd daemon)
# /etc/init.d/initpmf stop
 
(Begin upgrade on the first node and reboot it)
phys-schost-1# cd /cdrom/suncluster_3_0_u3/SunCluster_3.0/Tools
phys-schost-1# ./scinstall -u begin -F
phys-schost-1# shutdown -g0 -y -i6
 
(Begin upgrade on the second node and reboot it)
phys-schost-2# cd /cdrom/suncluster_3_0_u3/SunCluster_3.0/Tools
phys-schost-2# ./scinstall -u begin -N phys-schost-1
phys-schost-2# shutdown -g0 -y -i6
 
(Verify cluster membership)
# scstat
 
(Choose a shared disk and configure it as the quorum
device)
# scdidadm -L
# scsetup
Select Quorum>Add a quorum disk
 
(Verify that the quorum device is configured)
# scstat -q
 
(Verify that the cluster is no longer in install
mode)
% scconf -p | grep "Cluster install mode:"
Cluster install mode:                                  disabled

How to Finish Upgrading Cluster Software

This procedure finishes the scinstall(1M) upgrade process begun in How to Upgrade Cluster Software Packages. Perform these steps on each node of the cluster.

  1. Become superuser on each node of the cluster.

  2. Is your volume manager VxVM?

    • If no, go to Step 3.

    • If yes, install VxVM and any VxVM patches and create the root disk group (rootdg) as you would for a new installation.

      • To install VxVM and encapsulate the root disk, perform the procedures in “How to Install VERITAS Volume Manager Software and Encapsulate the Root Disk” in the Sun Cluster 3.0 12/01 Software Installation Guide. To mirror the root disk, perform the procedures in “How to Mirror the Encapsulated Root Disk” in the Sun Cluster 3.0 12/01 Software Installation Guide.

      • To install VxVM and create rootdg on local, non-root disks, perform the procedures in “How to Install VERITAS Volume Manager Software Only” and in “How to Create a rootdg Disk Group on a Non-Root Disk” in the Sun Cluster 3.0 12/01 Software Installation Guide.

  3. Are you upgrading Sun Cluster HA for NFS?

    If yes, go to Step 4.

    If no, go to Step 5.

  4. Finish Sun Cluster 3.0 software upgrade and convert Sun Cluster HA for NFS configuration.

    If you are not upgrading Sun Cluster HA for NFS, perform Step 5 instead.

    1. Insert the Sun Cluster 3.0 Agents 5/02 CD-ROM into the CD‐ROM drive on a node.

      This step assumes that the volume daemon vold(1M) is running and configured to manage CD‐ROM devices.

    2. Finish the cluster software upgrade on that node.


      # scinstall -u finish -q globaldev=DIDname \
      -d /cdrom/scdataservices_3_0_u3 -s nfs
      
      -q globaldev=DIDname

      Specifies the device ID (DID) name of the quorum device

      -d /cdrom/scdataservices_3_0_u3

      Specifies the directory location of the CD‐ROM image

      -s nfs

      Specifies the Sun Cluster HA for NFS data service to configure


      Note –

      An error message similar to the following might be generated. You can safely ignore it.


      ** Installing Sun Cluster - Highly Available NFS Server **
      Skipping "SUNWscnfs" - already installed


    3. Eject the CD‐ROM.

    4. Repeat Step a through Step c on the other node.

      When completed on both nodes, cluster install mode is disabled and all quorum votes are assigned.

    5. Skip to Step 6.

  5. Finish Sun Cluster 3.0 software upgrade on each node.

    If you are upgrading Sun Cluster HA for NFS, perform Step 4 instead.


    # scinstall -u finish -q globaldev=DIDname
    
    -q globaldev=DIDname

    Specifies the device ID (DID) name of the quorum device

  6. If you are upgrading any data services other than Sun Cluster HA for NFS, configure resources for those data services as you would for a new installation.

    See the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide for procedures.

  7. If your volume manager is Solstice DiskSuite, from either node bring pre-existing disk device groups online.


    # scswitch -z -D disk-device-group -h node
    
    -z

    Performs the switch

    -D disk-device-group

    Specifies the name of the disk device group, which for Solstice DiskSuite software is the same as the diskset name

    -h node

    Specifies the name of the cluster node that serves as the primary of the disk device group

  8. From either node, bring pre-existing data service resource groups online.

    At this point, Sun Cluster 2.2 logical hosts are converted to Sun Cluster 3.0 5/02 resource groups, and the names of logical hosts are appended with the suffix -lh. For example, a logical host named lhost‐1 is upgraded to a resource group named lhost‐1‐lh. Use these converted resource group names in the following command.


    # scswitch -z -g resource-group -h node
    
    -g resource-group

    Specifies the name of the resource group to bring online

    You can use the scrgadm -p command to display a list of all resource types and resource groups in the cluster. The scrgadm -pv command displays this list with more detail.

  9. If you are using Sun Management Center to monitor your Sun Cluster configuration, install the Sun Cluster module for Sun Management Center.

    1. Ensure that you are using the most recent version of Sun Management Center.

      See your Sun Management Center documentation for installation or upgrade procedures.

    2. Follow guidelines and procedures in “Installation Requirements for Sun Cluster Monitoring” in the Sun Cluster 3.0 12/01 Software Installation Guide to install the Sun Cluster module packages.

  10. Verify that all nodes have joined the cluster.

    Go to “How to Verify Cluster Membership” in the Sun Cluster 3.0 12/01 Software Installation Guide.

Example—Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 5/02 Software – Finish Process

The following example shows the finish process of upgrading a two-node cluster from Sun Cluster 2.2 to Sun Cluster 3.0 5/02 software. The cluster node names are phys-schost-1 and phys-schost-2, the device group names are dg-schost-1 and dg-schost-2, and the data service resource group names are lh-schost-1 and lh-schost-2. The scinstall command automatically converts the Sun Cluster HA for NFS configuration.


(Finish upgrade on each node)
phys-schost-1# scinstall -u finish -q globaldev=d1 \
-d /cdrom/scdataservices_3_0_u3 -s nfs
phys-schost-2# scinstall -u finish -q globaldev=d1 \
-d /cdrom/scdataservices_3_0_u3 -s nfs
 
(Bring device groups and data service resource groups
on each node online)
phys-schost-1# scswitch -z -D dg-schost-1 -h phys-schost-1
phys-schost-1# scswitch -z -g lh-schost-1 -h phys-schost-1
phys-schost-1# scswitch -z -D dg-schost-2 -h phys-schost-2
phys-schost-1# scswitch -z -g lh-schost-2 -h phys-schost-2

Bringing a Node Out of Maintenance State

The procedure “How to Bring a Node Out of Maintenance State” in the Sun Cluster 3.0 12/01 System Administration Guide does not apply to a two-node cluster. A procedure appropriate for a two-node cluster will be evaluated for the next release.

Man Pages

scgdevs(1M) Man Page

The following paragraph clarifies behavior of the scgdevs command. This information is not currently included in the scgdevs(1M) man page.

New Information:

scgdevs(1M) called from the local node performs its work on remote nodes asynchronously. Therefore, completion of the command on the local node does not necessarily mean that the command has completed its work clusterwide.

SUNW.sap_ci(5) Man Page

SUNW.sap_as(5) Man Page

rg_properties(5) Man Page

The following new resource group property should be added to the rg_properties(5) man page.

Auto_start_on_new_cluster

This property controls whether the Resource Group Manager starts the resource group automatically when a new cluster is forming.

The default is TRUE. If set to TRUE, the Resource Group Manager attempts to start the resource group automatically to achieve Desired_primaries when all nodes of the cluster are simultaneously rebooted. If set to FALSE, the resource group does not start automatically when the cluster is rebooted.

Category: Optional     Default: TRUE     Tunable: Any time

New Error Messages

The following error messages were not included on the Sun Cluster 3.0 5/02 CD-ROM.


360600:Oracle UDLM package wrong instruction set architecture.

Description:

The Oracle UDLM package that is currently installed has the wrong instruction set architecture for the mode in which the node is currently booted (for example, the Oracle UDLM is 64-bit (sparcv9) and the node is currently booted in 32-bit mode (sparc)).

Solution:

Obtain and install the proper Oracle UDLM package from Oracle for the instruction set architecture of the system, or boot the node in an instruction set architecture that is compatible with the current version of the Oracle UDLM.


800320:Fencing %s from shared disk devices.

Description:

A reservation has been performed to fence off nonmember nodes from disks that are shared between the cluster nodes.

Solution:

None.


558777:Enabling failfast on all shared disk devices.

Description:

A reservation failfast will be set so that nodes which share these disks are brought down if they are fenced off by other nodes.

Solution:

None.


309875:Error encountered enabling failfast.

Description:

An error occurred while attempting to enable the reservation failfast on the disks that are shared by other nodes.

Solution:

This is an internal error. Save the contents of /var/adm/messages, /var/cluster/ucmm/ucmm_reconf.log, and /var/cluster/ucmm/dlm*/logs/* from all the nodes and contact your Sun service representative.