Sun Cluster 3.1 Release Notes

Sun Cluster 3.1 Release Notes

This document provides the following information for Sun™ Cluster 3.1 software.


Note –

For information about Sun Cluster 3.1 data services, refer to the Sun Cluster 3.1 Data Service 5/03 Release Notes.


What's New in Sun Cluster 3.1

This section provides information related to new features, functionality, and supported products in Sun Cluster 3.1.

New Features and Functionality

Sun Cluster Security Hardening

Sun Cluster Security Hardening uses the Solaris Operating Environment hardening techniques recommended by the Sun BluePrints™ program to achieve basic security hardening for clusters. The Solaris Security Toolkit automates the implementation of Sun Cluster Security Hardening.

The Sun Cluster Security Hardening documentation is available at http://www.sun.com/blueprints/0203/817-1079.pdf. You can also access the article from http://wwws.sun.com/software/security/blueprints. From this URL, scroll down to the Architecture heading to locate the article “Securing the Sun Cluster 3.x Software.” The documentation describes how to secure Sun Cluster 3.1 deployments in a Solaris 8 and Solaris 9 environment. The description includes the use of the Solaris Security Toolkit and other best-practice security techniques recommended by Sun security experts.

Flexible Topologies

Sun Cluster 3.1 software now supports open topologies. You are no longer limited to the storage topologies listed in the Sun Cluster 3.1 Concepts document.

Use the following guidelines to configure your cluster.

Diskless Clusters

Sun Cluster 3.1 now supports greater than three-node cluster configurations without shared storage devices. Two-node clusters are still required to have a shared storage device to maintain quorum. This storage device does not need to perform any other function.

Support for Data Service Project Configuration

Data services may now be configured to launch under a Solaris project name when brought online using the RGM. For detailed information about planning project configuration for your data service, see the “Data Service Project Configuration” section in “Key Concepts – Administration and Application Development” in the Sun Cluster 3.1 Concepts Guide.

Support for the Solaris Implementation of Internet Protocol (IP) Network Multipathing on Public Networks

For more information on the support for the Solaris implementation of IP network multipathing on public networks, see “Planning the Sun Cluster Configuration” in Sun Cluster 3.1 Software Installation Guide and “Administering the Public Network” in Sun Cluster 3.1 System Administration Guide.

Set Secondary Nodes for a Disk Device Group

For more information on how to set a desired number of secondary nodes for a disk device group, see “Administering Disk Device Groups” in Sun Cluster 3.1 System Administration Guide (refer to the procedures for Setting the Desired Number of Secondaries and Changing Disk Device Group Properties). Additional information can also be found in “Cluster Administration and Application Development” in Sun Cluster 3.1 Concepts Guide (See the section on Multi-Ported Disk Failover).

Data Services

For information on data services enhancements, see “What's New in Sun Cluster 3.1 Data Services 5/03” in Sun Cluster 3.1 Data Service 5/03 Release Notes.

Supported Products

This section describes the supported software and memory requirements for Sun Cluster 3.1 software.

Restrictions

The following restrictions apply to the Sun Cluster 3.1 release:

Service and Application Restrictions

Hardware Restrictions

Volume Manager Restrictions

Cluster File System Restrictions

VxFS Restrictions

Internet Protocol (IP) Network Multipathing Restrictions

This section identifies any restrictions on using IP Network Multipathing that apply only in a Sun Cluster 3.1 environment, or are different than information provided in the Solaris documentation for IP Network Multipathing.

Most procedures, guidelines, and restrictions identified in the Solaris documentation for IP Network Multipathing are the same in a cluster or non-cluster environment. Therefore, see the appropriate Solaris document for additional information about IP Network Multipathing restrictions.

For instructions, see the documentation for your operating environment release:

Solaris 8 operating environment: IP Network Multipathing Administration Guide

Solaris 9 operating environment: “IP Network Multipathing Topics” in System Administration Guide: IP Services

Data Service Restrictions

There are no restrictions that apply to all data services. For information about restrictions for specific data services, see Sun Cluster 3.1 Data Service 5/03 Release Notes.

Running Sun Cluster HA for Oracle 3.0 on Sun Cluster 3.1

The Sun Cluster HA for Oracle 3.0 data service can run on Sun Cluster 3.1 only when used with the following versions of the Solaris operating environment:

Known Issues and Bugs

The following known issues and bugs affect the operation of the Sun Cluster 3.1 release. For the most current information, see the online Sun Cluster 3.1 Release Notes Supplement at http://docs.sun.com.

Incorrect Largefile Status (4419214)

Problem Summary: The /etc/mnttab file does not show the most current largefile status of a globally mounted VxFS filesystem.

Workaround: Use the fsadm command to verify the filesystem largefile status, instead of the /etc/mnttab entry.
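For example (the mount point /global/vxfs1 is hypothetical), running fsadm against the mounted VxFS file system reports whether large files are enabled:

# fsadm /global/vxfs1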

Global VxFS File System Lists Block Allocations Differently Than Local VxFS (4449437)

Problem Summary: For a given file size, global VxFS file system appears to allocate more disk blocks than the local VxFS file system.

Workaround: Unmounting and mounting the filesystem eliminates the extra disk blocks that were reported as allocated to the given file.
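For example, assuming the file system has an entry in /etc/vfstab and uses the hypothetical mount point /global/vxfs1:

# umount /global/vxfs1
# mount /global/vxfs1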

Nodes Unable to Bring up qfe Paths (4526883)

Problem Summary: Sometimes, private interconnect transport paths ending at a qfe adapter fail to come online.

Workaround: Follow the steps shown below:

  1. Using scstat -W, identify the adapter that is at fault. The output will show all transport paths with that adapter as one of the path endpoints in the faulted or the waiting states.

  2. Use scsetup to remove from the cluster configuration all the cables connected to that adapter.

  3. Use scsetup again to remove that adapter from the cluster configuration.

  4. Add back the adapter and the cables.

  5. Verify if the paths appear. If the problem persists, repeat steps 1–5 a few times.

  6. Verify if the paths appear. If the problem still persists, reboot the node with the at-fault adapter. Before the node is rebooted, make sure that the remaining cluster has enough quorum votes to survive the node reboot.

File Blocks Not Updated Following Writes to Sparse File Holes (4607142)

Problem Summary: A file's block count is not always consistent across cluster nodes following block-allocating write operations within a sparse file. For a cluster file system layered on UFS (or VxFS 3.4), the block inconsistency across cluster nodes disappears within 30 seconds or so.

Workaround: File metadata operations which update the inode (touch, etc.) should synchronize the st_blocks value so that subsequent metadata operations will ensure consistent st_blocks values.
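For example, a metadata update such as touch on the affected file (the path below is hypothetical) synchronizes the block count, which can then be checked with ls -s:

# touch /global/data/sparsefile
# ls -ls /global/data/sparsefile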

Concurrent use of forcedirectio and mmap(2) may Cause Panics (4629536)

Problem Summary: Using the forcedirectio mount option and the mmap(2) function concurrently might cause data corruption, system hangs, or panics.

Workaround: Observe the following restrictions:

If there is a need to use directio, mount the whole file system with directio options.
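As a sketch of that restriction, the following command mounts an entire cluster file system with the forcedirectio option; the device and mount point are hypothetical:

# mount -F ufs -o global,forcedirectio /dev/global/dsk/d10s0 /global/appdata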

Unmounting of a Cluster File System Fails (4656624)

Problem Summary: The unmounting of a cluster file system fails sometimes even though the fuser command shows that there are no users on any node.

Workaround: Retry the unmounting after all asynchronous I/O to the underlying file system has been completed.

Rebooting Puts Cluster Nodes in a Non–Working State (4664510)

Problem Summary: After powering off one of the Sun StorEdge T3 Arrays and running scshutdown, rebooting both nodes puts the cluster in a non-working state.

Workaround: If half the replicas are lost, perform the following steps:

  1. Ensure the cluster is in cluster mode.

  2. Forcibly import the diskset.


    # metaset -s set-name -f -C take
    
  3. Delete the broken replicas.


    # metadb -s set-name -fd /dev/did/dsk/dNsX
    
  4. Release the diskset.


    # metaset -s set-name -C release
    

    The file system can now be mounted and used. However, the redundancy in the replicas has not been restored. If the other half of replicas is lost, then there will be no way to restore the mirror to a sane state.

  5. Recreate the databases after the above repair procedure is applied.

Dissociating a Plex from a Disk Group Causes Panic (4657088)

Problem Summary: Dissociating or detaching a plex from a disk group under Sun Cluster may panic the cluster node with following panic string:

panic[cpu2]/thread=30002901460: BAD TRAP: type=31 rp=2a101b1d200 addr=40 mmu_fsr=0 occurred in module "vxfs" due to a NULL pointer dereference

Workaround: Before dissociating or detaching a plex from a disk group, unmount the corresponding file system.
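For example, assuming a file system mounted at /global/oradata on a volume in disk group oradg with a plex named vol01-02 (all names hypothetical), unmount before dissociating:

# umount /global/oradata
# vxplex -g oradg dis vol01-02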

scvxinstall -i Fails to Install a License Key (4706175)

Problem Summary: The scvxinstall -i command accepts a license key with the -L option. However, the key is ignored and does not get installed.

Workaround: Do not provide a license key with the -i form of scvxinstall. The key will not be installed. The license keys should be installed with the interactive form or with the -e option. Before proceeding with the encapsulation of root, examine the license requirements and provide the desired keys either with the -e option or in the interactive form.

Sun Cluster HA–Siebel Fails to Monitor Siebel Components (4722288)

Problem Summary: The Sun Cluster HA-Siebel agent does not monitor individual Siebel components. If failure of a Siebel component is detected, only a warning message is logged in syslog.

Workaround: Restart the Siebel server resource group in which components are offline by using the command scswitch -R -h node -g resource_group.

The remove Script Fails to Unregister SUNW.gds Resource Type (4727699)

Problem Summary: The remove script fails to unregister SUNW.gds resource type and displays the following message:


Resource type has been un-registered already.

Workaround: After using the remove script, manually unregister SUNW.gds. Alternatively, use the scsetup command or the SunPlex Manager.

Create IPMP Group Option Overwrites hostname.int (4731768)

Problem Summary: The Create IPMP group option in SunPlex Manager overwrites the existing /etc/hostname.int file if it is used with an adapter that is already configured with an IP address.

Workaround: The Create IPMP group option in SunPlex Manager must be used only with adapters that are not already configured. If an adapter is already configured with an IP address, the adapter should be manually configured using Solaris IPMP management tools.

Using the Solaris shutdown Command May Result in Node Panic (4745648)

Problem Summary: Using the Solaris shutdown command or similar commands (for example, uadmin) to bring down a cluster node may result in node panic and display the following message:

CMM: Shutdown timer expired. Halting.

Workaround: Contact your Sun service representative for support. The panic is necessary to provide a guaranteed safe way for another node in the cluster to take over the services that were being hosted by the shutting-down node.

Administrative Command to Add a Quorum Device to the Cluster Fails (4746088)

Problem Summary: If a cluster has the minimum votes required for quorum, an administrative command to add a quorum device to the cluster fails with the following message: Cluster could lose quorum.

Workaround: Contact your Sun service representative for support.

Path Timeouts When Using ce Adapters on the Private Interconnect (4746175)

Problem Summary: Clusters using ce adapters on the private interconnect may notice path timeouts and subsequent node panics if one or more cluster nodes have more than four processors.

Workaround: Set the ce_taskq_disable parameter in the ce driver by adding set ce:ce_taskq_disable=1 to /etc/system file on all cluster nodes and then rebooting the cluster nodes. This ensures that heartbeats (and other packets) are always delivered in the interrupt context, eliminating path timeouts and the subsequent node panics. Quorum considerations should be observed while rebooting cluster nodes.
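A sketch of that change on one node follows; repeat it on each cluster node, rebooting one node at a time so that quorum is maintained:

# echo "set ce:ce_taskq_disable=1" >> /etc/system
# init 6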

Siebel Gateway Probe May Time Out When a Public Network Fails (4764204)

Problem Summary: Failure of a public network may cause the Siebel gateway probe to time out and eventually cause the Siebel gateway resource to go offline. This may occur if the node on which the Siebel gateway is running has a path beginning with /home that depends on network resources such as NFS and NIS. Without the public network, the Siebel gateway probe hangs while trying to open a file on /home, causing the probe to time out.

Workaround: Complete the following steps for all nodes of the cluster which can host the Siebel gateway.

  1. Ensure that the passwd, group, and project entries in /etc/nsswitch.conf refer only to files and not to nis.

  2. Ensure that there are no NFS or NIS dependencies for any path starting with /home.

    You may either have a locally mounted /home path or rename the /home mount point to /export/home or some other name that does not start with /home.

  3. In the /etc/auto_master file, comment out the line containing the entry +auto_master. Also comment out any /home entries using auto_home.

  4. In /etc/auto_home, comment out the line containing +auto_home (see the example after this list).
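The following sketch shows how the default automounter entries would look after steps 3 and 4, assuming the standard Solaris defaults are present; your files may contain different entries.

In /etc/auto_master:

#+auto_master
#/home          auto_home       -nobrowse

In /etc/auto_home:

#+auto_home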

Flushing Gateway Routes Breaks Per–Node Logical IP Communication (4766076)

Problem Summary: To provide highly available, per-node, logical IP communication over a private interconnect, Sun Cluster software relies on gateway routes on the cluster nodes. Flushing the gateway routes will break the per-node logical IP communication.

Workaround: Reboot the cluster nodes where the routes were inadvertently flushed. To restore the gateway routes, it is sufficient to reboot the cluster nodes one at a time. Per-node logical IP communication will remain broken until the routes have been restored. Quorum considerations must be observed while rebooting cluster nodes.

Unsuccessful Failover Results in Error (4766781)

Problem Summary: An unsuccessful failover/switchover of a file system might leave the file system in an errored state.

Workaround: Unmount and remount the file system.

Enabling TCP Selective Acknowledgments May Cause Data Corruption (4775631)

Problem Summary: Enabling TCP selective acknowledgments on cluster nodes may cause data corruption.

Workaround: No user action is required. To avoid causing data corruption on the global file system, do not reenable TCP selective acknowledgments on cluster nodes.

scinstall Incorrectly Shows Some Data Services as Unsupported (4776411)

Problem Summary: scinstall incorrectly shows that the Sun Cluster HA for SAP and Sun Cluster HA for SAP liveCache data services are not supported on Solaris 9.

Workaround: Solaris 8 and 9 support both Sun Cluster HA for SAP and Sun Cluster HA for SAP liveCache; ignore the unsupported feature list in scinstall.

scdidadm Exits With an Error if /dev/rmt is Missing (4783135)

Problem Summary: The current implementation of scdidadm(1M) relies on the existence of both /dev/rmt and /dev/(r)dsk to successfully execute scdidadm -r. Solaris installs both, regardless of the existence of the actual underlying storage devices. If /dev/rmt is missing, scdidadm -r exits with the following error:

Cannot walk /dev/rmt

Workaround: On any node where /dev/rmt is missing, use mkdir to create a /dev/rmt directory. Then, run scgdevs from one node.
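For example (a sketch of this workaround), on each node where the directory is missing:

# mkdir /dev/rmt

Then, from one node only:

# scgdevs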

Data Corruption When Node Failure Causes the Cluster File System Primary to Die (4804964)

Problem Summary: Data corruption may occur with Sun Cluster 3.x systems running patches 113454-04, 113073-02, and 113276-02 (or a subset of these patches). The problem occurs only with globally mounted UFS file systems. The data corruption results in missing data (that is, you will see zeros where data should exist), and the amount of missing data is always a multiple of a disk block. The data loss can occur any time a node failure causes the cluster file system primary to die soon after the cluster file system client completes (or reports that it has just completed) a write operation. The period of vulnerability is limited and does not occur every time.

Workaround: Use the -o syncdir mount option to force UFS to use synchronous UFS log transactions.
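A sketch of a corresponding /etc/vfstab entry for a globally mounted UFS file system follows; the device paths and mount point are hypothetical, and logging is shown only as a typical companion option:

/dev/global/dsk/d10s0 /dev/global/rdsk/d10s0 /global/data ufs 2 yes global,logging,syncdir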

Node Hangs After Rebooting When Switchover is in Progress (4806621)

Problem Summary: If a device group switchover is in progress when a node joins the cluster, the joining node and the switchover operation may hang. Any attempts to access any device service will also hang. This is more likely to happen on a cluster with more than two nodes and if the file system mounted on the device is a VxFS file system.

Workaround: To avoid this situation, do not initiate device group switchovers while a node is joining the cluster. If this situation occurs, then all the cluster nodes must be rebooted to restore access to device groups.

File System Panics When Cluster File System is Full (4808748)

Problem Summary: When a cluster file system is full, there are instances where the file system might panic with one of the following messages:

assertion failed: cur_data_token & PXFS_WRITE_TOKEN or PXFS_READ_TOKEN

vp->v_pages == NULL

These panics are intended to prevent data corruption when a file system is full.

Workaround: To reduce the likelihood of this problem, use a cluster file system with UFS whenever possible. It is extremely rare for one of these panics to occur when using a cluster file system with UFS, but the risk is greater when using a cluster file system with VxFS.

Cluster Node Hangs While Booting Up (4809076)

Problem Summary: When a device service switchover request, using scswitch -z -D <device-group> -h <node>, is concurrent with a node reboot and there are global file systems configured on the device service, the global file systems might become unavailable and subsequent configuration changes involving any device service or global file system may also hang. Additionally, subsequent cluster node joins might hang.

Workaround: Recovery requires a reboot of all the cluster nodes.

Removing a Quorum Device Using scconf -rq Causes Cluster Panic (4811232)

Problem Summary: If you execute the scconf -rq command to remove a quorum device in a vulnerable configuration, all nodes of the cluster will panic with the message CMM lost operational quorum.

Workaround: To remove a quorum device from a cluster, first check the output of scstat -q. If the quorum device is listed as having more than one vote in the Present column, then the device should first be put into maintenance mode using scconf -cq globaldev=QD,maintstate. After the command completes and the quorum device is shown in scstat -q as having 0 votes present, the device can be removed using scconf -rq.
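A condensed sketch of that sequence, using the hypothetical quorum device d12:

# scstat -q
# scconf -cq globaldev=d12,maintstate
# scstat -q
# scconf -rq globaldev=d12

Run the second scstat -q to confirm that the quorum device shows 0 votes present before removing it.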

Mirrored Volume Fails When Using O_EXCL Flag (4820273)

Problem Summary: If Solstice DiskSuite/Solaris Volume Manager is being used and a mirrored volume is opened with the O_EXCL flag, the failover of the device group containing this volume will fail. This will panic the new device group primary when the volume is first accessed after the failover.

Workaround: When using Solstice DiskSuite/Solaris Volume Manager, do not open mirrored volumes with the O_EXCL flag.

Cluster Hangs After a Node is Rebooted During Switchover (4823195)

Problem Summary: If a device service failover request is concurrent with a node reboot or a node join, and there are cluster file systems configured on the device service, the cluster file systems might become unavailable and subsequent configuration changes involving any device service or cluster file system may also hang. Additionally, subsequent cluster node joins might hang.

Workaround: Recovery requires a reboot of all the cluster nodes.

Untranslated Text in the French Locale (4840085)

Problem Summary: Some untranslated text appears when using the SunPlex Manager to install Sun Cluster in the French locale.

Workaround: This error does not affect SunPlex Manager's functionality. You may either ignore the untranslated text or set your browser's language to English to avoid mixed translation.

Patches and Required Firmware Levels

This section provides information about patches for Sun Cluster configurations.


Note –

You must be a registered SunSolve™ user to view and download the required patches for the Sun Cluster product. If you do not have a SunSolve account, contact your Sun service representative or sales engineer, or register online at http://sunsolve.sun.com.


PatchPro

PatchPro is a patch-management tool designed to ease the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides a Sun Cluster-specific Interactive Mode tool to make the installation of patches easier and an Expert Mode tool to maintain your configuration with the latest set of patches. Expert Mode is especially useful for those who want to get all of the latest patches, not just the high availability and security patches.

To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click on “Sun Cluster,” then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

SunSolve Online

The SunSolve™ Online Web site provides 24-hour access to the most up-to-date information regarding patches, software, and firmware for Sun products. Access the SunSolve Online site at http://sunsolve.sun.com for the most current matrixes of supported software, firmware, and patch revisions.

You can find Sun Cluster 3.1 patch information by using the Info Docs. To view the Info Docs, log on to SunSolve and access the Simple search selection from the top of the main page. From the Simple Search page, click on the Info Docs box and type Sun Cluster 3.1 in the search criteria box. This will bring up the Info Docs page for Sun Cluster 3.1 software.

Before you install Sun Cluster 3.1 software and apply patches to a cluster component (Solaris operating environment, Sun Cluster software, volume manager or data services software, or disk hardware), review the Info Docs and any README files that accompany the patches. All cluster nodes must have the same patch level for proper cluster operation.
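For example, you can list the installed patches on each node with the standard Solaris showrev command and compare the output across nodes:

# showrev -p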

For specific patch procedures and tips on administering patches, see the Sun Cluster 3.1 System Administration Guide.

End-of-Feature-Support Statements

Public Network Management (PNM)

Public Network Management (PNM) is not supported in Sun Cluster 3.1 software. Network adapter monitoring and failover for Sun Cluster software are instead performed by the Solaris implementation of Internet Protocol (IP) Network Multipathing. See What's New in Sun Cluster 3.1.

HAStorage

HAStorage might not be supported in a future release of Sun Cluster software. Near-equivalent functionality is supported by HAStoragePlus. Complete one of the following procedures to migrate from HAStorage to HAStoragePlus.

How to Upgrade from HAStorage to HAStoragePlus When Using Device Groups or Cluster File Systems

HAStorage might not be supported in a future release of Sun Cluster software. Equivalent functionality is supported by HAStoragePlus. To upgrade from HAStorage to HAStoragePlus when you use cluster file systems or device groups, complete the following steps.

The following example uses a simple HA-NFS resource active with HAStorage. The ServicePaths property is set to the disk group nfsdg, and the AffinityOn property is TRUE. Furthermore, the HA-NFS service has Resource_Dependencies set to the HAStorage resource.

  1. Remove the dependencies that the application resources have on the HAStorage resource.


    # scrgadm -c -j nfsserver-rs -y Resource_Dependencies=""
    
  2. Disable the HAStorage resource.


    # scswitch -n -j nfs1storage-rs
    
  3. Remove the HAStorage resource from the application resource group.


    # scrgadm -r -j nfs1storage-rs
    
  4. Unregister the HAStorage resource type.


    # scrgadm -r -t SUNW.HAStorage
    
  5. Register the HAStoragePlus resource type.


    # scrgadm -a -t SUNW.HAStoragePlus
    
  6. Create the HAStoragePlus resource.

    To specify a file-system mount point, input the following text.


    # scrgadm -a -j nfs1-hastp-rs -g nfs1-rg -t \
    SUNW.HAStoragePlus -x FilesystemMountPoints=/global/nfsdata -x \
    AffinityOn=True
    

    To specify global device paths, input the following text.


    # scrgadm -a -j nfs1-hastp-rs -g nfs1-rg -t \
    SUNW.HAStoragePlus -x GlobalDevicePaths=nfsdg -x AffinityOn=True
    

    Note –

    Instead of using the ServicePaths property for HAStorage, you must use the GlobalDevicePaths or FilesystemMountPoints property for HAStoragePlus. The FilesystemMountPoints extension property must match the sequence specified in the /etc/vfstab file.


  7. Enable the HAStoragePlus resource.


    # scswitch -e -j nfs1-hastp-rs
    
  8. Set up the dependencies between the application server and HAStoragePlus.


    # scrgadm -c -j nfsserver-rs -y \
    Resource_Dependencies=nfs1-hastp-rs
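
    As an optional check that is not part of the documented procedure, you can verify the resulting resource group and resource states:

    # scstat -g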
    

How to Upgrade from HAStorage with Cluster File Systems to HAStoragePlus with Failover Filesystem

HAStorage might not be supported in a future release of Sun Cluster. Equivalent functionality is supported by HAStoragePlus. To upgrade from HAStorage to HAStoragePlus when using Failover Filesystem (FFS), complete the following steps.

The following example uses a simple NFS service active with HAStorage. The ServicePaths property is set to the disk group nfsdg, and the AffinityOn property is TRUE. Furthermore, the HA-NFS service has Resource_Dependencies set to the HAStorage resource.

  1. Remove the dependencies that the application resource has on the HAStorage resource.


    # scrgadm -c -j nfsserver-rs -y Resource_Dependencies=""
  2. Disable the HAStorage resource.


    # scswitch -n -j nfs1storage-rs
    
  3. Remove the HAStorage resource from the application resource group.


    # scrgadm -r -j nfs1storage-rs
    
  4. Unregister the HAStorage resource type.


    # scrgadm -r -t SUNW.HAStorage
    
  5. Modify the /etc/vfstab file to remove the global flag and change mount at boot to no. This should be done on all nodes that are potential primaries for the resource group.

  6. Register the HAStoragePlus resource type.


    # scrgadm -a -t SUNW.HAStoragePlus
    
  7. Create the HAStoragePlus resource.

    To specify a file-system mount point, input the following text.


    # scrgadm -a -j nfs1-hastp-rs -g nfs1-rg -t \
    SUNW.HAStoragePlus -x FilesystemMountPoints=/global/nfsdata -x \
    AffinityOn=True
    

    To specify global device paths, input the following text.


    # scrgadm -a -j nfs1-hastp-rs -g nfs1-rg -t \
    SUNW.HAStoragePlus -x GlobalDevicePaths=nfsdg -x AffinityOn=True
    

    Note –

    Instead of using the ServicePaths property for HAStorage, you must use the GlobalDevicePaths or FilesystemMountPoints property for HAStoragePlus. The FilesystemMountPoints extension property must match the sequence specified in the /etc/vfstab file.


  8. Switch the application resource group offline.


    # scswitch -F -g nfs1-rg
    
  9. Disable the application resource.


    # scswitch -n -j nfsserver-rs
    
  10. Unmount the CFS file systems.

  11. Enable the HAStoragePlus resource.


    # scswitch -e -j nfs1-hastp-rs
    
  12. Bring the application resource group online on a given host.


    # scswitch -z -g nfs1-rg -h hostname
    
  13. Set up the dependencies between the application resource and HAStoragePlus.


    # scrgadm -c -j nfsserver-rs -y \
    Resource_Dependencies=nfs1-hastp-rs
    
  14. Enable the application resource.


    # scswitch -e -j nfsserver-rs
    

Sun Cluster 3.1 Software Localization

Localization is available for selected Sun Cluster software components in the following languages:

French: Installation, Cluster Control Panel (CCP), Sun Cluster Software, Sun Cluster Data Services, Sun Cluster module for Sun Management Center, SunPlex Manager

Japanese: Installation, Cluster Control Panel (CCP), Sun Cluster Software, Sun Cluster Data Services, Sun Cluster module for Sun Management Center, SunPlex Manager, Sun Cluster man pages, Cluster Control Panel man pages, Sun Cluster Data Service messages man pages

Simplified Chinese: Sun Cluster module for Sun Management Center, SunPlex Manager

Traditional Chinese: Sun Cluster module for Sun Management Center (online help only), SunPlex Manager (online help only)

Korean: Sun Cluster module for Sun Management Center (online help only), SunPlex Manager (online help only)

The following sections provide instructions on how to install the localization packages for various Sun Cluster components:

Cluster Control Panel (CCP)

To use the localized Cluster Control Panel (CCP), you must first install the following packages on your administrative console by using the pkgadd(1M) command.

French: SUNWfccon (French Sun Cluster Console)

Japanese: SUNWjccon (Japanese Sun Cluster Console)

Simplified Chinese: SUNWcccon (Simplified Chinese Sun Cluster Console)
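For example, assuming your current directory contains the localization packages, the French console package might be added as follows:

# pkgadd -d . SUNWfccon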

Installation Tools

To use the localized scinstall(1M) utility to install Sun Cluster 3.1 software, install the following packages on the cluster nodes by using the pkgadd(1M) command before you run scinstall.

French: SUNWfsc (French Sun Cluster messages)

Japanese: SUNWjsc (Japanese Sun Cluster messages) and SUNWjscman (Japanese Sun Cluster man pages)

To use the localized SunPlex Manager to install Sun Cluster 3.1 software, see SunPlex Manager for more information.

SunPlex Manager

To use the localized SunPlex Manager, the following packages are required on the cluster nodes.

French: SUNWfsc (French Sun Cluster messages) and SUNWfscvw (French SunPlex Manager online help)

Japanese: SUNWjsc (Japanese Sun Cluster messages) and SUNWjscvw (Japanese SunPlex Manager online help)

Simplified Chinese: SUNWcsc (Simplified Chinese Sun Cluster messages) and SUNWcscvw (Simplified Chinese SunPlex Manager online help)

Traditional Chinese: SUNWhscvw (Traditional Chinese SunPlex Manager online help)

Korean: SUNWkscvw (Korean SunPlex Manager online help)

After you install the localized SunPlex Manager packages, set your browser's language preference. If you are using Netscape, you can check and change browser languages by performing the following steps:

  1. Start Netscape.

  2. Select Edit > Preferences from the main menu.

  3. Select Navigator > Languages from the Preferences dialog box.

  4. Click Add, then select the language you want from the Add Language dialog box.

  5. Click OK.

Sun Cluster Module for Sun Management Center

To use the localized Sun Cluster module for Sun Management Center, install the following packages to the Sun Management Center server layer by using the pkgadd(1M) command.

French: SUNWfscssv (French Sun Cluster SyMON server add-on)

Japanese: SUNWjscssv (Japanese Sun Cluster SyMON server add-on)

Simplified Chinese: SUNWcscssv (Simplified Chinese Sun Cluster SyMON server add-on)

To use the localized online help on the Sun Cluster module for Sun Management Center, install the following packages to the Sun Management Center console layer by using the pkgadd(1M) command.

French: SUNWfscshl (French Sun Cluster SyMON modules)

Japanese: SUNWjscshl (Japanese Sun Cluster SyMON modules)

Simplified Chinese: SUNWcscshl (Simplified Chinese Sun Cluster SyMON modules)

Traditional Chinese: SUNWhscshl (Traditional Chinese Sun Cluster SyMON modules)

Korean: SUNWkscshl (Korean Sun Cluster SyMON modules)

Sun Cluster Software

The following Sun Cluster localization packages will be automatically installed on the cluster node when you install or upgrade to Sun Cluster 3.1.

French: SUNWfsc (French Sun Cluster messages)

Japanese: SUNWjsc (Japanese Sun Cluster messages) and SUNWjscman (Japanese Sun Cluster man pages)

Simplified Chinese: SUNWcsc (Simplified Chinese Sun Cluster messages)

Sun Cluster Data Services

When you install or upgrade to Sun Cluster 3.1, the localization packages are automatically installed for the data services you have selected. For more information, see the Sun Cluster 3.1 Data Service 5/03 Release Notes.

Sun Cluster 3.1 Documentation

The complete Sun Cluster 3.1 user documentation set is available in PDF and HTML format on both the Sun Cluster 3.1 CD-ROM and the Sun Cluster 3.1 Agents CD-ROM. AnswerBook2™ server software is not needed to read Sun Cluster 3.1 documentation. See the index.html file at the top level of either CD-ROM for more information. This index.html file enables you to read the PDF and HTML manuals directly from the disc and to access instructions to install the documentation packages.


Note –

The SUNWsdocs package must be installed before you install any Sun Cluster documentation packages. You can use pkgadd to install the SUNWsdocs package. The SUNWsdocs package is located in the SunCluster_3.1/Sol_N/Packages/ directory of the Sun Cluster 3.1 CD-ROM, where N is either 8 for Solaris 8 or 9 for Solaris 9. The SUNWsdocs package is also automatically installed when you run the installer from the Solaris 9 Documentation CD.


The Sun Cluster 3.1 documentation set consists of the following collections:

In addition, the docs.sun.com℠ Web site enables you to access Sun Cluster documentation on the Web. You can browse the docs.sun.com archive or search for a specific book title or subject at the following Web site:

http://docs.sun.com

Documentation Issues

This section discusses known errors or omissions for documentation, online help, or man pages and steps to correct these problems.

Software Installation Guide

This section discusses known errors or omissions from the Sun Cluster 3.1 Software Installation Guide.

Quorum-Device Connection

In the Sun Cluster 3.1 Software Installation Guide, the following statement about quorum devices is incorrect:

Connection - Do not connect a quorum device to more than two nodes.

The statement should instead read as follows:

Connection – You must connect a quorum device to at least two nodes.

Node Authentication For scvxinstall Is Not Required

When you use the scvxinstall command to install VERITAS Volume Manager (VxVM), it is no longer necessary to first add the node to the cluster node authentication list. When you perform the procedures in “How to Install VERITAS Volume Manager Software and Encapsulate the Root Disk” or “How to Install VERITAS Volume Manager Software Only”, ignore Step 3, “Add all nodes in the cluster to the cluster node authentication list.”

Upgrade Procedure Refers to Unavailable scsetup Functionality

In “How to Prepare the Cluster for Upgrade” in Sun Cluster 3.1 Software Installation Guide, the procedure states that, if you are upgrading from Sun Cluster 3.0 5/02 software, you can use the scsetup utility to disable resources rather than use the scswitch command. This statement is incorrect and should be ignored.

SunPlex Manager Online Help

A note that appears in the Oracle data service installation procedure is incorrect.

Incorrect:

Note: If no entries exist for the shmsys and semsys variables in the /etc/system file when SunPlex Manager packages are installed, default values for these variables are automatically put in the /etc/system file. The system must then be rebooted. Check Oracle installation documentation to verify that these values are appropriate for your database.

Correct:

Note: If no entries exist for the shmsys and semsys variables in the /etc/system file when you install the Oracle data service, default values for these variables can be automatically put in the /etc/system file. The system must then be rebooted. Check Oracle installation documentation to verify that these values are appropriate for your database.

System Administration Guide

This section discusses errors and omissions from the Sun Cluster 3.1 System Administration Guide.

Simple Root Disk Groups With VERITAS Volume Manager

Simple root disk groups are not supported as disk types with VERITAS Volume Manager on Sun Cluster software. As a result, if you perform the procedure “How to Restore a Non-Encapsulated root (/) File System (VERITAS Volume Manager)” in the Sun Cluster 3.1 System Administration Guide, you should ignore Step 9, which asks you to determine if the root disk group (rootdg) is on a single slice on the root disk. You would complete Step 1 through Step 8, skip Step 9, and proceed with Step 10 to the end of the procedure.

Changing the Number of Node Attachments to a Quorum Device

When increasing or decreasing the number of node attachments to a quorum device, the quorum vote count is not automatically recalculated. You can re-establish the correct quorum vote if you remove all quorum devices and then add them back into the configuration.
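For example, assuming a single quorum device d12 (a hypothetical name), and subject to the quorum considerations described in Known Issues and Bugs, the device could be removed and added back as follows:

# scconf -rq globaldev=d12
# scconf -aq globaldev=d12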

Data Services Collection

Errors and omissions related to the Data Services documentation are described in the Sun Cluster 3.1 Data Service 5/03 Release Notes.

Man Pages

Sun Cluster 3.0 Data Service Man Pages

To display Sun Cluster 3.0 data service man pages, install the latest patches for the Sun Cluster 3.0 data services that you installed on Sun Cluster 3.1 software. See Patches and Required Firmware Levels for more information.

After you have applied the patch, access the Sun Cluster 3.0 data service man pages by issuing the man -M command with the full man page path as the argument. The following example opens the Apache man page.


% man -M /opt/SUNWscapc/man SUNW.apache

Consider exporting your MANPATH to enable access to Sun Cluster 3.0 data service man pages without specifying the full path. The following example describes command input for adding the Apache man page path to your MANPATH and displaying the Apache man page.


% MANPATH=/opt/SUNWscapc/man:$MANPATH; export MANPATH
% man SUNW.apache

scconf_transp_adap_wrsm(1M)

The following scconf_transp_adap_wrsm(1M) man page replaces the existing scconf_transp_adap_wrsm(1M) man page.

NAME

scconf_transp_adap_wrsm.1m - configure the wrsm transport adapter

DESCRIPTION

wrsm adapters may be configured as cluster transport adapters. These adapters can be used only with the dlpi transport type.

The wrsm adapter connects to a transport junction or to another wrsm adapter on a different node. In either case, the connection is made through a transport cable.

Although you can connect the wrsm adapters directly by using a point-to-point configuration, Sun Cluster software requires that you specify a transport junction, which in this case is a virtual transport junction. For example, if node1:wrsm1 is connected to node2:wrsm1 directly through a cable, you must specify the following configuration information.


node1:wrsm1 <--cable1--> Transport Junction sw_wrsm1 <--cable2--> node2:wrsm1

The transport junction, whether a virtual switch or a hardware switch, must have a specific name. The name must be sw_wrsmN where the adapter is wrsmN. This requirement reflects a Wildcat restriction that requires that all wrsm controllers on the same Wildcat network have the same instance number.

When a transport junction is used and the endpoints of the transport cable are configured using scconf, scinstall, or other tools, you are asked to specify a port name on the transport junction. You can provide any port name, or accept the default, as long as the name is unique for the transport junction.

The default sets the port name to the node ID that hosts the adapter at the other end of the cable.

Refer to scconf(1M) for more configuration details.

There are no user-configurable properties for cluster transport adapters of this type.

SEE ALSO

scconf(1M), scinstall(1M), wrsmconf(1M), wrsmstat(1M), wrsm(7D), wrsmd(7D)

scconf_transp_adap_sci(1M)

The scconf_transp_adap_sci(1M) man page states that SCI transport adapters can be used with the rsm transport type. This support statement is incorrect. SCI transport adapters do not support the rsm transport type. SCI transport adapters support the dlpi transport type only.

scconf_transp_adap_sci(1M)

The following sentence clarifies the name of an SCI–PCI adapter. This information is not currently included in the scconf_transp_adap_sci(1M) man page.

New Information:

Use the name sciN to specify an SCI adapter.

scgdevs(1M)

The following paragraph clarifies behavior of the scgdevs command. This information is not currently included in the scgdevs(1M) man page.

New Information:

scgdevs(1M) called from the local node performs its work on remote nodes asynchronously. Therefore, command completion on the local node does not necessarily mean that the command has completed its work cluster-wide.

SUNW.sap_ci(5)

SUNW.sap_as(5)

rg_properties(5)

The following new resource group property should be added to the rg_properties(5) man page.

Auto_start_on_new_cluster

This property controls whether the Resource Group Manager starts the resource group automatically when a new cluster is forming.

The default is TRUE. If set to TRUE, the Resource Group Manager attempts to start the resource group automatically to achieve Desired_primaries when all nodes of the cluster are simultaneously rebooted. If set to FALSE, the resource group does not start automatically when the cluster is rebooted; it remains offline until the first time it is manually switched online by using scswitch(1M). After that, it resumes normal failover behavior.

Category: Optional
Default: TRUE
Tunable: Any time
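For example, the property could be set on an existing resource group (the group name nfs1-rg is hypothetical):

# scrgadm -c -g nfs1-rg -y Auto_start_on_new_cluster=False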

rt_properties(5)

In this release, the current API_version has been incremented to 3 from its previous value of 2. To prevent a resource type from registering on an earlier version of Sun Cluster software, declare API_version=3. For more information, see rt_reg(4) and rt_properties(5).