This document provides the following information for Sun™ Cluster 3.1 4/04 software.
This section provides information related to new features, functionality, and supported products in Sun Cluster 3.1 software.
Sun Cluster is now available for use on the Solaris™ Operating System (x86 Platform Edition). You can now use Sun Cluster 3.1 4/04 software on a Sun Fire™ V65x server that is running Update 6 of the Solaris 9 Operating System (x86 Platform Edition).
The following resource types are enhanced in Sun Cluster 3.1:
SUNW.oracle_listener (see Sun Cluster Data Service for Oracle Guide for Solaris OS)
SUNW.sap_xserver (see Sun Cluster Data Service for SAP liveCache Guide for Solaris OS)
For general information about upgrading a resource type, see “Upgrading a Resource Type” in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Sun Cluster 3.1 4/04 (SPARC Platform Edition) supports the following data services:
HA Java System Application Server EE 7.0
HA SAP DB 7.4
HA Oracle 10g
Sun Cluster 3.1 4/04 (x86 Platform Edition) supports the following data services:
HA NFS (Solaris 9 12/03)
HA DNS (Solaris 9 12/03)
HA Samba 2.2.8a and 3.0
HA Java System Directory Server 5.2.1 Agent
HA Java System Web Server 6.1
HA Java System Application Server EE 7.0 U3
HA Java System Message Queue 3.5
HA DHCP
HA MySQL
Scalable Java System Web Server
This section describes the supported software and memory requirements for Sun Cluster 3.1 software.
Operating environment and patches – For the supported Solaris versions and required patches, see Patches and Required Firmware Levels.
Volume managers
On Solaris 8 – Solstice DiskSuite™ 4.2.1 and VERITAS Volume Manager 3.5.
On Solaris 9 – Solaris Volume Manager and VERITAS Volume Manager 3.5.
File systems
On Solaris 8 – Solaris UFS and VERITAS File System 3.4 and 3.5.
On Solaris 9 – Solaris UFS and VERITAS File System 3.5.
Data services (agents) – Contact your Sun sales representative for the complete list of supported data services and application versions. Specify the resource type names when you install the data services by using the scinstall(1M) utility. You should also specify the resource type names when you register the resource types associated with the data service using the scsetup(1M) utility.
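As a hedged illustration only, a resource type from the table that follows can also be registered from the command line with the scrgadm(1M) utility instead of interactively with scsetup; the resource type shown here is taken from that table:

# scrgadm -a -t SUNW.nfs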
Procedures for the version of Sun Cluster HA for Sun Java System Directory Server that uses Sun Java System Directory Server 5.0 and 5.1 (plus Netscape HTTP, versions 4.11, 4.12, 4.13, and 4.16) are located in the Sun Cluster 3.1 Data Service for Sun ONE Directory Server Guide. For later versions of Sun Java System Directory Server, previously known as Sun™ Open Net Environment (Sun ONE) Directory Server, see the Sun Java System Directory Server product documentation.
All occurrences of "Sun One" in the names and descriptions of the data services for the JES applications should be read as "Sun Java System." Example: "Sun Cluster Data Service for Sun One Application Server" should read "Sun Cluster Data Service for Sun Java System Application Server."
| Data Service | Sun Cluster Resource Type |
|---|---|
| Sun Cluster HA for Apache | SUNW.apache |
| Sun Cluster HA for Apache Tomcat | SUNW.sctomcat |
| Sun Cluster HA for BroadVision One-To-One Enterprise | SUNW.bv |
| Sun Cluster HA for DHCP | SUNW.gds |
| Sun Cluster HA for DNS | SUNW.dns |
| Sun Cluster HA for MySQL | SUNW.gds |
| Sun Cluster HA for NetBackup | SUNW.netbackup_master |
| Sun Cluster HA for NFS | SUNW.nfs |
| Sun Cluster HA for Oracle E-Business Suite | SUNW.gds |
| Sun Cluster HA for Oracle | SUNW.oracle_server, SUNW.oracle_listener |
| Sun Cluster Support for Oracle Parallel Server/Real Application Clusters | SUNW.rac_framework, SUNW.rac_udlm, SUNW.rac_cvm, SUNW.rac_hwraid |
| Sun Cluster HA for SAP | SUNW.sap_ci, SUNW.sap_ci_v2, SUNW.sap_as, SUNW.sap_as_v2 |
| Sun Cluster HA for SAP liveCache | SUNW.sap_livecache, SUNW.sap_xserver |
| Sun Cluster HA for SAP DB | SUNW.sapdb, SUNW.sap_xserver |
| Sun Cluster HA for SWIFTAlliance Access | SUNW.gds |
| Sun Cluster HA for Samba | SUNW.gds |
| Sun Cluster HA for Siebel | SUNW.sblgtwy, SUNW.sblsrvr |
| Sun Cluster HA for Sun Java System Application Server | SUNW.s1as |
| Sun Cluster HA for Sun Java System HADB | SUNW.hadb |
| Sun Cluster HA for Sun Java System Message Queue | SUNW.s1mq |
| Sun Cluster HA for Sun Java System Web Server (formerly known as Sun Cluster HA for Sun ONE Web Server) | SUNW.iws |
| Sun Cluster HA for Sybase ASE | SUNW.sybase |
| Sun Cluster HA for WebLogic Server | SUNW.wls |
| Sun Cluster HA for WebSphere MQ | SUNW.gds |
| Sun Cluster HA for WebSphere MQ Integrator | SUNW.gds |
Memory Requirements – Sun Cluster 3.1 software requires extra memory beyond what is configured for a node under a normal workload. The extra memory equals 128 Mbytes plus ten percent. For example, if a standalone node normally requires 1 Gbyte of memory, you need an extra 256 Mbytes to meet memory requirements.
RSMAPI – Sun Cluster 3.1 software supports the Remote Shared Memory Application Programming Interface (RSMAPI) on RSM-capable interconnects, such as PCI-SCI.
Sun Cluster Security Hardening uses the Solaris Operating Environment hardening techniques recommended by the Sun BluePrints™ program to achieve basic security hardening for clusters. The Solaris Security Toolkit automates the implementation of Sun Cluster Security Hardening.
The Sun Cluster Security Hardening documentation is available at http://www.sun.com/blueprints/0203/817-1079.pdf. You can also access the article from http://wwws.sun.com/software/security/blueprints. From this URL, scroll down to the Architecture heading to locate the article “Securing the Sun Cluster 3.x Software.” The documentation describes how to secure Sun Cluster 3.1 deployments in a Solaris 8 and Solaris 9 environment. The description includes the use of the Solaris Security Toolkit and other best-practice security techniques recommended by Sun security experts.
Table 1–2 Data Services Supported by Sun Cluster Security Hardening
| Data Service Agent | Application Version: Failover | Application Version: Scalable | Solaris Version |
|---|---|---|---|
| Sun Cluster HA for Apache | 1.3.9 | 1.3.9 | Solaris 8, Solaris 9 (version 1.3.9) |
| Sun Cluster HA for Apache Tomcat | 3.3, 4.0, 4.1 | 3.3, 4.0, 4.1 | Solaris 8, Solaris 9 |
| Sun Cluster HA for DHCP | S8U7+ | N/A | Solaris 8, Solaris 9 |
| Sun Cluster HA for DNS | with OS | N/A | Solaris 8, Solaris 9 |
| Sun Cluster HA for Sun Java System Messaging Server | 6.0 | 4.1 | Solaris 8 |
| Sun Cluster HA for MySQL | 3.23.54a - 4.0.15 | N/A | Solaris 8, Solaris 9 |
| Sun Cluster HA for NetBackup | 3.4 | N/A | Solaris 8 |
| Sun Cluster HA for NFS | with OS | N/A | Solaris 8, Solaris 9 |
| Sun Cluster HA for Oracle E-Business Suite | 11.5.8 | N/A | Solaris 8, Solaris 9 |
| Sun Cluster HA for Oracle | 8.1.7 and 9i (32 and 64 bit) | N/A | Solaris 8, Solaris 9 (HA Oracle 9iR2) |
| Sun Cluster Support for Oracle Parallel Server/Real Application Clusters | 8.1.7 and 9i (32 and 64 bit) | N/A | Solaris 8, Solaris 9 |
| Sun Cluster HA for SAP | 4.6D (32 and 64 bit) and 6.20 | 4.6D (32 and 64 bit) and 6.20 | Solaris 8, Solaris 9 |
| Sun Cluster HA for SWIFTAlliance Access | 4.1, 5.0 | N/A | Solaris 8 |
| Sun Cluster HA for Samba | 2.2.2, 2.2.7, 2.2.7a, 2.2.8, 2.2.8a | N/A | Solaris 8, Solaris 9 |
| Sun Cluster HA for Siebel | 7.5 | N/A | Solaris 8 |
| Sun Cluster HA for Sun Java System Application Server | 7.0, 7.0 update 1 | N/A | Solaris 8, Solaris 9 |
| Sun Cluster HA for Sun Java System Directory Server | 4.12 | N/A | Solaris 8, Solaris 9 (version 5.1) |
| Sun Cluster HA for Sun Java System Message Queue | 3.0.1 | N/A | Solaris 8, Solaris 9 |
| Sun Cluster HA for Sun Java System Web Server | 6.0 | 4.1 | Solaris 8, Solaris 9 (version 4.1) |
| Sun Cluster HA for Sybase ASE | 12.0 (32 bit) | N/A | Solaris 8 |
| Sun Cluster HA for BEA WebLogic Server | 7.0 | N/A | Solaris 8, Solaris 9 |
| Sun Cluster HA for WebSphere MQ | 5.2, 5.3 | N/A | Solaris 8, Solaris 9 |
| Sun Cluster HA for WebSphere MQ Integrator | 2.0.2, 2.1 | N/A | Solaris 8, Solaris 9 |
The following restrictions apply to the Sun Cluster 3.1 release:
For other known problems or restrictions, see Known Issues and Bugs.
Multihost tape, CD-ROM, and DVD-ROM are not supported.
Alternate Pathing (AP) is not supported.
Storage devices with more than a single path from a given cluster node to the enclosure are not supported except on the following storage devices:
Sun StorEdge™ A3500, for which two paths are supported to each of two nodes
Any device that supports Sun StorEdge Traffic Manager
EMC storage devices that use EMC PowerPath software
If you are using a Sun Enterprise™ 420R server with a PCI card in slot J4701, the motherboard must be at dash-level 15 or higher (501-5168-15 or higher). To find the motherboard part number and revision level, look at the edge of the board closest to PCI slot 1.
System panics have been observed in clusters when UDWIS I/O cards are used in slot 0 of a board in a Sun Enterprise 10000 server; do not install UDWIS I/O cards in slot 0 of a board in this server.
When you increase or decrease the number of node attachments to a quorum device, the quorum vote count is not automatically recalculated. You can reestablish the correct quorum vote if you remove all quorum devices and then add them back into the configuration.
SunVTS™ is not supported.
IPv6 is not supported.
Remote Shared Memory (RSM) transport types are mentioned in the documentation, but are not supported. If you use the RSMAPI, specify dlpi as the transport type.
The SBus Scalable Coherent Interface (SCI) is not supported as a cluster interconnect. However, the PCI-SCI interface is supported.
Logical network interfaces are reserved for use by Sun Cluster software.
Client applications that run on cluster nodes should not map to logical IP addresses of an HA data service. During failover, these logical IP addresses might go away, leaving the client without a connection.
If you are upgrading from VERITAS Volume Manager (VxVM) 3.2 to 3.5, the Cluster Volume Manager (CVM) feature will not be available until you install the CVM license key for version 3.5. In VxVM 3.5, the version 3.2 CVM license key does not enable CVM and must be upgraded to the version 3.5 CVM license key.
In Solstice DiskSuite/Solaris Volume Manager configurations that use mediators, the number of mediator hosts configured for a diskset must be exactly two.
DiskSuite Tool (Solstice DiskSuite metatool) and the Enhanced Storage module of Solaris Management Console (Solaris Volume Manager) are not compatible with Sun Cluster 3.1 software.
With VxVM 3.2 or later, Dynamic Multipathing (DMP) cannot be disabled with the scvxinstall command during VxVM installation. This procedure is described in the chapter “Installing and Configuring VERITAS Volume Manager” in Sun Cluster Software Installation Guide for Solaris OS. The use of VERITAS Dynamic Multipathing is supported in the following configurations:
A single I/O path per node to the cluster's shared storage.
A supported multipathing solution (Sun StorEdge Traffic Manager, EMC PowerPath, Hitachi HDLM) that manages multiple I/O paths per node to the shared cluster storage.
Simple root disk groups (rootdg created on a single slice of the root disk) are not supported as disk types with VxVM on Sun Cluster 3.1 software.
Software RAID 5 is not supported.
Quotas are not supported on cluster file systems.
Sun Cluster 3.1 software does not support the use of the loopback file system (LOFS) on cluster nodes.
The command umount -f behaves in the same manner as the umount command without the -f option. It does not support forced unmounts.
The command unlink(1M) is not supported on non-empty directories.
The command lockfs -d is not supported. Use lockfs -n as a workaround.
The cluster file system does not support any of the file-system features of Solaris software by which one would put a communication end-point in the file-system name space. Therefore, although you can create a UNIX domain socket whose name is a path name into the cluster file system, the socket would not survive a node failover. In addition, any fifos or named pipes you create on a cluster file system would not be globally accessible, nor should you attempt to use fattach from any node other than the local node.
Executing binaries from cluster file systems that are mounted with the forcedirectio mount option is not supported.
You cannot remount a cluster file system with the directio mount option added at remount time.
You cannot set the directio mount option on a single file by using the directio ioctl.
The following VxFS features are not supported in a Sun Cluster 3.1 configuration.
Quick I/O
Snapshots
Storage checkpoints
Cache advisories (these can be used, but the effect will be observed on the given node only)
VERITAS CFS (requires VERITAS cluster feature and VCS)
All other VxFS features and options that are supported in a cluster configuration are supported by Sun Cluster 3.1 software. See VxFS documentation and man pages for details about VxFS options that are or are not supported in a cluster configuration.
The following VxFS-specific mount options are not supported in a Sun Cluster 3.1 configuration.
convosync (Convert O_SYNC)
mincache
qlog, delaylog, tmplog
For information about administering VxFS cluster file systems in a Sun Cluster configuration, see “Administering Cluster File Systems” in Sun Cluster System Administration Guide for Solaris OS.
This section identifies restrictions on using IP Network Multipathing that apply only in a Sun Cluster 3.1 environment, or that differ from the information provided in the Solaris documentation for IP Network Multipathing.
IPv6 is not supported.
All public network adapters must be in IP Network Multipathing groups.
In the /etc/default/mpathd file, do not change TRACK_INTERFACES_ONLY_WITH_GROUPS from yes to no; a quick check is sketched below.
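As a quick, hedged check that the setting has not been changed (assuming the default file location):

# grep TRACK_INTERFACES_ONLY_WITH_GROUPS /etc/default/mpathd

The returned line should read TRACK_INTERFACES_ONLY_WITH_GROUPS=yes.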
Most procedures, guidelines, and restrictions that are identified in the Solaris documentation for IP Network Multipathing are the same in a cluster or a noncluster environment. Therefore, see the appropriate Solaris document for additional information about IP Network Multipathing restrictions.
| Operating Environment Release | For Instructions, Go To... |
|---|---|
| Solaris 8 operating environment | IP Network Multipathing Administration Guide |
| Solaris 9 operating environment | “IP Network Multipathing Topics” in System Administration Guide: IP Services |
Do not configure cluster nodes as routers (gateways). If the system goes down, the clients cannot find an alternate router and cannot recover.
Do not configure cluster nodes as NIS or NIS+ servers. However, cluster nodes can be NIS or NIS+ clients.
Do not use a Sun Cluster configuration to provide a highly available boot or installation service on client systems.
Do not use a Sun Cluster configuration to provide an rarpd service.
If you install an RPC service on the cluster, the service must not use the following program numbers: 100141, 100142, and 100248. These numbers are reserved for the Sun Cluster daemons rgmd_receptionist, fed, and pmfd, respectively. If the RPC service you install also uses one of these program numbers, you must change that RPC service to use a different program number.
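As a hedged sanity check, not taken from the original documentation, you can list the RPC services currently registered on a node and confirm that none of the reserved program numbers are claimed by a non-cluster service:

# rpcinfo -p | egrep '100141|100142|100248'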
Currently, Sun StorEdge Network Data Replicator (SNDR) can only be used with HAStorage. This restriction applies only to the lightweight resource group that includes the logical host that SNDR uses for replication. Application resource groups can still use HAStoragePlus with SNDR. You can use a failover file system with HAStoragePlus and SNDR by using HAStorage for the SNDR resource group and HAStoragePlus for the application resource group, where the HAStorage and HAStoragePlus resources point to the same underlying DCS device. A patch is being developed to enable SNDR to work with HAStoragePlus.
Running high-priority process scheduling classes on cluster nodes is not supported. Processes that run in the time-sharing scheduling class with a high priority, or processes that run in the real-time scheduling class should not be run on cluster nodes. Sun Cluster software relies on kernel threads that do not run in the real-time scheduling class. Other time-sharing processes that run at higher-than-normal priority or real-time processes can prevent the Sun Cluster kernel threads from acquiring needed CPU cycles.
Sun Cluster 3.1 software can only provide service for those data services that are either supplied with the Sun Cluster product or set up with the Sun Cluster data services API.
Sun Cluster software currently does not have an HA data service for the sendmail(1M) subsystem. The sendmail subsystem can run on individual cluster nodes, but its functionality, including mail delivery, routing, queuing, and retry, is not highly available.
If you are using Sun Cluster HA for Oracle with Oracle 10g, do not install the Oracle binary files on a highly available local file system. Sun Cluster HA for Oracle does not support such a configuration. However, you may install data files, log files, and configuration files on a highly available file system.
If you have installed Oracle 10g binary files on the cluster file system, error messages for the Oracle cssd daemon might appear on the system console during the booting of a node. When the cluster file system is mounted, these messages no longer appear.
These error messages are as follows:
INIT: Command is respawning too rapidly. Check for possible errors.
id: h1 "/etc/init.d/init.cssd run >/dev/null 2>&1 >/dev/null"
Sun Cluster HA for Oracle does not require the Oracle cssd daemon. Therefore, you may ignore these error messages.
The Sun Cluster HA for Oracle 3.0 data service can run on Sun Cluster 3.1 software only when used with the following versions of the Solaris operating environment:
Solaris 8, 32-bit version
Solaris 8, 64-bit version
Solaris 9, 32-bit version
The Sun Cluster HA for Oracle 3.0 data service cannot run on Sun Cluster 3.1 software when used with the 64-bit version of Solaris 9.
Adhere to the documentation for the Oracle Parallel Fail Safe/Real Application Clusters Guard option of Oracle Parallel Server/Real Application Clusters, because you cannot change hostnames after you install Sun Cluster software.
For more information on this restriction on hostnames and node names, see the Oracle Parallel Fail Safe/Real Application Clusters Guard documentation.
If the VERITAS NetBackup client is a cluster, only one logical host can be configured as the client because there is only one bp.conf file.
If the NetBackup client is a cluster and if one of the logical hosts on the cluster is configured as the NetBackup client, NetBackup cannot back up the physical hosts.
On the cluster running the master server, the master server is the only logical host that can be backed up.
Backup media cannot be attached to the master server, so one or more media servers are required.
In a Sun Cluster environment, robotic control is only supported on media servers and not on the NetBackup master server running on Sun Cluster.
No Sun Cluster node may be an NFS client of a Sun Cluster HA for NFS-exported file system being mastered on a node in the same cluster. Such cross-mounting of Sun Cluster HA for NFS is prohibited. Use the cluster file system to share files among cluster nodes.
Applications running locally on the cluster must not lock files on a file system that is exported through NFS. Otherwise, local locking (for example, flock(3UCB) or fcntl(2)) might interfere with the ability to restart the lock manager (lockd). During restart, a blocked local process might be granted a lock that is intended to be reclaimed by a remote client, which would cause unpredictable behavior.
Sun Cluster HA for NFS requires that all NFS client mounts be “hard” mounts.
Sun Cluster 3.1 software does not support Secure NFS or the use of Kerberos with NFS. In particular, the secure and kerberos options to the share_nfs(1M) command are not supported. However, Sun Cluster 3.1 software does support the use of secure ports for NFS by adding the entry set nfssrv:nfs_portmon=1 to the /etc/system file on cluster nodes.
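A minimal sketch of that /etc/system change; run it as root on each cluster node and reboot for the setting to take effect:

# echo "set nfssrv:nfs_portmon=1" >> /etc/system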
Do not use NIS for naming services in a cluster running Sun Cluster HA for SAP liveCache because the NIS entry is only used if files are not available.
For more procedural information about the nsswitch.conf password requirements related to this restriction, see “Preparing the Nodes and Disks” in Sun Cluster Data Service for SAP liveCache Guide for Solaris OS.
The following known issues and bugs affect the operation of the Sun Cluster 3.1 release.
Identify requirements for all data services before you begin Solaris and Sun Cluster installation. If you do not determine these requirements, you might perform the installation process incorrectly and thereby need to completely reinstall the Solaris and Sun Cluster software.
For example, the Oracle Parallel Fail Safe/Real Application Clusters Guard option of Oracle Parallel Server/Real Application Clusters has special requirements for the hostnames/node names that you use in the cluster. You must accommodate these requirements before you install Sun Cluster software because you cannot change hostnames after you install Sun Cluster software. For more information on the special requirements for the hostnames/node names, see the Oracle Parallel Fail Safe/Real Application Clusters Guard documentation.
Problem Summary: Sometimes, private interconnect transport paths ending at a qfe adapter fail to come online.
Workaround: Follow the steps shown below (a command-level sketch appears after the list):
Using scstat -W, identify the adapter that is at fault. The output will show all transport paths with that adapter as one of the path endpoints in the faulted or the waiting states.
Use scsetup to remove from the cluster configuration all the cables connected to that adapter.
Use scsetup again to remove that adapter from the cluster configuration.
Add back the adapter and the cables.
Verify whether the paths appear. If the problem persists, repeat steps 1–5 a few times.
Verify whether the paths appear. If the problem still persists, reboot the node that has the at-fault adapter. Before you reboot the node, make sure that the remaining cluster has enough quorum votes to survive the reboot.
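The following is a hedged, command-level sketch of the cable and adapter removal steps above. The node name phys-node-1 and adapter name qfe1 are hypothetical; scsetup performs the same operations interactively.

# scstat -W
# scconf -r -m endpoint=phys-node-1:qfe1
# scconf -r -A name=qfe1,node=phys-node-1

The adapter and its cables can then be added back (for example, with scsetup), and scstat -W run again to verify that the paths come online.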
Problem Summary: The remove script fails to unregister the SUNW.gds resource type and displays the following message:
Resource type has been un-registered already.
Workaround: After using the remove script, manually unregister SUNW.gds. Alternatively, use the scsetup utility or SunPlex Manager.
Problem Summary: Clusters using ce adapters on the private interconnect may notice path timeouts and subsequent node panics if one or more cluster nodes have more than four processors.
Workaround: Set the ce_taskq_disable parameter in the ce driver by adding set ce:ce_taskq_disable=1 to the /etc/system file on all cluster nodes and then rebooting the nodes. This setting ensures that heartbeats (and other packets) are always delivered in the interrupt context, which eliminates the path timeouts and the subsequent node panics. Observe quorum considerations while rebooting cluster nodes.
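A minimal sketch of that /etc/system change, to be run as root on each cluster node before the reboot:

# echo "set ce:ce_taskq_disable=1" >> /etc/system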
Problem Summary: If a device group switchover is in progress when a node joins the cluster, the joining node and the switchover operation may hang. Any attempts to access any device service will also hang. This is more likely to happen on a cluster with more than two nodes and if the file system mounted on the device is a VxFS file system.
Workaround: To avoid this situation, do not initiate device group switchovers while a node is joining the cluster. If this situation occurs, then all the cluster nodes must be rebooted to restore access to device groups.
Problem Summary: SunPlex Manager includes a data service installation wizard that sets up a highly available DNS service on the cluster. If the user does not supply an existing DNS configuration, such as a named.conf file, the wizard attempts to generate a valid DNS configuration by autodetecting the existing network and name service configuration. This autodetection fails in some network environments, which causes the wizard to fail without issuing an error message.
Workaround: When prompted, supply the SunPlex Manager DNS data service install wizard with an existing, valid named.conf file. Otherwise, follow the documented DNS data service procedures to manually configure highly available DNS on the cluster.
Problem Summary: SunPlex Manager includes a data service installation wizard which sets up a highly available Oracle service on the cluster by installing and configuring the Oracle binaries as well as creating the cluster configuration. However, this installation wizard is currently not working, and results in a variety of errors based on the users' software configuration.
Workaround: Manually install and configure the Oracle data service on the cluster, using the procedures provided in the Sun Cluster documentation.
Problem Summary: If SunPlex Manager is used to remove an adapter from a multi-adapter IPMP group, it may not always be possible to immediately add the adapter back to the same group again.
Workaround: Remove the /etc/hostname.adapter file for that adapter before attempting to add the adapter back to the same IPMP group.
Problem Summary: Due to an internal error, most Sun-supplied cluster agents are writing messages to the system log (see syslog(3C)) using the LOG_USER facility instead of using LOG_DAEMON. On a cluster that is configured with the default syslog settings (see syslog.conf(4)), messages with a severity of LOG_WARNING or LOG_NOTICE, which would ordinarily be written to the system log, are not being output.
Workaround: Add the following line near the front of the /etc/syslog.conf file on all cluster nodes:
user.warning    /var/adm/messages
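Note that syslog.conf(4) expects a tab character, not spaces, between the selector and the action field. After editing the file, a hedged alternative to rebooting is to make syslogd reread its configuration:

# pkill -HUP syslogd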
Problem Summary: The requirements for the nsswitch.conf file in “Preparing the Nodes and Disks” in Sun Cluster Data Service for SAP liveCache Guide for Solaris OS do not apply to the entry for the passwd database. If these requirements are met, the su command might hang on each node that can master the liveCache resource when the public network is down.
Workaround: On each node that can master the liveCache resource, ensure that the entry in the /etc/nsswitch.conf file for the passwd database is as follows:
passwd: files nis [TRYAGAIN=0]
Problem Summary: The SunPlex Manager data service installation wizards for Apache and Oracle do not support Solaris 9 and above.
Workaround: Manually install and configure Oracle on the cluster by using the Sun Cluster documentation. If you are installing Apache on Solaris 9 (or later), manually add the Solaris Apache packages SUNWapchr and SUNWapchu before running the installation wizard.
Problem Summary: Improper timing of cluster-node reboots during rootdisk encapsulation can cause node panics.
Workaround: Run scvxinstall on one node at a time, waiting until one node has completed all of its reboots before starting scvxinstall on another node.
Problem Summary: When running SunPlex Agent Builder in a non-English locale, the default window size is too small and some controls may not appear in the window. This problem has been observed in the German and Spanish locales.
Workaround: Manually resize the SunPlex Agent Builder window as needed.
Problem Summary: sccheck may hang if launched simultaneously from multiple nodes.
Workaround: Do not launch sccheck from any multi-console which passes commands to multiple nodes. sccheck runs may overlap but should not be launched simultaneously.
Problem Summary: scinstall -r does not remove locale-specific data service packages.
Workaround: Once the node comes up, run pkginfo | grep -i cluster to make sure all data service packages have been removed. To remove the listed packages, run pkgrm on each package.
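A hedged illustration of the cleanup, using the DHCP data service package SUNWscdhc from the package table later in these notes as an example:

# pkginfo | grep -i cluster
# pkgrm SUNWscdhc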
Problem Summary: Certain SunPlex Agent Builder messages in the Traditional Chinese locale are displayed in Simplified Chinese.
Workaround: Run SunPlex Agent Builder in the zh_TW locale to correctly display the messages in Traditional Chinese.
Problem Summary: When hadbm is invoked from the HADB agent, it takes the java binaries from /usr/bin. The HADB agent fails to work properly because the java binaries in /usr/bin need to be linked to an appropriate version of Java 1.4 (or above).
Workaround: Set the JAVA_HOME environment variable to the path of an appropriate Java 1.4 (or above) installation in the script /opt/SUNWappserver7/SUNWhadb/4/bin/hadbm; a sketch follows.
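A minimal sketch of such an assignment near the top of the hadbm script; the path /usr/j2se is an assumption and must point to a Java 1.4 (or later) installation on your system:

JAVA_HOME=/usr/j2se
export JAVA_HOME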
Problem Summary: If scsetup is used in an attempt to add the first adapter to a single-node cluster, the following error message results: Unable to determine transport type.
Workaround: Configure at least the first adapter manually:
# scconf -a -A trtype=type,name=adaptername,node=nodename
After the first adapter is configured, further use of scsetup to configure the interconnects works as expected.
Problem Summary: The data services for the following applications cannot be upgraded by using the scinstall utility:
Apache Tomcat
DHCP
MySQL
Oracle E-Business Suite
Samba
SWIFTAlliance Access
WebLogic Server
WebSphere MQ
WebSphere MQ Integrator
Workaround: If you plan to upgrade a data service for an application in the preceding list, replace the step for upgrading data services in “Upgrading to Sun Cluster 3.1 4/04 Software (Rolling)” in Sun Cluster Software Installation Guide for Solaris OS with the steps that follow. Perform these steps for each node where the data service is installed.
Remove the software package for the data service that you are upgrading.
# pkgrm pkg-inst
pkg-inst specifies the software package name for the data service that you are upgrading as listed in the following table.
| Application | Data Service Software Package |
|---|---|
| Apache Tomcat | SUNWsctomcat |
| DHCP | SUNWscdhc |
| MySQL | SUNWscmys |
| Oracle E-Business Suite | SUNWscebs |
| Samba | SUNWscsmb |
| SWIFTAlliance Access | SUNWscsaa |
| WebLogic Server (English locale) | SUNWscwls |
| WebLogic Server (French locale) | SUNWfscwls |
| WebLogic Server (Japanese locale) | SUNWjscwls |
| WebSphere MQ | SUNWscmqs |
| WebSphere MQ Integrator | SUNWscmqi |
Install the software package for the version of the data service to which you are upgrading.
To install the software package, follow the instructions in the Sun Cluster documentation for the data service that you are upgrading. This documentation is available at http://docs.sun.com.
Problem Summary: The Sun Cluster HA for Oracle data service uses the super user command, su(1M), to start and stop the database. If you are running Solaris 8 or Solaris 9, the network service might become unavailable when a cluster node's public network fails.
Workaround: Include the following entries in the /etc/nsswitch.conf configuration files on each node that can be the primary for oracle_server or oracle_listener resource:
passwd: files
group: files
publickey: files
project: files
These entries ensure that the su command does not refer to the NIS/NIS+ name services, so that the data service starts and stops correctly during a network failure.
Problem Summary: The Sun Cluster HA for SAP liveCache data service uses the dbmcli command to start and stop liveCache. If you are running Solaris 9, the network service might become unavailable when a cluster node's public network fails.
Workaround: Include one of the following entries for the publickey database in the /etc/nsswitch.conf configuration files on each node that can be the primary for liveCache resources:
publickey:
publickey: files
publickey: files [NOTFOUND=return] nis
publickey: files [NOTFOUND=return] nisplus
Adding one of the above entries, in addition to the updates documented in Sun Cluster Data Service for SAP liveCache Guide for Solaris OS, ensures that the su command and the dbmcli command do not refer to the NIS/NIS+ name services. Bypassing the NIS/NIS+ name services ensures that the data service starts and stops correctly during a network failure.
Problem Summary: Sun Cluster HA for Siebel does not monitor individual Siebel components. If the failure of a Siebel component is detected, only a warning message is logged in syslog.
Workaround: Restart the Siebel server resource group in which components are offline by using the command scswitch -R -h node -g resource_group.
This section provides information about patches for Sun Cluster configurations.
You must be a registered SunSolve™ user to view and download the required patches for the Sun Cluster product. If you do not have a SunSolve account, contact your Sun service representative or sales engineer, or register online at http://sunsolve.sun.com.
PatchPro is a patch-management tool designed to ease the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides a Sun Cluster-specific Interactive Mode tool to make the installation of patches easier and an Expert Mode tool to maintain your configuration with the latest set of patches. Expert Mode is especially useful for those who want to get all of the latest patches, not just the high availability and security patches.
To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click on “Sun Cluster,” then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.
The SunSolve™ Online Web site provides 24-hour access to the most up-to-date information regarding patches, software, and firmware for Sun products. Access the SunSolve Online site at http://sunsolve.sun.com for the most current matrixes of supported software, firmware, and patch revisions.
You can find Sun Cluster 3.1 patch information by using the Info Docs. To view the Info Docs, log on to SunSolve and access the Simple search selection from the top of the main page. From the Simple Search page, click on the Info Docs box and type Sun Cluster 3.1 in the search criteria box. This will bring up the Info Doc page for Sun Cluster 3.1 software.
Before you install Sun Cluster 3.1 software and apply patches to a cluster component (Solaris operating environment, Sun Cluster software, volume manager or data services software, or disk hardware), review the Info Docs and any README files that accompany the patches. All cluster nodes must have the same patch level for proper cluster operation.
For specific patch procedures and tips on administering patches, see “Patching Sun Cluster Software and Firmware” in Sun Cluster System Administration Guide for Solaris OS.
The Sun Cluster 3.1 user documentation set consists of the following collections:
Sun Cluster 3.1 4/04 Release Notes Collection for Solaris OS
Sun Cluster 3.1 4/04 Software Collection for Solaris OS (SPARC Platform Edition)
Sun Cluster 3.1 4/04 Software Collection for Solaris OS (x86 Platform Edition)
Sun Cluster 3.1 4/04 Reference Collection for Solaris OS
Sun Cluster 3.x Hardware Collection for Solaris OS (SPARC Platform Edition)
Sun Cluster 3.x Hardware Collection for Solaris OS (x86 Platform Edition)
The Sun Cluster 3.1 user documentation is available in PDF and HTML format on the Sun Java Enterprise System 2004Q2 2 of 2 CD-ROM. See the index.html file at the top level of the CD-ROM for more information. This index.html file enables you to read the PDF and HTML manuals directly from the CD-ROM and to access instructions to install the documentation packages.
The SUNWsdocs package must be installed before you install any Sun Cluster documentation packages. You can use pkgadd to install the SUNWsdocs package. The SUNWsdocs package is located in the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory of the Sun Cluster 3.1 4/04 CD-ROM, where arch is sparc or x86, and ver is either 8 for Solaris 8 or 9 for Solaris 9. The SUNWsdocs package is also automatically installed when you run the installer program from the Solaris 9 Documentation CD-ROM.
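A hedged example of installing the package for the SPARC platform on Solaris 9; the /cdrom/suncluster_3_1 mount point is an assumption, and the arch and ver directory names should match your system:

# cd /cdrom/suncluster_3_1/Solaris_sparc/Product/sun_cluster/Solaris_9/Packages
# pkgadd -d . SUNWsdocs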
In addition, the docs.sun.com Web site enables you to access Sun Cluster documentation on the Web. You can browse the docs.sun.com archive or search for a specific book title or subject at http://docs.sun.com.
| Part Number | Book Title |
|---|---|
| 817–4226 | |
| 817–3892 | |
| 817–4229 | |
| 817–4230 | |
| 817–4227 | |
| 817–4228 | |
| 817–4231 | |
| 817–4638 | Sun Cluster Data Services Planning and Administration Guide for Solaris OS |
| 817–4575 | Sun Cluster Data Service for Apache Tomcat Guide for Solaris OS |
| 817–4582 | |
| 817–4645 | Sun Cluster Data Service for Domain Name Service (DNS) Guide for Solaris OS |
| 817–4574 | |
| 817–4646 | Sun Cluster Data Service for Network File System (NFS) Guide for Solaris OS |
| 817–4581 | |
| 817–3920 | Sun Cluster Data Service for Sun Java System Application Server Guide for Solaris OS |
| 817–4643 | Sun Cluster Data Service for Sun Java System Message Queue Guide for Solaris OS |
| 817–4641 | Sun Cluster Data Service for Sun Java System Web Server Guide for Solaris OS |
| Part Number | Book Title |
|---|---|
| 817–0168 | Sun Cluster 3.x Hardware Administration Manual for Solaris OS |
| 817–0180 | Sun Cluster 3.x With Sun StorEdge 3310 Array Manual for Solaris OS |
This section discusses known errors or omissions for documentation, online help, or man pages and steps to correct these problems.
All occurrences of "Sun One" in the names and descriptions of the data services for the JES applications should be read as "Sun Java System." Example: "Sun Cluster Data Service for Sun One Application Server" should read "Sun Cluster Data Service for Sun Java System Application Server."
This section discusses errors and omissions from the Sun Cluster Software Installation Guide for Solaris OS.
The procedure “How to Configure Sun Cluster Software on All Nodes (scinstall)” does not include instructions to install the Sun Cluster software packages that support the RSMAPI and SCI-PCI adapters. The installer utility does not automatically install these packages.
Follow these steps to install these additional packages from the Sun Cluster 3.1 CD-ROM. Install these packages before you install Sun Cluster framework software.
Determine which packages you must install.
The following table lists the Sun Cluster 3.1 packages that each feature requires and the order in which you must install each group of packages.
| Feature | Additional Sun Cluster 3.1 Packages to Install |
|---|---|
| RSMAPI | SUNWscrif |
| SCI-PCI adapters | SUNWsci SUNWscid SUNWscidx |
Use the following commands to install the additional packages.
Replace arch with sparc or x86 and replace ver with 8 (for Solaris 8) or 9 (for Solaris 9).
# cd /cdrom/suncluster_3_1/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages
# pkgadd -d . packages
This section discusses errors and omissions in SunPlex Manager online help.
In the online help file that is titled “Sun Cluster HA for Oracle,” in the section titled “Before Starting,” a note is incorrect.
Incorrect:
If no entries exist for shmsys and semsys in /etc/system, default values for these variables are automatically inserted in /etc/system. The system must then be rebooted. Check Oracle installation documentation to verify that these values are correct for your database.
Correct:
If no entries exist for the shmsys and semsys variables in the /etc/system file when you install the Oracle data service, you can open /etc/system and insert default values for these variables. You must then reboot the system. Check Oracle installation documentation to verify that the values that you insert are correct for your database.
In the table under "Sun Cluster RBAC Rights Profiles," the authorizations solaris.cluster.appinstall and solaris.cluster.install should be listed under the Cluster Management profile rather than the Cluster Operation profile.
In the table under “Sun Cluster RBAC Rights Profiles,” under the profile Sun Cluster Commands, sccheck(1M) should also be included in the list of commands.
This section discusses errors and omissions from the Sun Cluster Concepts Guide for Solaris OS.
In Chapter 3, the section “Using the Cluster Interconnect for Data Service Traffic” should read as follows:
A cluster must have multiple network connections between nodes, forming the cluster interconnect. The clustering software uses multiple interconnects both for high availability and to improve performance. For both internal and external traffic (for example, file system data or scalable services data), messages are striped across all available interconnects.
The cluster interconnect is also available to applications, for highly available communication between nodes. For example, a distributed application might have components running on different nodes that need to communicate. By using the cluster interconnect rather than the public transport, these connections can withstand the failure of an individual link.
To use the cluster interconnect for communication between nodes, an application must use the private hostnames configured when the cluster was installed. For example, if the private hostname for node 1 is clusternode1-priv, use that name to communicate over the cluster interconnect to node 1. TCP sockets opened using this name are routed over the cluster interconnect and can be transparently re-routed in the event of network failure. Application communication between any two nodes is striped over all interconnects. The traffic for a given TCP connection flows on one interconnect at any point. Different TCP connections are striped across all interconnects. Additionally, UDP traffic is always striped across all interconnects.
Note that because the private hostnames can be configured during installation, the cluster interconnect can use any name chosen at that time. The actual name can be obtained from scha_cluster_get(3HA) with the scha_privatelink_hostname_node argument.
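As a hedged illustration, the same name can also be retrieved from the command line with the scha_cluster_get(1HA) utility; the node name node1 here is hypothetical:

% scha_cluster_get -O PRIVATELINK_HOSTNAME_NODE node1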
This section discusses errors and omissions from the Sun Cluster System Administration Guide for Solaris OS.
Simple root disk groups are not supported as disk types with VERITAS Volume Manager on Sun Cluster software. As a result, if you perform the procedure “How to Restore a Non-Encapsulated root (/) File System (VERITAS Volume Manager)” in the Sun Cluster System Administration Guide for Solaris OS, you should ignore Step 9, which asks you to determine if the root disk group (rootdg) is on a single slice on the root disk. You would complete Step 1 through Step 8, skip Step 9, and proceed with Step 10 to the end of the procedure.
When increasing or decreasing the number of node attachments to a quorum device, the quorum vote count is not automatically recalculated. You can re-establish the correct quorum vote if you remove all quorum devices and then add them back into the configuration.
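As a hedged illustration of that remove-and-re-add sequence with scconf; the device name d12 is hypothetical, and quorum requirements (for example, keeping enough votes on a two-node cluster) must be considered before removing devices:

# scconf -r -q globaldev=d12
# scconf -a -q globaldev=d12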
This section discusses errors and omissions from the Data Service Guides.
In the Sun Cluster Data Service for Sun Java System Application Server Guide for Solaris OS, the example given for the asadmin command is incorrect and should be ignored. Step 15 of the procedure “How to Install and Configure the Sun Java System Application Server” should read as follows:
Change the location of the accesslog parameter to reflect the directory that you created in Step 11. To change this parameter, use the asadmin utility. See Sun Java System Application Server 7 Administration Guide for instructions.
This section discusses errors and omissions from the Sun Cluster man pages.
The scconf_transp_adap_sci(1M) man page states that SCI transport adapters can be used with the rsm transport type. This support statement is incorrect. SCI transport adapters do not support the rsm transport type. SCI transport adapters support the dlpi transport type only.
The following sentence clarifies the name of an SCI–PCI adapter. This information is not currently included in the scconf_transp_adap_sci(1M) man page.
New Information:
Use the name sciN to specify an SCI adapter.
The following paragraph clarifies the behavior of the scgdevs command. This information is not currently included in the scgdevs(1M) man page.
New Information:
scgdevs(1M) called from the local node performs its work on remote nodes asynchronously. Therefore, command completion on the local node does not necessarily mean that the command has completed its work cluster-wide.
In this release, the current API_version has been incremented to 3 from its previous value of 2. If you are developing a new Sun Cluster agent and wish to prevent your new resource type from being registered on an earlier version of Sun Cluster software, declare API_version=3 in your agent's RTR file. For more information, see rt_properties(5).
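A minimal, hypothetical RTR file fragment showing the declaration; the comment line is illustrative only:

# Fragment of a resource type registration (RTR) file
API_VERSION = 3;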
To display Sun Cluster 3.0 data service man pages, install the latest patches for the Sun Cluster 3.0 data services that you installed on Sun Cluster 3.1 software. See Patches and Required Firmware Levels for more information.
After you have applied the patch, access the Sun Cluster 3.0 data service man pages by issuing the man -M command with the full man page path as the argument. The following example opens the Apache man page.
% man -M /opt/SUNWscapc/man SUNW.apache
Consider modifying your MANPATH to enable access to Sun Cluster 3.0 data service man pages without specifying the full path. The following example describes command input for adding the Apache man page path to your MANPATH and displaying the Apache man page.
% MANPATH=/opt/SUNWscapc/man:$MANPATH; export MANPATH
% man SUNW.apache
The tunability of the Restart_if_Parent_Terminated extension property is any time, and not as incorrectly stated in the SUNW.sapdb(5) man page.
There is an error in the See Also section of this man page. Instead of referencing the Sun Cluster 3.1 Data Services Installation and Configuration Guide, you should reference the Sun Cluster Data Service for WebLogic Server Guide for Solaris OS.