The following restrictions apply to the Sun Cluster 3.1 release:
For other known problems or restrictions, see Known Issues and Bugs.
Multihost tape, CD-ROM, and DVD-ROM devices are not supported.
Alternate Pathing (AP) is not supported.
Storage devices with more than a single path from a given cluster node to the enclosure are not supported, except for the following storage devices:
Sun StorEdge A3500, for which two paths are supported to each of two nodes
Any device that supports Sun StorEdge Traffic Manager
EMC storage devices that use EMC PowerPath software
If you are using a Sun Enterprise 420R server with a PCI card in slot J4701, the motherboard must be at dash-level 15 or higher (501-5168-15 or higher). To find the motherboard part number and revision level, look at the edge of the board closest to PCI slot 1.
System panics have been observed in clusters when UDWIS I/O cards are used in slot 0 of a board in a Sun Enterprise 10000 server; do not install UDWIS I/O cards in slot 0 of a board in this server.
When you increase or decrease the number of node attachments to a quorum device, the quorum vote count is not automatically recalculated. To reestablish the correct quorum votes, remove all quorum devices and then add them back into the configuration.
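For example, the removal and re-addition can be done with the scconf command; the device name d20 below is hypothetical, so substitute your own quorum device names as reported by scstat -q.

    # List the current quorum devices and vote counts.
    scstat -q
    # Remove the quorum device, then add it back so that the
    # vote count is recalculated (d20 is an example device name).
    scconf -r -q globaldev=d20
    scconf -a -q globaldev=d20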
SunVTS is not supported.
IPv6 is not supported.
Remote Shared Memory (RSM) transport types are mentioned in the documentation, but are not supported. If you use the RSMAPI, specify dlpi as the transport type.
The SBus Scalable Coherent Interface (SCI) is not supported as a cluster interconnect. However, the PCI-SCI interface is supported.
Logical network interfaces are reserved for use by Sun Cluster software.
Client applications that run on cluster nodes should not map to logical IP addresses of an HA data service. During failover, these logical IP addresses might go away, leaving the client without a connection.
If you are upgrading from VERITAS Volume Manager (VxVM) 3.2 to 3.5, the Cluster Volume Manager (CVM) feature is not available until you install the CVM license key for version 3.5. Under VxVM 3.5, the version 3.2 CVM license key does not enable CVM and must be upgraded to the version 3.5 CVM license key.
In Solstice DiskSuite/Solaris Volume Manager configurations that use mediators, the number of mediator hosts configured for a diskset must be exactly two.
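As a sketch, mediator hosts are added with the metaset command; the diskset and node names below are hypothetical.

    # Add exactly two mediator hosts to the diskset.
    metaset -s oracle-ds -a -m phys-node-1 phys-node-2
    # Verify that both mediator hosts report a healthy status.
    medstat -s oracle-ds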
DiskSuite Tool (Solstice DiskSuite metatool) and the Enhanced Storage module of Solaris Management Console (Solaris Volume Manager) are not compatible with Sun Cluster 3.1 software.
With VxVM 3.2 or later, Dynamic Multipathing (DMP) cannot be disabled with the scvxinstall command during VxVM installation. This procedure is described in the chapter “Installing and Configuring VERITAS Volume Manager” in Sun Cluster Software Installation Guide for Solaris OS. The use of VERITAS Dynamic Multipathing is supported in the following configurations:
A single I/O path per node to the cluster's shared storage.
A supported multipathing solution (Sun StorEdge Traffic Manager, EMC PowerPath, Hitachi HDLM) that manages multiple I/O paths per node to the shared cluster storage.
Simple root disk groups (rootdg created on a single slice of the root disk) are not supported as disk types with VxVM on Sun Cluster 3.1 software.
Software RAID 5 is not supported.
Quotas are not supported on cluster file systems.
Sun Cluster 3.1 software does not support the use of the loopback file system (LOFS) on cluster nodes.
The umount -f command behaves in the same manner as the umount command without the -f option; it does not support forced unmounts.
The command unlink(1M) is not supported on non-empty directories.
The command lockfs -d is not supported. Use lockfs -n as a workaround.
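For example, to take and later release a name lock instead (the mount point below is hypothetical):

    # Apply a name lock rather than an unsupported delete lock.
    lockfs -n /global/data
    # Release the lock when you are done.
    lockfs -u /global/data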
The cluster file system does not support any of the Solaris file-system features by which one would place a communication endpoint in the file-system namespace. Therefore, although you can create a UNIX domain socket whose name is a path name into the cluster file system, the socket would not survive a node failover. In addition, any FIFOs or named pipes that you create on a cluster file system would not be globally accessible, and you should not attempt to use fattach from any node other than the local node.
Executing binaries from cluster file systems that are mounted with the forcedirectio mount option is not supported.
You cannot remount a cluster file system with the directio mount option added at remount time.
You cannot set the directio mount option on a single file by using the directio ioctl.
The following VxFS features are not supported in a Sun Cluster 3.1 configuration:
Quick I/O
Snapshots
Storage checkpoints
Cache advisories (these can be used, but the effect will be observed on the given node only)
VERITAS CFS (requires VERITAS cluster feature and VCS)
All other VxFS features and options that are supported in a cluster configuration are supported by Sun Cluster 3.1 software. See VxFS documentation and man pages for details about VxFS options that are or are not supported in a cluster configuration.
The following VxFS-specific mount options are not supported in a Sun Cluster 3.1 configuration:
convosync (Convert O_SYNC)
mincache
qlog, delaylog, tmplog
For information about administering VxFS cluster file systems in a Sun Cluster configuration, see “Administering Cluster File Systems” in Sun Cluster System Administration Guide for Solaris OS.
This section identifies restrictions on using IP Network Multipathing that apply only in a Sun Cluster 3.1 environment, or that differ from the information provided in the Solaris documentation for IP Network Multipathing.
IPv6 is not supported.
All public network adapters must be in IP Network Multipathing groups.
In the /etc/default/mpathd file, do not change TRACK_INTERFACES_ONLY_WITH_GROUPS from yes to no.
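As an illustration of both points, an adapter joins an IP Network Multipathing group through its /etc/hostname.<adapter> file; the adapter, host, and group names below are hypothetical.

    # /etc/hostname.qfe0 -- place the adapter in IPMP group sc_ipmp0
    phys-node-1 netmask + broadcast + group sc_ipmp0 up

    # /etc/default/mpathd -- leave this default setting unchanged
    TRACK_INTERFACES_ONLY_WITH_GROUPS=yes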
Most procedures, guidelines, and restrictions that are identified in the Solaris documentation for IP Network Multipathing are the same in a cluster or a noncluster environment. Therefore, see the appropriate Solaris document for additional information about IP Network Multipathing restrictions.
Operating Environment Release | For Instructions, Go To...
---|---
Solaris 8 operating environment | IP Network Multipathing Administration Guide
Solaris 9 operating environment | “IP Network Multipathing Topics” in System Administration Guide: IP Series
Do not configure cluster nodes as routers (gateways). If the system goes down, the clients cannot find an alternate router and cannot recover.
Do not configure cluster nodes as NIS or NIS+ servers. However, cluster nodes can be NIS or NIS+ clients.
Do not use a Sun Cluster configuration to provide a highly available boot or installation service on client systems.
Do not use a Sun Cluster configuration to provide an rarpd service.
If you install an RPC service on the cluster, the service must not use the following program numbers: 100141, 100142, and 100248. These numbers are reserved for the Sun Cluster daemons rgmd_receptionist, fed, and pmfd, respectively. If the RPC service you install also uses one of these program numbers, you must change that RPC service to use a different program number.
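For example, you can list any conflicting registrations with rpcinfo before installing the service:

    # Flag any RPC services registered under the reserved program numbers.
    rpcinfo -p | egrep '100141|100142|100248'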
Currently, Sun StorEdge Network Data Replicator (SNDR) can be used only with HAStorage. This restriction applies only to the lightweight resource group that includes the logical host that SNDR uses for replication; application resource groups can still use HAStoragePlus with SNDR. You can use a failover file system with HAStoragePlus and SNDR by using HAStorage for the SNDR resource group and HAStoragePlus for the application resource group, where the HAStorage and HAStoragePlus resources point at the same underlying DCS device. A patch is being developed to enable SNDR to work with HAStoragePlus.
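A minimal sketch of this split, assuming hypothetical resource group, resource, and device names, might look like the following scrgadm sequence:

    # Register the storage resource types, if not already registered.
    scrgadm -a -t SUNW.HAStorage
    scrgadm -a -t SUNW.HAStoragePlus
    # The lightweight SNDR resource group uses HAStorage for the device.
    scrgadm -a -j sndr-stor-rs -g sndr-rg -t SUNW.HAStorage \
        -x ServicePaths=/dev/global/rdsk/d4
    # The application resource group uses HAStoragePlus, pointing at
    # the same underlying device.
    scrgadm -a -j app-stor-rs -g app-rg -t SUNW.HAStoragePlus \
        -x GlobalDevicePaths=/dev/global/rdsk/d4 -x AffinityOn=True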
Running high-priority process scheduling classes on cluster nodes is not supported: do not run processes in the time-sharing scheduling class at high priority, or processes in the real-time scheduling class, on cluster nodes. Sun Cluster software relies on kernel threads that do not run in the real-time scheduling class; time-sharing processes that run at higher-than-normal priority, or real-time processes, can prevent those kernel threads from acquiring the CPU cycles they need.
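To audit a node for such processes, you can inspect the scheduling class and priority of everything that is running, for example:

    # Show each process's scheduling class (CLS) and priority (PRI);
    # look for RT entries or unusually high TS priorities.
    ps -e -o pid,class,pri,args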
Sun Cluster 3.1 software can only provide service for those data services that are either supplied with the Sun Cluster product or set up with the Sun Cluster data services API.
Sun Cluster software currently does not have an HA data service for the sendmail(1M) subsystem. The sendmail subsystem can run on individual cluster nodes, but its functionality, including mail delivery, routing, queuing, and retry, will not be highly available.
If you are using Sun Cluster HA for Oracle with Oracle 10g, do not install the Oracle binary files on a highly available local file system. Sun Cluster HA for Oracle does not support such a configuration. However, you may install data files, log files, and configuration files on a highly available file system.
If you have installed Oracle 10g binary files on the cluster file system, error messages for the Oracle cssd daemon might appear on the system console during the booting of a node. When the cluster file system is mounted, these messages no longer appear.
These error messages are as follows:
INIT: Command is respawning too rapidly. Check for possible errors. id: h1 "/etc/init.d/init.cssd run >/dev/null 2>&1 >/dev/null"
Sun Cluster HA for Oracle does not require the Oracle cssd daemon. Therefore, you may ignore these error messages.
The Sun Cluster HA for Oracle 3.0 data service can run on Sun Cluster 3.1 software only when used with the following versions of the Solaris operating environment:
Solaris 8, 32-bit version
Solaris 8, 64-bit version
Solaris 9, 32-bit version
The Sun Cluster HA for Oracle 3.0 data service cannot run on Sun Cluster 3.1 software when used with the 64-bit version of Solaris 9.
Because you cannot change host names after you install Sun Cluster software, adhere to the host-name requirements in the documentation for the Oracle Parallel Fail Safe/Real Application Clusters Guard option of Oracle Parallel Server/Real Application Clusters.
For more information on this restriction on hostnames and node names, see the Oracle Parallel Fail Safe/Real Application Clusters Guard documentation.
If the VERITAS NetBackup client is a cluster, only one logical host can be configured as the client because there is only one bp.conf file.
If the NetBackup client is a cluster and if one of the logical hosts on the cluster is configured as the NetBackup client, NetBackup cannot back up the physical hosts.
On the cluster running the master server, the master server is the only logical host that can be backed up.
Backup media cannot be attached to the master server, so one or more media servers are required.
In a Sun Cluster environment, robotic control is supported only on media servers, not on the NetBackup master server.
No Sun Cluster node may be an NFS client of a Sun Cluster HA for NFS-exported file system being mastered on a node in the same cluster. Such cross-mounting of Sun Cluster HA for NFS is prohibited. Use the cluster file system to share files among cluster nodes.
Applications running locally on the cluster must not lock files on a file system that is exported through NFS. Otherwise, local blocking (for example, flock(3UCB) or fcntl(2)) might interfere with the ability to restart the lock manager (lockd): during restart, a blocked local process might be granted a lock that a remote client intends to reclaim, which would cause unpredictable behavior.
Sun Cluster HA for NFS requires that all NFS client mounts be “hard” mounts.
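For example, a client-side mount of a file system exported by an HA for NFS logical host (host and path names below are hypothetical):

    # "hard" is the Solaris NFS default; it is shown explicitly here.
    mount -F nfs -o hard,intr lh-nfs:/global/export /mnt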
Sun Cluster 3.1 software does not support Secure NFS or the use of Kerberos with NFS; in particular, it does not support the secure and kerberos options to the share_nfs(1M) command. However, Sun Cluster 3.1 software does support the use of secure ports for NFS, which you enable by adding the entry set nfssrv:nfs_portmon=1 to the /etc/system file on cluster nodes.
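That is, append the following line to /etc/system on each cluster node; the setting takes effect at the next reboot.

    # Accept NFS requests only from clients that use privileged ports.
    set nfssrv:nfs_portmon=1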
Do not use NIS for naming services in a cluster that runs Sun Cluster HA for SAP liveCache, because the NIS entry in the name-service switch is consulted only when the local files are not available.
For more procedural information about the nsswitch.conf password requirements related to this restriction, see “Preparing the Nodes and Disks” in Sun Cluster Data Service for SAP liveCache Guide for Solaris OS.
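To illustrate the lookup-order issue, a typical name-service switch entry such as the following consults the nis source only when the files source cannot resolve the entry:

    # /etc/nsswitch.conf (example): nis is consulted only when
    # the entry is not found in the local files.
    passwd: files nis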