The following restrictions apply to the Sun Cluster 3.0 5/02 release:
Remote Shared Memory (RSM) transport types – These transport types are mentioned in the documentation, but they are not supported. If you use the RSMAPI, specify dlpi as the transport type.
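For example, a cluster transport adapter can be configured with the dlpi transport type by using the scconf command; the adapter and node names shown here are placeholders for your configuration:

    # scconf -a -A trtype=dlpi,name=hme1,node=phys-schost-1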
Scalable Coherent Interface (SCI) – The SBus SCI interface is not supported as a cluster interconnect. However, the PCI-SCI interface is supported.
Logical network interfaces – These interfaces are reserved for use by Sun Cluster 3.0 software.
Disk path monitoring – Sun Cluster software monitors only active disk paths (those from the current primary node) for failures. You must monitor the remaining disk paths manually to avoid double failures or loss of the path to a quorum device.
SunVTS™ – This product is not supported.
Multihost tape, CD-ROM, and DVD-ROM – These devices are not supported.
Loopback File System – Sun Cluster 3.0 software does not support the use of the loopback file system (LOFS) on cluster nodes.
Running client applications on the cluster nodes – Client applications that run on cluster nodes should not map to logical IP addresses that are part of an HA data service. During a failover, these logical IP addresses might be removed, leaving the client without a connection.
Running high‐priority process scheduling classes on cluster nodes – This is not supported. Do not run, on any cluster node, any processes that run in the time‐sharing scheduling class with a higher‐than‐normal priority or any processes that run in the real‐time scheduling class. Sun Cluster 3.0 relies on kernel threads that do not run in the real‐time scheduling class. Other time‐sharing processes that run at higher‐than‐normal priority or real‐time processes can prevent the Sun Cluster kernel threads from acquiring needed CPU cycles.
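To check whether any processes on a node run in the real-time scheduling class, you can list the scheduling class of every process with the standard Solaris ps(1) command; the CLS column of the output shows the class (TS, RT, and so forth) of each process:

    # ps -ecl | grep RT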
File system quotas – Quotas are not supported in Sun Cluster 3.0 configurations.
Sun Cluster 3.0 software can provide service only for data services that are either supplied with the Sun Cluster product or set up with the Sun Cluster data services API.
Do not use cluster nodes as mail servers because the Sun Cluster environment does not support the sendmail(1M) subsystem. Mail directories must reside on non-Sun Cluster nodes.
Do not configure cluster nodes as routers (gateways). If the system goes down, the clients cannot find an alternate router and cannot recover.
Do not configure cluster nodes as NIS or NIS+ servers. However, cluster nodes can be NIS or NIS+ clients.
Do not use a Sun Cluster configuration to provide a highly available boot or install service on client systems.
Do not use a Sun Cluster 3.0 configuration to provide an rarpd service.
RAID level 5 is currently supported only on the following hardware platforms:
Sun StorEdge A5x00/A3500FC arrays.
Sun StorEdge T3 and T3+ arrays. However, note that if you are using these arrays in a single-controller configuration, an additional mechanism for data redundancy, such as host-based mirroring, must also be used. If these arrays are used in a partner-group configuration, the controllers are redundant and you can use RAID 5 without host-based mirroring.
Alternate Pathing (AP) is not supported.
If you are using a Sun Enterprise™ 420R server with a PCI card in slot J4701, the motherboard must be at dash level 15 or higher (501-5168-15 or higher). To find the motherboard part number and revision level, look at the edge of the board closest to PCI slot 1.
System panics have been observed in clusters when UDWIS I/O cards are used in slot 0 of a board in a Sun Enterprise 10000 server. Do not install UDWIS I/O cards in slot 0 of a board in this server (see BugId 4490386).
In Solstice DiskSuite configurations that use mediators, the number of mediator hosts configured for a diskset must be exactly two.
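For example, exactly two mediator hosts are added to a diskset with the metaset command; the diskset name and host names shown here are placeholders:

    # metaset -s nfsset -a -m phys-schost-1 phys-schost-2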
DiskSuite Tool (metatool) is not compatible with Sun Cluster 3.0 software.
Use of VxVM Dynamic Multipathing (DMP) with Sun Cluster 3.0 software to manage multiple paths from the same node is not supported.
Simple root disk groups (rootdg created on a single slice of the root disk) are not supported as disk types with VxVM on Sun Cluster software.
Software RAID 5 is not supported.
The command umount -f behaves in the same manner as the umount command without the -f option. It does not support forced unmounts.
The command unlink(1M) is not supported on non-empty directories.
The command lockfs -d is not supported. Use lockfs -n as a workaround.
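For example, to place a name lock on a globally mounted file system instead of a delete lock (the mount point shown here is a placeholder):

    # lockfs -n /global/oracle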
The cluster file system does not support any of the file-system features of Solaris software by which one would put a communication end-point in the file-system name space. Therefore, although you can create a UNIX domain socket whose name is a path name into the cluster file system, the socket would not survive a node failover. In addition, any fifos or named pipes you create on a cluster file system would not be globally accessible, nor should you attempt to use fattach from any node other than the local node.
Executing binaries from file systems that are mounted with the forcedirectio mount option is not supported.
The following VxFS features are not supported in a Sun Cluster 3.0 configuration:
Quick I/O
Snapshots
Storage checkpoints
Cache advisories (these can be used, but the effect is observed only on the given node)
VERITAS CFS (requires the VERITAS cluster feature and VCS)
All other VxFS features and options that are supported in a cluster configuration are supported by Sun Cluster 3.0 software. See VxFS documentation and man pages for details about VxFS options that are or are not supported in a cluster configuration.
The following VxFS-specific mount options are not supported in a Sun Cluster 3.0 configuration:
convosync (Convert O_SYNC)
mincache
qlog, delaylog, tmplog
For a VxFS cluster file system, you must globally mount and unmount the cluster file system from the primary node (the node that masters the disk on which the VxFS file system resides) to ensure that the operation succeeds. A VxFS cluster file system mount or unmount operation that is performed from a secondary node might fail.
For a VxFS cluster file system, you must issue ioctls only from the primary node. If you do not know whether an administration command involves ioctls, issue the command from the primary node.
To administer a VxFS cluster file system, you must perform all VxFS administration commands from the primary node of the VxFS cluster file system.
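As a sketch of this procedure, you might first identify the primary node of the device group with scstat and then perform the global mount from that node; the device group, volume, and mount point names shown here are placeholders:

    # scstat -D
    # mount -F vxfs -o global /dev/vx/dsk/oradg/vol01 /global/oracle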
All public networking adapters must be in NAFO groups.
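For example, a NAFO group that contains two public network adapters can be created with the pnmset command; the group and adapter names shown here are placeholders:

    # pnmset -c nafo0 -o create qfe0 qfe1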
Only one NAFO group per IP subnet is supported on each node. Sun Cluster 3.0 software does not support even the weak form of IP striping, in which multiple IP addresses exist on the same subnet.
Only one adapter in a NAFO group can be active at any time.
Sun Cluster 3.0 software does not support setting local-mac-address?=true in the OpenBoot™ PROM.
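You can verify the current setting of this OpenBoot™ PROM variable from a running node with the eeprom(1M) command, and set it to false if necessary (the quotes protect the ? character from the shell):

    # eeprom 'local-mac-address?'
    # eeprom 'local-mac-address?=false'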
This section describes restrictions for specific data services. There are no restrictions that apply to all data services.
Future Sun Cluster Release Notes will not include data service restrictions that apply to specific data services. However, Sun Cluster Release Notes will document any data service restrictions that apply to all data services.
For additional data service restrictions that apply to specific data services, see the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.
If you use the Oracle Parallel Fail Safe/Real Application Clusters Guard option of Oracle Parallel Server/Real Application Clusters, note that you cannot change hostnames after you install Sun Cluster software.
For more information about this restriction on hostnames and node names, see the Oracle Parallel Fail Safe/Real Application Clusters Guard documentation.
If the VERITAS NetBackup client is a cluster, only one logical host can be configured as the client because there is only one bp.conf file.
If the NetBackup client is a cluster and if one of the logical hosts on the cluster is configured as the NetBackup client, NetBackup cannot back up the physical hosts.
On the cluster running the master server, the master server is the only logical host that can be backed up.
Backup media cannot be attached to the master server, so one or more media servers are required.
No Sun Cluster node may be an NFS client of a Sun Cluster HA for NFS-exported file system that is being mastered on a node in the same cluster. Such cross-mounting of Sun Cluster HA for NFS is prohibited. Use the cluster file system to share files among cluster nodes.
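As a sketch of the cluster file system approach, a file system can be mounted globally on all nodes through an /etc/vfstab entry that includes the global mount option; the metadevice and mount point shown here are placeholders:

    /dev/md/nfsset/dsk/d100  /dev/md/nfsset/rdsk/d100  /global/nfs  ufs  2  yes  global,logging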
Applications running locally on the cluster must not lock files on a file system that is exported through NFS. Otherwise, local locking (for example, flock(3UCB) or fcntl(2)) might interfere with the ability to restart the lock manager (lockd). During restart, a blocked local process might be granted a lock that is intended to be reclaimed by a remote client, which would cause unpredictable behavior.
Sun Cluster HA for NFS requires that all NFS client mounts be “hard” mounts.
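For example, an NFS client would mount a file system exported by Sun Cluster HA for NFS with the hard option; the logical hostname and paths shown here are placeholders:

    # mount -F nfs -o hard,intr schost-nfs-lh:/global/nfs/export /mnt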
For Sun Cluster HA for NFS, do not use hostname aliases for network resources. NFS clients mounting cluster file systems using hostname aliases might experience statd lock recovery problems.
Sun Cluster 3.0 software does not support Secure NFS or the use of Kerberos with NFS. In particular, it does not support the secure and kerberos options to the share_nfs(1M) command. However, Sun Cluster 3.0 software does support the use of secure ports for NFS, which you enable by adding the entry set nfssrv:nfs_portmon=1 to the /etc/system file on cluster nodes.
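For example, the entry appears in /etc/system exactly as follows and takes effect after the node is rebooted:

    set nfssrv:nfs_portmon=1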