The following restrictions apply to the Sun Cluster 3.0 12/01 release:
Remote Shared Memory (RSM) transport types - These transport types are mentioned in the documentation, but they are not supported. If you use the RSMAPI, specify dlpi as the transport type.
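As an illustration, the transport type is specified when a cluster transport adapter is added with scconf(1M); the adapter and node names below are hypothetical examples, not values from this document:

```shell
# Add a cluster transport adapter, specifying dlpi as the transport type.
# Adapter name (qfe1) and node name (phys-node1) are hypothetical.
scconf -a -A trtype=dlpi,name=qfe1,node=phys-node1
```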
Scalable Coherent Interface (SCI) - The SBus SCI interface is not supported as a cluster interconnect. However, the PCI-SCI interface is now supported.
Automatic disk path monitoring - Automatic disk path monitoring is not supported. Disk path failures are detected on active paths only, not on inactive paths, so you must manually monitor disk paths to avoid double failures or loss of the path to a quorum device.
Storage devices with more than two physical paths to the enclosure - More than two paths are not supported. The Sun StorEdge A3500, for which two paths are supported to each of two nodes, is an exception.
SunVTS™ - This is not supported.
Multihost tape, CD-ROM, and DVD-ROM - These devices are not supported.
Loopback File System - Sun Cluster 3.0 software does not support the use of the loopback file system (LOFS) on cluster nodes.
Running client applications on the cluster nodes - This is not supported. Switchover or failover of a resource group might break TCP connections (for example, telnet or rlogin), including both connections that the cluster nodes initiated and connections that client hosts outside the cluster initiated.
Running high-priority process scheduling classes on cluster nodes - This is not supported. Do not run, on any cluster node, processes in the time-sharing scheduling class at higher-than-normal priority or processes in the real-time scheduling class. Sun Cluster 3.0 relies on kernel threads that do not run in the real-time scheduling class; time-sharing processes that run at higher-than-normal priority, or real-time processes, can prevent those kernel threads from acquiring the CPU cycles that they need.
File system quotas - Quotas are not supported in Sun Cluster 3.0 12/01 configurations.
Logical network interfaces - These interfaces are reserved for use by Sun Cluster 3.0 12/01 software.
Cluster file system restrictions
The umount -f command behaves in the same manner as the umount command without the -f option; forced unmounts are not supported.
The unlink(1M) command is not supported on non-empty directories.
The command lockfs -d is not supported. Use lockfs -n as a workaround.
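As a sketch of the workaround, with a hypothetical global mount point:

```shell
# lockfs -d is not supported on cluster file systems; use -n instead.
# The mount point /global/nfs is a hypothetical example.
lockfs -n /global/nfs
# Release the lock when finished.
lockfs -u /global/nfs
```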
The cluster file system does not support any of the Solaris file system features by which a communication endpoint is placed in the file system namespace. Therefore, you cannot create a UNIX domain socket whose name is a path name into the cluster file system, nor can you create FIFOs or named pipes, nor should you attempt to use fattach(3C).
Executing binaries from file systems that are mounted with the forcedirectio mount option is not supported.
Network Adapter Failover (NAFO) restrictions
All public networking adapters must be in NAFO groups.
Each node can have only one NAFO group per IP subnet. Sun Cluster 3.0 software does not support even the weak form of IP striping, in which multiple IP addresses exist on the same subnet.
Only one adapter in a NAFO group can be active at any time.
Sun Cluster 3.0 software does not support setting local-mac-address?=true in the OpenBoot™ PROM.
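You can check the current setting from the Solaris side with eeprom(1M); a sketch:

```shell
# Display the current OpenBoot PROM setting.
eeprom "local-mac-address?"
# Ensure the unsupported value is not in effect (false is the required value).
eeprom "local-mac-address?=false"
```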
Service and application restrictions
Sun Cluster 3.0 software provides service only for data services that are either supplied with the Sun Cluster product or created with the Sun Cluster data services API.
Do not use cluster nodes as mail servers because the Sun Cluster environment does not support the sendmail(1M) subsystem. Mail directories must reside on non-Sun Cluster nodes.
Do not configure cluster nodes as routers (gateways). If the system goes down, the clients cannot find an alternate router and cannot recover.
Do not configure cluster nodes as NIS or NIS+ servers. However, cluster nodes can be NIS or NIS+ clients.
Do not use a Sun Cluster configuration to provide a highly available boot or install service on client systems.
Do not use a Sun Cluster 3.0 configuration to provide an rarpd service.
Sun Cluster 3.0 HA for NFS restrictions
No Sun Cluster node may be an NFS client of an HA-NFS exported file system that is mastered on a node in the same cluster; such cross-mounting of HA-NFS is prohibited. Use the cluster file system to share files among cluster nodes.
Applications must not locally access file systems that are exported through NFS. Otherwise, local locking (for example, flock(3UCB) or fcntl(2)) might interfere with the ability to restart the lock manager (lockd). During restart, a blocked local process might be granted a lock that a remote client intends to reclaim, causing unpredictable behavior.
Sun Cluster HA for NFS requires that all NFS client mounts be "hard" mounts.
For Sun Cluster HA for NFS, do not use hostname aliases for network resources. NFS clients mounting cluster file systems using hostname aliases might experience statd lock recovery problems.
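As an illustration of both requirements, a client-side /etc/vfstab entry should mount the exported file system hard, using the network resource hostname itself rather than an alias. All names below are hypothetical examples:

```
#device to mount         device to fsck  mount point   FS type  fsck pass  mount at boot  mount options
hanfs-server:/global/nfs -               /mnt/nfsdata  nfs      -          yes            hard,intr
```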
Sun Cluster 3.0 software does not support Secure NFS or the use of Kerberos with NFS. In particular, the secure and kerberos options to share_nfs(1M) are not supported.
Sun Cluster HA for NetBackup restrictions
On the cluster running the master server, the master server is the only logical host that can be backed up.
Backup media cannot be attached to the master server, so one or more media servers are required.
Sun Cluster and NetBackup restrictions
If the NetBackup client is a cluster, only one logical host can be configured as the client because there is only one bp.conf file.
If the NetBackup client is a cluster and if one of the logical hosts on the cluster is configured as the NetBackup client, NetBackup cannot back up the physical hosts.
Volume manager restrictions
In Solstice DiskSuite configurations that use mediators, the number of mediator hosts configured for a diskset must be exactly two.
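For example, exactly two mediator hosts are added to a diskset with metaset(1M); the diskset and host names below are hypothetical:

```shell
# Add exactly two mediator hosts (-m) to the diskset.
metaset -s nfs-set -a -m phys-node1 phys-node2
```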
DiskSuite Tool (metatool) is not compatible with Sun Cluster 3.0 software.
Use of VxVM Dynamic Multipathing (DMP) with Sun Cluster 3.0 software to manage multiple paths from the same node is not supported.
Software RAID 5 is not supported.
Hardware restrictions
With the exception of clusters using Sun StorEdge A3x00, a pair of cluster nodes must have at least two multihost disk enclosures.
RAID level 5 is supported on only the following hardware platforms at this time:
- Sun StorEdge A5x00/A3500FC arrays.
- Sun StorEdge T3 and T3+ arrays. However, note that if you are using these arrays in a single-controller configuration, an additional mechanism for data redundancy, such as host-based mirroring, must also be used. If these arrays are used in a partner-group configuration, the controllers are redundant and you can use RAID 5 without host-based mirroring.
Alternate Pathing (AP) is not supported.
If you are using a Sun Enterprise 420R server with a PCI-card in slot J4701, the motherboard must be at dash-level 15 or higher (501-5168-15 or higher). To find the motherboard part number and revision level, look at the edge of the board closest to PCI slot 1.
System panics have been observed in clusters when UDWIS I/O cards are used in slot 0 of a board in a Sun Enterprise 10000 server; do not install UDWIS I/O cards in slot 0 of a board in this server (see BugId 4490386).
Data Service Timeout Period recommendation
When you use data services that are I/O intensive and that have a large number of disks configured in the cluster, increase the default timeout of the data service to allow for the time that is consumed by retries within the I/O subsystem during disk failures. For more information or help with increasing data service timeouts, contact your local support engineer.
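As a sketch, a resource's timeout properties can be raised with scrgadm(1M); the resource name and timeout values below are hypothetical examples, not recommended settings:

```shell
# Raise the start and stop timeouts (in seconds) for an I/O-intensive resource.
scrgadm -c -j nfs-res -y Start_timeout=600 -y Stop_timeout=600
```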
Data Service restrictions
Identify the requirements for all data services before you begin Solaris and Sun Cluster installation. Otherwise, you might perform the installation incorrectly and need to completely reinstall the Solaris and Sun Cluster software.
For example, the Oracle Parallel Fail Safe/Real Application Clusters Guard option of Oracle Parallel Server/Real Application Clusters has special requirements for the hostnames/node names that you use in the cluster. You must accommodate these requirements before you install Sun Cluster software because you cannot change hostnames after you install Sun Cluster software. For more information on the special requirements for the hostnames/node names, see the Oracle Parallel Fail Safe/Real Application Clusters Guard documentation.