The following restrictions apply to the Sun Cluster 3.0 Update 1 release:
Remote Shared Memory (RSM) transport types - These transport types are mentioned in the documentation, but they are not supported.
Scalable Coherent Interface (SCI) - The SCI interface is not supported as a cluster interconnect.
Automatic disk path monitoring - Disk path monitoring is not supported. You must monitor disk paths manually to avoid a double failure or loss of the path to a quorum device. Monitoring detects failures of active disk paths only, not of inactive paths.
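Because monitoring is manual in this release, administrators must check paths by hand. The following is a minimal sketch of one manual approach; the enclosure path is a placeholder, and the commands assume the standard Sun Cluster 3.0 and Solaris administrative utilities (scdidadm, scstat, luxadm).

```shell
# List the device IDs (DIDs) and their physical paths on this node.
scdidadm -l

# Check quorum configuration and vote counts to confirm that the
# quorum device is still reachable.
scstat -q

# For fibre-channel enclosures, luxadm can report the state of the
# paths to an enclosure (the device path below is a placeholder).
luxadm display /dev/es/ses0
```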
Storage devices with more than two physical paths to the enclosure - More than two physical paths to a storage enclosure are not supported. The exception is the Sun StorEdge A3500, which supports two paths to each of two nodes.
SunVTS™ - This is not supported.
Framework and data service upgrades - Upgrades are supported only between major Sun Cluster releases, not between update releases. Therefore, there is no automated upgrade between Sun Cluster 3.0 GA and Sun Cluster 3.0 Update 1. The manual upgrade procedure is in the UPGRADE_README file on the Sun Cluster 3.0 7/01 CD-ROM at the following location: /cdrom/suncluster_3_0u1/SunCluster_3.0/Tools/Upgrade/
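For convenience, the procedure can be read directly from the mounted CD-ROM; the sketch below assumes the CD is mounted at the documented location.

```shell
# View the manual upgrade procedure shipped on the
# Sun Cluster 3.0 7/01 CD-ROM.
cd /cdrom/suncluster_3_0u1/SunCluster_3.0/Tools/Upgrade
more UPGRADE_README
```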
Multihost tape and CD-ROM - This is not supported.
Loopback File System - The software does not support the use of the loopback file system (LOFS) on cluster nodes.
Running client applications on the cluster nodes - This is not supported. A switchover or failover of a resource group might break a TCP connection (for example, a telnet or rlogin session). This applies both to connections that the cluster nodes initiated and to connections that client hosts outside the cluster initiated.
Running high priority process scheduling classes on cluster nodes - This is not supported. Do not run, on any cluster node, any processes that run in the time-sharing scheduling class with a higher-than-normal priority or any processes that run in the real-time scheduling class. Sun Cluster 3.0 relies on kernel threads that do not run in the real-time scheduling class. Other time-sharing processes that run at higher-than-normal priority or real-time processes can prevent the Sun Cluster kernel threads from acquiring needed CPU cycles.
File system quotas - Quotas are not supported in Sun Cluster 3.0 Update 1.
Logical network interfaces - These interfaces are reserved for use by Sun Cluster 3.0 Update 1.
Cluster file system restrictions
The umount -f command behaves in the same manner as the umount command without the -f option; the cluster file system does not support forced unmounts.
The unlink(1M) command is not supported on directories that are not empty.
The command lockfs -d is not supported. Use lockfs -n as a workaround.
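For example, to quiesce a cluster file system before a backup, the name lock can stand in for the unsupported delete lock; the mount point below is a placeholder.

```shell
# Not supported on cluster file systems:
#   lockfs -d /global/data
# Workaround: apply a name lock instead.
lockfs -n /global/data

# Display the current lock status of the file system.
lockfs /global/data

# Release the lock when finished.
lockfs -u /global/data
```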
The cluster file system does not support any of the Solaris file system features that place a communication end point in the file system name space. Therefore, you cannot create a UNIX domain socket whose name is a path into the cluster file system, and you cannot create FIFOs (named pipes) there. Do not attempt to use fattach(3C).
Executing binaries from file systems mounted with the forcedirectio mount option is not supported.
Network Adapter Failover (NAFO) restrictions
All public networking adapters must be in NAFO groups.
Only one NAFO group is allowed per IP subnet on each node. Sun Cluster 3.0 does not support even the weak form of IP striping, in which multiple IP addresses exist on the same subnet.
Only one adapter in a NAFO group can be active at any time.
Sun Cluster 3.0 does not support setting local-mac-address?=true in the OpenBoot™ PROM.
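A NAFO group is typically created with the pnmset utility; the group and adapter names below (nafo0, qfe0, qfe1) are examples only.

```shell
# Create NAFO group nafo0 containing two adapters on the same
# subnet; only one adapter is active at a time.
pnmset -c nafo0 -o create qfe0 qfe1

# List the configured NAFO groups and their status.
pnmstat -l

# Verify that local-mac-address? is false, as required.
eeprom "local-mac-address?"
```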
Service and application restrictions
Sun Cluster 3.0 can provide service for only those data services that are either supplied with the Sun Cluster product or set up with the Sun Cluster data services API.
Do not use cluster nodes as mail servers because the Sun Cluster environment does not support the sendmail(1M) subsystem. Mail directories must reside on non-Sun Cluster nodes.
Do not configure cluster nodes as routers (gateways). If a node that is acting as a router goes down, the clients cannot find an alternate router and cannot recover.
Do not configure cluster nodes as NIS or NIS+ servers. However, cluster nodes can be NIS or NIS+ clients.
Do not use a Sun Cluster configuration to provide a highly available boot or install service on client systems.
Do not use a Sun Cluster 3.0 configuration to provide an rarpd service.
The Sun Cluster 3.0 data services API supports only 32-bit data services. The application on which the Sun Cluster data service depends can be a 64-bit application, but the data services' methods and monitors that support the application in a cluster must be 32-bit programs.
Sun Cluster 3.0 HA for NFS restrictions
On any cluster node, do not run an application that accesses a Sun Cluster HA for NFS exported file system over NFS from another node. Access such file systems through the cluster file system only. Using an NFS-exported file system from a cluster node might lead to unpredictable locking behavior.
Sun Cluster HA for NFS requires that all NFS client mounts be "hard" mounts.
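On a Solaris NFS client, a hard mount is specified with the hard option; the logical hostname and paths below are placeholders.

```shell
# Mount the HA for NFS file system with a hard mount.
mount -F nfs -o hard,intr lhost-nfs:/global/nfs/export /mnt/data

# Or as the equivalent /etc/vfstab entry:
# lhost-nfs:/global/nfs/export  -  /mnt/data  nfs  -  yes  hard,intr
```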
For Sun Cluster HA for NFS, do not use hostname aliases for network resources. NFS clients mounting cluster file systems using hostname aliases might experience statd lock recovery problems.
Sun Cluster 3.0 does not support Secure NFS or the use of Kerberos with NFS. In particular, the secure and kerberos options to the share_nfs(1M) command are not supported.
Volume manager restrictions
In Solstice DiskSuite configurations that use mediators, the number of mediator hosts configured for a diskset must be exactly two.
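Mediator hosts are added to a diskset with the metaset command; the diskset and host names below are placeholders.

```shell
# Add exactly two mediator hosts to the diskset.
metaset -s nfs-set -a -m phys-node1 phys-node2

# Check the mediator status for the diskset.
medstat -s nfs-set
```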
DiskSuite Tool (metatool) is not compatible with Sun Cluster 3.0.
VxVM Dynamic Multipathing (DMP) with Sun Cluster 3.0 software is not supported.
Software RAID 5 is not supported.
Hardware restrictions
With the exception of clusters using Sun StorEdge A3x00, a pair of cluster nodes must have at least two multihost disk enclosures.
Hardware RAID 5 is supported only with the Sun StorEdge A3x00.
Alternate Pathing (AP) is not supported.