This section describes Sun Cluster configuration restrictions.
Note the following restrictions related to services and applications.
Sun Cluster can provide services only for data services that are either supplied with Sun Cluster or set up by using the Sun Cluster data services API.
Do not configure the Sun Cluster nodes as mail servers, because sendmail(1M) is not supported in a Sun Cluster environment. No mail directories should reside on Sun Cluster nodes.
Do not configure Sun Cluster systems as routers (gateways). If a node goes down, clients that use it as a router cannot find an alternate router and therefore cannot recover.
Do not configure Sun Cluster systems as NIS or NIS+ servers. Sun Cluster nodes can be NIS or NIS+ clients, however.
A Sun Cluster configuration cannot be used to provide a highly available boot or install service to client systems.
A Sun Cluster configuration cannot be used to provide highly available rarpd service.
Note the following restrictions related to Sun Cluster HA for NFS.
Do not run, on any Sun Cluster node, any applications that access the Sun Cluster HA for NFS file system locally. For example, on Sun Cluster systems, users should not locally access any Sun Cluster file systems that are NFS exported. This is because local locking interferes with the ability to kill and restart lockd(1M). Between the kill and the restart, a blocked local process is granted the lock, which prevents reclamation of the lock by the client machine.
Sun Cluster does not support cross-mounting of Sun Cluster HA for NFS resources.
Sun Cluster HA for NFS requires that all NFS client mounts be "hard" mounts.
For Sun Cluster HA for NFS, do not use host name aliases for the logical hosts. NFS clients mounting HA file systems using host name aliases for the logical hosts might experience statd lock recovery problems.
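For example, a client would mount an HA file system with the hard option and with the logical host name itself, not an alias. In the following sketch, the logical host name hahost1, the exported path /hahost1/export, and the mount point /export are hypothetical:

    # mount -F nfs -o hard,intr hahost1:/hahost1/export /export

The equivalent /etc/vfstab entry would be:

    hahost1:/hahost1/export  -  /export  nfs  -  yes  hard,intr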
Sun Cluster does not support Secure NFS or the use of Kerberos with NFS. In particular, the -secure and -kerberos options to share_nfs(1M) are not supported.
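For example, an HA file system would be shared with ordinary share_nfs(1M) options only, omitting the unsupported security options; the path below is hypothetical:

    # share -F nfs -o rw /hahost1/export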
Note the following hardware-related restrictions.
A pair of Sun Cluster nodes must have at least two multihost disk enclosures, with one exception: if you use Sun StorEdge A3000 disks, you can use only one such expansion unit.
The SS1000 and SC2000 hardware platforms are not supported with Sun Cluster 2.2 and Solaris 7. They are supported under Solaris 2.6. This restriction is due to the removal of support for the SFE 1.0 be(7D) driver under Solaris 7. That driver is used for the cluster interconnect on the SS1000 and SC2000 machines.
The following restrictions apply only to Ultra 2 Series configurations:
The Sun Cluster node must be reinstalled to migrate from one basic hardware configuration to another. For example, a configuration with three FC/S cards and one SQEC card must be reinstalled to migrate to a configuration with two FC/S cards, one SQEC card, and one SFE or SunFDDI card.
Dual FC/OMs per FC/S card are supported only when used with the SFE or SunFDDI card.
In the SFE or SunFDDI card configuration, recovery from a dual FC/OM FC/S card failure is by failover, not by mirroring or hot sparing.
Note the following restrictions related to Solstice DiskSuite.
In Solstice DiskSuite configurations using mediators, the number of mediator hosts configured for a diskset must be an even number.
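For example, mediator hosts are added to a diskset with the metaset(1M) -m option, typically in pairs so that the count remains even; the diskset and host names here are hypothetical:

    # metaset -s hadisks -a -m phys-hahost1 phys-hahost2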
The RAID5 feature in the Solstice DiskSuite product is not supported. RAID5 is supported under Sun StorEdge Volume Manager and Cluster Volume Manager. A hardware implementation of RAID5 is also supported by the Sun StorEdge A3000 disk expansion unit.
In the event of a power failure that brings down the entire cluster, user intervention is required to restart the cluster. The administrator must determine the last node that went down (by examining /var/adm/messages) and run scadmin startcluster on that node. Then the administrator must run scadmin startnode on the other cluster nodes to bring the cluster back online.
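As a sketch of the recovery sequence, assuming /var/adm/messages shows that phys-hahost1 was the last node to go down, that the cluster is named sc-cluster (both names are hypothetical), and the usual scadmin startcluster localnode clustername usage:

    phys-hahost1# scadmin startcluster phys-hahost1 sc-cluster
    phys-hahost2# scadmin startnode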
Sun Cluster does not support the use of the loopback file system (lofs) on Sun Cluster nodes.
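To verify that no lofs file systems are configured or mounted on a node, you might check /etc/vfstab and the current mounts, for example:

    # grep lofs /etc/vfstab
    # mount -v | grep lofs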
Do not run client applications on the Sun Cluster nodes. Because of local interface group semantics, a switchover or failover of a logical host may cause a TCP (telnet/rlogin) connection to be broken. This applies both to connections that were initiated by the server hosts of the cluster and to connections that were initiated by client hosts outside the cluster.
Do not run processes in the real-time scheduling class on any Sun Cluster node.
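To check for processes in the real-time (RT) scheduling class on a node, you might list process scheduling classes with ps(1), for example:

    # ps -e -o pid,class,comm | grep RT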
Do not access the /logicalhost directories from shells on any nodes. If any shells are accessing /logicalhost directories when a switchover or failover is attempted, the switchover or failover is blocked.
The Sun Cluster HA administrative file system cannot be grown using the Solstice DiskSuite growfs(1M) command.
Logical network interfaces are reserved for use by Sun Cluster.
Sun Prestoserve is not supported. Prestoserve works within the host system, which means that any data contained in the Prestoserve memory would not be available to the Sun Cluster sibling in the event of a switchover.