This section describes Sun Cluster configuration restrictions.
Note the following restrictions related to services and applications.
Sun Cluster can provide service only for data services that are either supplied with Sun Cluster or set up using the Sun Cluster data services API.
Do not configure the Sun Cluster nodes as mail servers, because sendmail(1M) is not supported in a Sun Cluster environment. No mail directories should reside on Sun Cluster nodes.
Do not configure Sun Cluster systems as routers (gateways). If the system goes down, the clients cannot find an alternate router and cannot recover.
Do not configure Sun Cluster systems as NIS or NIS+ servers. Sun Cluster nodes can be NIS or NIS+ clients, however.
A Sun Cluster configuration cannot be used to provide a highly available boot or install service to client systems.
A Sun Cluster configuration cannot be used to provide highly available rarpd service.
The Solaris interface groups feature is not supported on Sun Cluster, as it disrupts cluster switchover and failover behavior. You must disable Solaris interface groups on all cluster nodes. See "Disabling Solaris Interface Groups" for more information.
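The referenced procedure is the authoritative source; as a sketch, assuming the ip_enable_group_ifs parameter controls this feature on your Solaris release, disabling interface groups typically looks like the following:

```sh
# Disable Solaris interface groups on the running system.
# Assumes the ip_enable_group_ifs ndd parameter exists on this
# Solaris release; see "Disabling Solaris Interface Groups".
ndd -set /dev/ip ip_enable_group_ifs 0

# To make the setting persist across reboots, add this line to
# /etc/system on every cluster node:
#   set ip:ip_enable_group_ifs=0
```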
Sun Cluster reserves certain port numbers for internal use. These port numbers are stored in the clustername.cdb file. Note the following reserved port numbers when planning your configuration, data services, and applications.
In addition, note that Solaris reserves ports 6000 to 6031 for UNIX Distributed Lock Manager (UDLM). UDLM is used with Oracle Parallel Server configurations.
Table 2-4 Default Ports Reserved by Sun Cluster

| Port Number | Reserved For... |
|---|---|
| 5556 | Cluster Membership Monitor |
| 5568-5599 | VERITAS Volume Manager cluster feature, vxclust |
| 5559 | VERITAS Volume Manager cluster feature, vxkmsgd |
| 5560 | VERITAS Volume Manager cluster feature, vxconfigd |
| 603 | sm_configd and smad (for TCP and UDP) |
You cannot change these port numbers, except for those used by sm_configd and smad. To change the port numbers used by sm_configd or smad, edit the /etc/services file on every node; the file must be identical on all nodes. For all other ports, you must change the application's port use rather than the cluster's.
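For example, such an edit might look like the following sketch; the service-name spellings, protocol assignments, and the replacement port number (1603) are illustrative assumptions, not values from this guide:

```
# Hypothetical /etc/services entries moving sm_configd and smad
# off port 603; the same lines must appear on every cluster node.
sm_configd   1603/tcp
smad         1603/udp
```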
Note the following restrictions related to Sun Cluster HA for NFS.
Do not run, on any Sun Cluster node, any applications that access the Sun Cluster HA for NFS file system locally. For example, on Sun Cluster systems, users should not locally access any Sun Cluster file systems that are NFS exported. This is because local locking interferes with the ability to kill and restart lockd(1M). Between the kill and the restart, a blocked local process is granted the lock, which prevents reclamation of the lock by the client machine.
Sun Cluster does not support cross-mounting of Sun Cluster HA for NFS resources.
Sun Cluster HA for NFS requires that all NFS client mounts be "hard" mounts.
For Sun Cluster HA for NFS, do not use host name aliases for the logical hosts. NFS clients mounting HA file systems using host name aliases for the logical hosts might experience statd lock recovery problems.
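As an illustration of the two preceding points, a client-side mount might look like the following sketch; the logical host name lhost1 and the exported path /hanfs/data are hypothetical:

```sh
# Hard-mount the HA file system using the logical host's primary
# name (lhost1), not an alias. "hard" is the Solaris default, but
# stating it explicitly documents the intent; do not use "soft".
mount -F nfs -o hard,intr lhost1:/hanfs/data /mnt/data
```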
Sun Cluster does not support Secure NFS or the use of Kerberos with NFS. In particular, the secure and kerberos options to share_nfs(1M) are not supported.
Note the following hardware-related restrictions.
A pair of Sun Cluster nodes must have at least two multihost disk enclosures, with one exception: if you use Sun StorEdge A3x00 disks, you can use only one.
The SS1000 and SC2000 hardware platforms are supported with Sun Cluster 2.2 only on Solaris 2.6. This restriction is due to the removal of support for the SFE 1.0 be(7D) driver in versions of Solaris later than 2.6. The SFE 1.0 be(7D) driver is used for the cluster interconnect on the SS1000 and SC2000 machines.
You cannot mix UDWIS/DWIS and SCI controllers on the same I/O board.
The following restrictions apply only to Ultra 2 Series configurations:
The Sun Cluster node must be reinstalled to migrate from one basic hardware configuration to another. For example, a configuration with three FC/S cards and one SQEC card must be reinstalled to migrate to a configuration with two FC/S cards, one SQEC card, and one SFE or SunFDDI(TM) card.
Dual FC/OMs per FC/S card are supported only when used with the SFE or SunFDDI card.
In the SFE or SunFDDI card configuration, recovery from a dual FC/OM FC/S card failure is by failover, not by mirroring or hot sparing.
For more information about Sun Cluster 2.2 hardware considerations and procedures, see the Sun Cluster 2.2 Hardware Site Preparation, Planning, and Installation Guide and Sun Cluster 2.2 Hardware Service Manual.
Note the following restrictions related to Solstice DiskSuite.
In Solstice DiskSuite configurations using mediators, the number of mediator hosts configured for a diskset must be an even number.
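For instance, mediator hosts are typically added in pairs with the metaset command; the diskset name hadata and the node names below are hypothetical:

```sh
# Add two mediator hosts (-m) to the diskset, keeping the mediator
# count even; all names are illustrative:
metaset -s hadata -a -m phys-node1 phys-node2

# Check mediator status for the diskset:
medstat -s hadata
```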
Although the shared CCD is an optional feature for two-node clusters running VxVM, a shared CCD cannot be used in Solstice DiskSuite configurations.
The +D option to scconf(1M) cannot be used with Solstice DiskSuite.
The RAID5 feature in the Solstice DiskSuite product is not supported. RAID5 is supported under VERITAS Volume Manager. A hardware implementation of RAID5 is also supported by the Sun StorEdge A3x00 disk expansion unit.
In the event of a power failure that brings down the entire cluster, user intervention is required to restart the cluster. The administrator must determine the last node that went down (by examining /var/adm/messages) and run scadmin startcluster on that node. Then the administrator must run scadmin startnode on the other cluster nodes to bring the cluster back online.
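A sketch of that restart sequence, assuming a hypothetical node phys-node1 (the last node down) and a cluster named hacluster:

```sh
# On the node identified from /var/adm/messages as the last one
# down, restart the cluster (node and cluster names hypothetical):
scadmin startcluster phys-node1 hacluster

# Then, on each of the other cluster nodes, rejoin the cluster:
scadmin startnode
```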
Sun Cluster software must run in the C locale. This applies to Sun Cluster daemons, Sun Cluster rc scripts, and the superuser environment. Consequently, do not configure the superuser environment or the default environment (through the /etc/default/init file) to anything other than the C locale on any host in the cluster.
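For instance, a quick check and a representative /etc/default/init setting might look like this sketch (the variable names are standard Solaris):

```sh
# Verify that the current (superuser) environment uses the C locale:
locale

# In /etc/default/init, leave the default locale at C, for example:
#   LANG=C
# and do not set LANG or any LC_* variable to another locale.
```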
Sun Cluster does not support the use of the loopback file system (lofs) on Sun Cluster nodes.
Do not run client applications on the Sun Cluster nodes. Because of local interface group semantics, a switchover or failover of a logical host can break a TCP (telnet/rlogin) connection. This applies both to connections initiated by the server hosts of the cluster and to connections initiated by client hosts outside the cluster.
Do not run, on any Sun Cluster node, any processes that run in the real-time scheduling class.
Do not access the /logicalhost directories from shells on any nodes. If you have shell connections to any /logicalhost directories when a switchover or failover is attempted, the switchover or failover will be blocked.
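Before a planned switchover, you can check for such processes; a sketch, assuming a logical host file system mounted at a hypothetical /lhost1:

```sh
# List process IDs of any processes (including shells) with open
# files or working directories in the logical host's file system:
fuser -c /lhost1
```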
The Sun Cluster HA administrative file system cannot be grown using the Solstice DiskSuite growfs(1M) command.
Logical network interfaces are reserved for use by Sun Cluster.
Sun Prestoserve is not supported. Prestoserve works within the host system, which means that any data contained in the Prestoserve memory would not be available to the Sun Cluster sibling in the event of a switchover.
The be FastEthernet device driver has reached end of life and is not supported in the Solaris 7 or Solaris 8 operating environments. Consequently, the SPARCserver(TM) 1000 and SPARCcenter(TM) 2000, which use the be driver for the cluster interconnect, are not supported for Sun Cluster 2.2 with the Solaris 7 or Solaris 8 operating environments. These servers are supported for Sun Cluster 2.2 only with the Solaris 2.6 operating environment.
The user-defined script clustername.reconfig.user_script is not supported in Sun Cluster 2.2.