All nodes in a cluster must be installed with the same version of the Solaris operating environment (Solaris 2.6, Solaris 7, or Solaris 8) before you can install the Sun Cluster software. When you install Solaris on cluster nodes, follow the general rules in this section.
Install at least the Entire Distribution software group of the Solaris packages on all Sun Cluster nodes. All platforms except the E10000 require at least the Entire Distribution installation; E10000 systems require the Entire Distribution + OEM.
After installing the Solaris operating environment, you must install the latest patches. For the current list of required patches for the Solaris operating environment, consult your Enterprise Services representative or service provider, or see the SunSolve website at http://sunsolve.sun.com.
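As a sketch, the patches already applied to a node can be listed with showrev(1M) for comparison against the required-patch list. The fallback message is illustrative; showrev exists only on Solaris systems:

```shell
# List the patches applied to this node for comparison against the
# current required-patch list from SunSolve.  showrev(1M) exists only
# on Solaris, so fall back with a note on other systems.
if command -v showrev >/dev/null 2>&1; then
    patch_report=$(showrev -p)
else
    patch_report="showrev not found: run this on a Solaris node"
fi
echo "$patch_report"
```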
If you are upgrading from an earlier version of the Solaris operating environment:
You must use the upgrade option in the Solaris installation program (rather than reinstalling the operating environment) and be prepared to increase the size of your root (/) and /usr slices to accommodate the Solaris environment.
The upgrade option in the Solaris installation program provides the ability to reallocate disk space if the current file systems don't have enough space for the upgrade. By default, an auto-layout feature tries to determine how to reallocate the disk space so the upgrade can succeed. If auto-layout cannot determine how to reallocate disk space, you must specify which file systems can be moved or changed and run auto-layout again.
If you are installing Sun Cluster for the first time:
Set up each Sun Cluster node as a stand-alone machine. Do this in response to a question in the Solaris installation program.
Do not define an exported file system: HA-NFS file systems are not mounted under /export, and only HA-NFS file systems should be NFS-shared on Sun Cluster nodes.
Disable the Solaris power management "autoshutdown" mechanism if it applies to any nodes in your Sun Cluster configuration. See the pmconfig(1M) and power.conf(4) man pages for details.
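One way to disable autoshutdown is to change the behavior field of the autoshutdown entry in /etc/power.conf to noshutdown and then run pmconfig to reread the file. The entry below is a sketch; the idle-time and time-window values shown are illustrative, not requirements:

```
# /etc/power.conf fragment: the final field ("noshutdown") disables
# automatic shutdown regardless of the idle and time-window values.
autoshutdown    30    9:00    9:00    noshutdown
```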
A new feature called interface groups was added to the Solaris 2.6 operating environment. This feature is implemented as default behavior in Solaris 2.6 software, but as optional behavior in subsequent releases.
As described in the ifconfig(1M) man page, logical or physical interfaces that share an IP prefix are collected into an interface group. IP uses the interface group to rotate source address selections when the source address is unspecified. An interface group made up of multiple physical interfaces is used to distribute traffic across different IP addresses on a per-IP-destination basis (see the netstat(1M) man page for information about per-IP-destination statistics).
When enabled, the interface groups feature causes a problem with switchover of logical hosts. The system will experience RPC timeouts and the switchover will fail, causing the logical host to remain mastered on its current host. Therefore, interface groups must be disabled on all cluster nodes. The status of interface groups is determined by the value of the ip_enable_group_ifs parameter of the IP driver, which can be set in /etc/system.
The value for this parameter can be checked with the following ndd command:
# ndd /dev/ip ip_enable_group_ifs
If the value returned is 1 (enabled), disable interface groups by editing the /etc/system file to include the following line:
set ip:ip_enable_group_ifs=0
Whenever you modify the /etc/system file, you must reboot the system for the changes to take effect.
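The edit to /etc/system can be sketched as a small shell function. The function name and its file parameter are illustrative (on a real node you would pass /etc/system); the ndd check itself must still be run separately, as shown above:

```shell
# Idempotently add the line that disables interface groups.
# The file path is a parameter here only so the function can be
# exercised against a copy; on a cluster node, pass /etc/system.
disable_interface_groups() {
    system_file=$1
    grep -q '^set ip:ip_enable_group_ifs=0' "$system_file" 2>/dev/null ||
        echo 'set ip:ip_enable_group_ifs=0' >> "$system_file"
    # The node must still be rebooted before the change takes effect.
}
```

Because the function checks for the line before appending it, running it twice leaves only one copy, so it is safe to rerun as part of a node-setup script.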
When Solaris is installed, the system disk is partitioned into slices for root (/), /usr, and other standard file systems. You must change the partition configuration to meet the requirements of your volume manager. Use the guidelines in the following sections to allocate disk space accordingly.
See your Solaris documentation for file system sizing guidelines. Sun Cluster imposes no additional requirements for file system slices.
If you will be using Solstice DiskSuite, set aside a 10 Mbyte slice on the system disk for metadevice state database replicas. See your Solstice DiskSuite documentation for more information about replicas.
If you will be using VxVM, designate a disk for the root disk group (rootdg). See your VERITAS documentation for guidelines and details about creating the rootdg. Refer also to "VERITAS Volume Manager Considerations", for more information.
The root (/) slice on your local disk must have enough space for the various files and directories as well as space for the device inodes in /devices and symbolic links in /dev.
The root slice also must be large enough to hold the following:
Solaris system software
Sun Cluster, some components from your volume management software, and any third-party software packages
Data space for symbolic links in /dev for the disk units and for volume manager use
Sun Cluster uses various shell scripts that run as root processes. For this reason, the /.cshrc* and /.profile files for user root should be empty or non-existent on the cluster nodes.
Your cluster might require a larger root file system if it contains large numbers of disk drives.
If you run out of free space, you must reinstall the operating environment on all cluster nodes to obtain additional free space in the root slice. Make sure at least 20 percent of the total space on the root slice is left free.
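The 20 percent guideline can be checked with df. The awk column used below assumes the usual `df -k` output layout, with capacity in the fifth column, which holds on Solaris and most other UNIX systems:

```shell
# Report whether the root slice keeps at least 20% free space.
used=$(df -k / | awk 'NR==2 {print $5}' | tr -d '%')
if [ "$used" -gt 80 ]; then
    echo "WARNING: root slice is ${used}% full (less than 20% free)"
else
    echo "root slice OK: ${used}% used"
fi
```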
The /usr slice holds the user file system. The /var slice holds the system log files. The /opt slice holds the Sun Cluster and data service software packages. See your Solaris advanced system administration documentation for details about changing the allocation values when installing Solaris software.