This section provides the following guidelines for planning Solaris software installation in a cluster configuration.
For more information about Solaris software, see your Solaris installation documentation.
You can install Solaris software from a local DVD-ROM or from a network installation server by using the JumpStart™ installation method. In addition, Sun Cluster software provides a custom method for installing both the Solaris OS and Sun Cluster software by using the JumpStart installation method. If you are installing several cluster nodes, consider a network installation.
See How to Install Solaris and Sun Cluster Software (JumpStart) for details about the scinstall JumpStart installation method. See your Solaris installation documentation for details about standard Solaris installation methods.
Consider the following points when you plan the use of the Solaris OS in a Sun Cluster configuration:
Solaris 10 Zones - Install Sun Cluster 3.2 framework software only in the global zone.
To determine whether you can install a Sun Cluster data service directly in a non-global zone, see the documentation for that data service.
If you configure non-global zones on a cluster node, the loopback file system (LOFS) must be enabled. See the information for LOFS for additional considerations.
Loopback file system (LOFS) - During cluster creation with the Solaris 9 version of Sun Cluster software, LOFS capability is disabled by default. During cluster creation with the Solaris 10 version of Sun Cluster software, LOFS capability is not disabled by default.
If the cluster meets both of the following conditions, you must disable LOFS to avoid switchover problems or other failures:
Sun Cluster HA for NFS is configured on a highly available local file system.
The automountd daemon is running.
If the cluster meets only one of these conditions, you can safely enable LOFS.
If you require both LOFS and the automountd daemon to be enabled, exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS.
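As an illustration, assuming the highly available local file system exported by Sun Cluster HA for NFS is mounted at /global/nfs (a hypothetical path), you could locate the automounter map entries that must be removed:

```shell
# Sketch: find automounter map entries that cover a file system exported
# by Sun Cluster HA for NFS. The path /global/nfs is an example; substitute
# the mount point of your highly available local file system.
grep -n '/global/nfs' /etc/auto_master /etc/auto_direct 2>/dev/null
# Remove or comment out any matching entries, then reload the maps:
# automount -v
```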
Interface groups - Solaris interface groups are not supported in a Sun Cluster configuration. The Solaris interface groups feature is disabled by default during Solaris software installation. Do not re-enable Solaris interface groups. See the ifconfig(1M) man page for more information about Solaris interface groups.
Power-saving shutdown - Automatic power-saving shutdown is not supported in Sun Cluster configurations and should not be enabled. See the pmconfig(1M) and power.conf(4) man pages for more information.
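As a quick check, assuming the default /etc/power.conf location, you can verify that the autoshutdown entry does not enable automatic shutdown:

```shell
# Sketch: confirm that automatic power-saving shutdown is not enabled.
# The behavior field (last column) of the autoshutdown entry should be
# noshutdown on cluster nodes.
grep '^autoshutdown' /etc/power.conf
# Example of an acceptable entry:
# autoshutdown  30  9:00 9:00  noshutdown
```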
IP Filter - Sun Cluster software supports Solaris IP Filter only for failover services. Do not use IP Filter with scalable services. For more information about using IP Filter with failover services, see Using Solaris IP Filtering with Sun Cluster in Sun Cluster 3.2 Release Notes for Solaris OS.
Sun Cluster 3.2 software requires at least the End User Solaris Software Group. However, other components of your cluster configuration might have their own Solaris software requirements as well. Consider the following information when you decide which Solaris software group to install.
Servers - Check your server documentation for any Solaris software requirements. For example, Sun Enterprise™ 10000 servers require the Entire Solaris Software Group Plus OEM Support.
SCI-PCI adapters - To use SCI-PCI adapters, which are available for use in SPARC based clusters only, or the Remote Shared Memory Application Programming Interface (RSMAPI), ensure that you install the RSMAPI software packages SUNWrsm and SUNWrsmo. For the Solaris 9 OS on SPARC based platforms, also install SUNWrsmx and SUNWrsmox. The RSMAPI software packages are included only in some Solaris software groups. For example, the Developer Solaris Software Group includes the RSMAPI software packages but the End User Solaris Software Group does not.
If the software group that you install does not include the RSMAPI software packages, install the RSMAPI software packages manually before you install Sun Cluster software. Use the pkgadd(1M) command to manually install the software packages. See the Section (3RSM) man pages for information about using the RSMAPI.
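For example, assuming the Solaris packages are available under /cdrom/cdrom0/Solaris_10/Product (substitute the product directory of your media or install server), the manual installation might look like this:

```shell
# Sketch: manually add the RSMAPI packages before installing Sun Cluster
# software. The media path is an example.
cd /cdrom/cdrom0/Solaris_10/Product
pkgadd -d . SUNWrsm SUNWrsmo
# For the Solaris 9 OS on SPARC based platforms, also add:
# pkgadd -d . SUNWrsmx SUNWrsmox
```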
Additional Solaris packages - You might need to install other Solaris software packages that are not part of the End User Solaris Software Group. The Apache HTTP server packages are one example. Third-party software, such as ORACLE®, might also require additional Solaris software packages. See your third-party documentation for any Solaris software requirements.
To avoid the need to manually install Solaris software packages, install the Entire Solaris Software Group Plus OEM Support.
Add this information to the appropriate Local File System Layout Worksheet.
When you install the Solaris OS, ensure that you create the required Sun Cluster partitions and that all partitions meet minimum space requirements.
swap – The combined amount of swap space that is allocated for Solaris and Sun Cluster software must be no less than 750 Mbytes. For best results, add at least 512 Mbytes for Sun Cluster software to the amount that is required by the Solaris OS. In addition, allocate any additional swap amount that is required by applications that are to run on the cluster node.
If you create an additional swap file, do not create the swap file on a global device. Only use a local disk as a swap device for the node.
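For example, to add swap space on a local disk slice (the device name c0t0d0s1 is hypothetical), a sketch might look like:

```shell
# Sketch: add a swap slice on a local disk. Never use a global device
# as a swap device for the node.
swap -a /dev/dsk/c0t0d0s1
swap -l                      # verify that the device is now in use
# To make the change persistent across reboots, add a vfstab entry:
# /dev/dsk/c0t0d0s1  -  -  swap  -  no  -
```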
/globaldevices – Create a file system at least 512 Mbytes large that is to be used by the scinstall(1M) utility for global devices.
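Assuming slice 3 of the boot disk holds the partition, as in the example layout in Table 1–2, the file system could be created as follows (device names are examples):

```shell
# Sketch: create and mount the /globaldevices file system that the
# scinstall utility uses for global devices.
newfs /dev/rdsk/c0t0d0s3
mkdir -p /globaldevices
mount /dev/dsk/c0t0d0s3 /globaldevices
# Corresponding /etc/vfstab entry so the file system mounts at boot:
# /dev/dsk/c0t0d0s3  /dev/rdsk/c0t0d0s3  /globaldevices  ufs  2  yes  -
```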
Volume manager – Create a 20-Mbyte partition on slice 7 for volume manager use. If your cluster uses VERITAS Volume Manager (VxVM) and you intend to encapsulate the root disk, you need to have two unused slices available for use by VxVM.
To meet these requirements, you must customize the partitioning if you are performing interactive installation of the Solaris OS.
See the following guidelines for additional partition planning information:
As with any other system running the Solaris OS, you can configure the root (/), /var, /usr, and /opt directories as separate file systems. Or, you can include all the directories in the root (/) file system. The following describes the software contents of the root (/), /var, /usr, and /opt directories in a Sun Cluster configuration. Consider this information when you plan your partitioning scheme.
root (/) – The Sun Cluster software itself occupies less than 40 Mbytes of space in the root (/) file system. Solaris Volume Manager software requires less than 5 Mbytes, and VxVM software requires less than 15 Mbytes. To configure ample additional space and inode capacity, add at least 100 Mbytes to the amount of space you would normally allocate for your root (/) file system. This space is used for the creation of both block special devices and character special devices used by the volume management software. You especially need to allocate this extra space if a large number of shared disks are in the cluster.
/var – The Sun Cluster software occupies a negligible amount of space in the /var file system at installation time. However, you need to set aside ample space for log files. Also, more messages might be logged on a clustered node than would be found on a typical standalone server. Therefore, allow at least 100 Mbytes for the /var file system.
/usr – Sun Cluster software occupies less than 25 Mbytes of space in the /usr file system. Solaris Volume Manager and VxVM software each require less than 15 Mbytes.
/opt – Sun Cluster framework software uses less than 2 Mbytes in the /opt file system. However, each Sun Cluster data service might use between 1 Mbyte and 5 Mbytes. Solaris Volume Manager software does not use any space in the /opt file system. VxVM software can use over 40 Mbytes if all of its packages and tools are installed.
In addition, most database and applications software is installed in the /opt file system.
SPARC: If you use Sun Management Center software to monitor the cluster, you need an additional 25 Mbytes of space on each node to support the Sun Management Center agent and Sun Cluster module packages.
Sun Cluster software requires you to set aside a special file system on one of the local disks for use in managing global devices. This file system is later mounted as a cluster file system. Name this file system /globaldevices, which is the default name that is recognized by the scinstall(1M) command.
The scinstall command later renames the file system /global/.devices/node@nodeid, where nodeid represents the number that is assigned to a node when it becomes a cluster member. The original /globaldevices mount point is removed.
The /globaldevices file system must have ample space and ample inode capacity for creating both block special devices and character special devices. This guideline is especially important if a large number of disks are in the cluster. A file system size of 512 Mbytes should suffice for most cluster configurations.
If you use Solaris Volume Manager software, you must set aside a slice on the root disk for use in creating the state database replica. Specifically, set aside a slice for this purpose on each local disk. But, if you only have one local disk on a node, you might need to create three state database replicas in the same slice for Solaris Volume Manager software to function properly. See your Solaris Volume Manager documentation for more information.
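For example, on a node with a single local disk, three replicas can be placed in the reserved slice (slice 7 here, per the example layout; the device name is hypothetical):

```shell
# Sketch: create three state database replicas in one slice when the
# node has only one local disk.
metadb -a -f -c 3 c0t0d0s7
metadb                       # verify that the replicas exist
```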
If you use VERITAS Volume Manager (VxVM) and you intend to encapsulate the root disk, you need to have two unused slices that are available for use by VxVM. Additionally, you need to have some additional unassigned free space at either the beginning or the end of the disk. See your VxVM documentation for more information about root disk encapsulation.
Table 1–2 shows a partitioning scheme for a cluster node that has less than 750 Mbytes of physical memory. This scheme is to be installed with the End User Solaris Software Group, Sun Cluster software, and the Sun Cluster HA for NFS data service. The last slice on the disk, slice 7, is allocated with a small amount of space for volume-manager use.
This layout allows for the use of either Solaris Volume Manager software or VxVM software. If you use Solaris Volume Manager software, you use slice 7 for the state database replica. If you use VxVM, you later free slice 7 by assigning the slice a zero length. This layout provides the necessary two free slices, 4 and 7, and provides unused space at the end of the disk.
Table 1–2 Example File-System Allocation
| Slice | Contents | Size Allocation | Description |
|---|---|---|---|
| 0 | / | 6.75 GB | Remaining free space on the disk after allocating space to slices 1 through 7. Used for the Solaris OS, Sun Cluster software, data-services software, volume-manager software, Sun Management Center agent and Sun Cluster module agent packages, root file systems, and database and application software. |
| 1 | swap | 1 GB | 512 Mbytes for the Solaris OS. 512 Mbytes for Sun Cluster software. |
| 2 | overlap | 8.43 GB | The entire disk. |
| 3 | /globaldevices | 512 MB | The Sun Cluster software later assigns this slice a different mount point and mounts the slice as a cluster file system. |
| 4 | unused | - | Available as a free slice for encapsulating the root disk under VxVM. |
| 5 | unused | - | - |
| 6 | unused | - | - |
| 7 | volume manager | 20 MB | Used by Solaris Volume Manager software for the state database replica, or used by VxVM for installation after you free the slice. |
For information about the purpose and function of Solaris 10 zones in a cluster, see Support for Solaris Zones on Sun Cluster Nodes in Sun Cluster Concepts Guide for Solaris OS.
Consider the following points when you create a Solaris 10 non-global zone, simply referred to as a zone, on a cluster node.
Unique zone name - The zone name must be unique on the node. Do not assign the same name to more than one zone on the same node.
Reusing a zone name on multiple nodes - To simplify cluster administration, you can use the same name for a zone on each node where resource groups are to be brought online in that zone.
Private IP addresses - Do not attempt to use more private IP addresses than are available in the cluster.
Mounts - Do not include global mounts in zone definitions. Include only loopback mounts.
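The loopback-mount guideline can be sketched as a zonecfg session; the zone name and directories below are hypothetical:

```shell
# Sketch: define a loopback (lofs) mount in a non-global zone instead of
# a global mount. Zone name and paths are examples.
zonecfg -z myzone
# zonecfg:myzone> add fs
# zonecfg:myzone:fs> set dir=/data
# zonecfg:myzone:fs> set special=/local/data
# zonecfg:myzone:fs> set type=lofs
# zonecfg:myzone:fs> end
# zonecfg:myzone> commit
# zonecfg:myzone> exit
```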
Failover services - In multiple-node clusters, while Sun Cluster software permits you to specify different zones on the same node in a failover resource group's node list, doing so is useful only during testing. If a single node hosts all zones in the node list, the node becomes a single point of failure for the resource group. For highest availability, zones in a failover resource group's node list should be on different nodes.
In single-node clusters, there is no functional risk if you specify multiple zones in a failover resource group's node list.
Scalable services - Do not create non-global zones for use in the same scalable service on the same node. Each instance of the scalable service must run on a different cluster node.
LOFS - Solaris Zones requires that the loopback file system (LOFS) be enabled. However, the Sun Cluster HA for NFS data service requires that LOFS be disabled, to avoid switchover problems or other failures. If you configure both non-global zones and Sun Cluster HA for NFS in your cluster, do one of the following to prevent possible problems in the data service:
Disable the automountd daemon.
Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS.
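On the Solaris 10 OS, for example, the first option can be carried out through SMF (on the Solaris 9 OS, stop the automounter with the /etc/init.d/autofs script instead):

```shell
# Sketch: disable the automountd daemon so that LOFS can remain enabled
# for non-global zones while Sun Cluster HA for NFS is configured.
svcadm disable svc:/system/filesystem/autofs:default
svcs -a | grep autofs        # confirm the service is disabled
```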