This section provides the following guidelines for planning Solaris software installation in a cluster configuration.
For more information about Solaris software, see your Solaris installation documentation.
You can install Solaris software from a local DVD-ROM or from a network installation server by using the JumpStart™ installation method. In addition, Sun Cluster software provides a custom method for installing both the Solaris OS and Sun Cluster software by using the JumpStart installation method. If you are installing several cluster nodes, consider a network installation.
See How to Install Solaris and Sun Cluster Software (JumpStart) for details about the scinstall JumpStart installation method. See your Solaris installation documentation for details about standard Solaris installation methods.
Consider the following points when you plan the use of the Solaris OS in a Sun Cluster configuration:
Solaris 10 Zones – Install Sun Cluster framework software only in the global zone.
To determine whether you can install a Sun Cluster data service directly in a non-global zone, see the documentation for that data service.
If you configure non-global zones on a global-cluster node, the loopback file system (LOFS) must be enabled. See the information for LOFS for additional considerations.
Loopback file system (LOFS) – During cluster creation with the Solaris 9 version of Sun Cluster software, LOFS capability is disabled by default. During cluster creation with the Solaris 10 version of Sun Cluster software, LOFS capability is not disabled by default.
If the cluster meets both of the following conditions, you must disable LOFS to avoid switchover problems or other failures:
Sun Cluster HA for NFS is configured on a highly available local file system.
The automountd daemon is running.
If the cluster meets only one of these conditions, you can safely enable LOFS.
If you require both LOFS and the automountd daemon to be enabled, exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS.
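As a minimal sketch, assuming the Solaris 10 OS, LOFS can be disabled by adding an exclude entry to the /etc/system file, or the automountd daemon can be disabled through its SMF service:

```
# Sketch: disable LOFS by excluding the module in /etc/system.
# The change takes effect after the node is rebooted.
echo "exclude:lofs" >> /etc/system

# Alternative sketch: leave LOFS enabled but disable the automountd daemon
# by disabling the autofs service (Solaris 10).
svcadm disable svc:/system/filesystem/autofs
```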
Interface groups – Solaris interface groups are not supported in a Sun Cluster configuration. The Solaris interface groups feature is disabled by default during Solaris software installation. Do not re-enable Solaris interface groups. See the ifconfig(1M) man page for more information about Solaris interface groups.
Power-saving shutdown – Automatic power-saving shutdown is not supported in Sun Cluster configurations and should not be enabled. See the pmconfig(1M) and power.conf(4) man pages for more information.
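As a sketch, you can confirm that automatic power-saving shutdown stays disabled by making sure the autoshutdown entry in /etc/power.conf uses the noshutdown behavior; the idle-time and time-of-day values below are placeholders only.

```
# Sketch: /etc/power.conf entry that keeps automatic shutdown disabled.
# The numeric and time fields are placeholders; the key field is the
# final behavior value, noshutdown.
autoshutdown    30    9:00    9:00    noshutdown
```

After you edit /etc/power.conf, run the pmconfig command so that the power management framework rereads the file.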
IP Filter – Sun Cluster software does not support the Solaris IP Filter feature for scalable services, but does support Solaris IP Filter for failover services.
fssnap – Sun Cluster software does not support the fssnap command, which is a feature of UFS. However, you can use the fssnap command on local systems that are not controlled by Sun Cluster software. The following restrictions apply to fssnap support:
The fssnap command is supported on local file systems that are not managed by Sun Cluster software.
The fssnap command is not supported on cluster file systems.
The fssnap command is not supported on local file systems under the control of HAStoragePlus.
Sun Cluster 3.2 1/09 software requires at least the End User Solaris Software Group (SUNWCuser). However, other components of your cluster configuration might have their own Solaris software requirements as well. Consider the following information when you decide which Solaris software group to install.
Servers – Check your server documentation for any Solaris software requirements. For example, Sun Enterprise™ 10000 servers require the Entire Solaris Software Group Plus OEM Support.
SCI-PCI adapters – To use SCI-PCI adapters, which are available for use in SPARC based clusters only, or the Remote Shared Memory Application Programming Interface (RSMAPI), ensure that you install the RSMAPI software packages SUNWrsm and SUNWrsmo. For the Solaris 9 OS on SPARC based platforms, also install SUNWrsmx and SUNWrsmox. The RSMAPI software packages are included only in some Solaris software groups. For example, the Developer Solaris Software Group includes the RSMAPI software packages but the End User Solaris Software Group does not.
If the software group that you install does not include the RSMAPI software packages, install the RSMAPI software packages manually before you install Sun Cluster software. Use the pkgadd(1M) command to manually install the software packages. See the Section (3RSM) man pages for information about using the RSMAPI.
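As a sketch, the packages could be added from the Solaris installation media as follows; the media path shown is an assumption and varies by release and mount point.

```
# Sketch: manually add the RSMAPI packages from Solaris installation media.
# The package directory path is an example only; adjust it for your media.
pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWrsm SUNWrsmo

# On the Solaris 9 OS on SPARC based platforms, also add the extension packages.
pkgadd -d /cdrom/cdrom0/Solaris_9/Product SUNWrsmx SUNWrsmox
```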
Additional Solaris packages – You might need to install other Solaris software packages that are not part of the End User Solaris Software Group. The Apache HTTP server packages are one example. Third-party software, such as ORACLE®, might also require additional Solaris software packages. See your third-party documentation for any Solaris software requirements.
To avoid the need to manually install Solaris software packages, install the Entire Solaris Software Group Plus OEM Support.
Add this information to the appropriate Local File System Layout Worksheet.
When you install the Solaris OS, ensure that you create the required Sun Cluster partitions and that all partitions meet minimum space requirements.
swap – The combined amount of swap space that is allocated for Solaris and Sun Cluster software must be no less than 750 Mbytes. For best results, add at least 512 Mbytes for Sun Cluster software to the amount that is required by the Solaris OS. In addition, allocate any additional swap amount that is required by applications that are to run on the Solaris host.
If you create an additional swap file, do not create the swap file on a global device. Use only a local disk as a swap device for the host.
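As a sketch, an additional swap file could be added on a local file system as follows; the file path and size are examples only.

```
# Sketch: create and activate a 512-Mbyte swap file on a local file system.
mkfile 512m /export/local/swapfile
swap -a /export/local/swapfile

# To activate the swap file at boot, add an entry such as the following
# to /etc/vfstab:
# /export/local/swapfile  -  -  swap  -  no  -
```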
/globaldevices – Create a file system at least 512 Mbytes large that is to be used by the scinstall(1M) utility for global devices.
Volume manager – Create a 20-Mbyte partition on slice 7 for volume manager use. If your cluster uses Veritas Volume Manager (VxVM) and you intend to encapsulate the root disk, you need to have two unused slices available for use by VxVM.
To meet these requirements, you must customize the partitioning if you are performing interactive installation of the Solaris OS.
See the following guidelines for additional partition planning information:
As with any other system running the Solaris OS, you can configure the root (/), /var, /usr, and /opt directories as separate file systems. Or, you can include all the directories in the root (/) file system.
No file-system type other than UFS is valid for the root (/) file system. Do not attempt to change the file-system type after the root (/) file system is created.
The following describes the software contents of the root (/), /var, /usr, and /opt directories in a Sun Cluster configuration. Consider this information when you plan your partitioning scheme.
root (/) – The Sun Cluster software itself occupies less than 40 Mbytes of space in the root (/) file system. Solaris Volume Manager software requires less than 5 Mbytes, and VxVM software requires less than 15 Mbytes. To configure ample additional space and inode capacity, add at least 100 Mbytes to the amount of space you would normally allocate for your root (/) file system. This space is used for the creation of both block special devices and character special devices used by the volume management software. You especially need to allocate this extra space if a large number of shared disks are in the cluster.
/var – The Sun Cluster software occupies a negligible amount of space in the /var file system at installation time. However, you need to set aside ample space for log files. Also, more messages might be logged on a clustered node than would be found on a typical stand-alone server. Therefore, allow at least 100 Mbytes for the /var file system.
/usr – Sun Cluster software occupies less than 25 Mbytes of space in the /usr file system. Solaris Volume Manager and VxVM software each require less than 15 Mbytes.
/opt – Sun Cluster framework software uses less than 2 Mbytes in the /opt file system. However, each Sun Cluster data service might use between 1 Mbyte and 5 Mbytes. Solaris Volume Manager software does not use any space in the /opt file system. VxVM software can use over 40 Mbytes if all of its packages and tools are installed.
In addition, most database and applications software is installed in the /opt file system.
SPARC: If you use Sun Management Center software to monitor the cluster, you need an additional 25 Mbytes of space on each Solaris host to support the Sun Management Center agent and Sun Cluster module packages.
Sun Cluster software requires you to set aside a dedicated file system on one of the local disks for use in managing global devices. This file system is usually located on your root disk. However, if you locate the global-devices file system on other storage, such as a Logical Volume Manager volume, that storage must not be part of a Solaris Volume Manager shared disk set or part of a VxVM disk group other than a root disk group. This file system is later mounted as a UFS cluster file system. Name this file system /globaldevices, which is the default name that is recognized by the scinstall(1M) command.
No file-system type other than UFS is valid for the global-devices file system. Do not attempt to change the file-system type after the global-devices file system is created.
The scinstall command later renames the file system to /global/.devices/node@nodeid, where nodeid represents the number that is assigned to a Solaris host when it becomes a global-cluster member. The original /globaldevices mount point is removed.
The /globaldevices file system must have ample space and ample inode capacity for creating both block special devices and character special devices. This guideline is especially important if a large number of disks are in the cluster. A file system size of 512 Mbytes should suffice for most cluster configurations.
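If the /globaldevices file system is not created during Solaris installation, it might be created afterward roughly as follows; the disk slice in this sketch is an example only.

```
# Sketch: create a UFS file system for global devices on a dedicated slice
# and mount it on /globaldevices. The slice name is an example only.
newfs /dev/rdsk/c0t0d0s3
mkdir /globaldevices
mount /dev/dsk/c0t0d0s3 /globaldevices

# Corresponding /etc/vfstab entry so that the file system mounts at boot:
# /dev/dsk/c0t0d0s3  /dev/rdsk/c0t0d0s3  /globaldevices  ufs  2  yes  -
```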
If you use Solaris Volume Manager software, you must set aside a slice on the root disk for use in creating the state database replica. Specifically, set aside a slice for this purpose on each local disk. However, if you have only one local disk on a Solaris host, you might need to create three state database replicas in the same slice for Solaris Volume Manager software to function properly. See your Solaris Volume Manager documentation for more information.
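As a sketch, on a host with a single local disk, the replicas could be created in the reserved slice as follows; the slice name is an example only.

```
# Sketch: create three state database replicas in the slice that is
# reserved for volume-manager use. The slice name is an example only.
metadb -a -f -c 3 c0t0d0s7

# Verify that the replicas were created.
metadb
```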
If you use Veritas Volume Manager (VxVM) and you intend to encapsulate the root disk, you need two unused slices that are available for use by VxVM. You also need some unassigned free space at either the beginning or the end of the disk. See your VxVM documentation for more information about root disk encapsulation.
Table 1–2 shows a partitioning scheme for a Solaris host that has less than 750 Mbytes of physical memory. The scheme accommodates the End User Solaris Software Group, Sun Cluster software, and the Sun Cluster HA for NFS data service. The last slice on the disk, slice 7, is allocated with a small amount of space for volume-manager use.
This layout allows for the use of either Solaris Volume Manager software or VxVM software. If you use Solaris Volume Manager software, you use slice 7 for the state database replica. If you use VxVM, you later free slice 7 by assigning the slice a zero length. This layout provides the necessary two free slices, 4 and 7, as well as unused space at the end of the disk.
Table 1–2 Example File-System Allocation
| Slice | Contents | Size Allocation | Description |
|---|---|---|---|
| 0 | / | 6.75GB | Remaining free space on the disk after allocating space to slices 1 through 7. Used for the Solaris OS, Sun Cluster software, data-services software, volume-manager software, Sun Management Center agent and Sun Cluster module agent packages, root file systems, and database and application software. |
| 1 | swap | 1GB | 512 Mbytes for the Solaris OS. 512 Mbytes for Sun Cluster software. |
| 2 | overlap | 8.43GB | The entire disk. |
| 3 | /globaldevices | 512MB | The Sun Cluster software later assigns this slice a different mount point and mounts the slice as a cluster file system. |
| 4 | unused | - | Available as a free slice for encapsulating the root disk under VxVM. |
| 5 | unused | - | - |
| 6 | unused | - | - |
| 7 | volume manager | 20MB | Used by Solaris Volume Manager software for the state database replica, or used by VxVM for installation after you free the slice. |
For information about the purpose and function of Solaris 10 zones in a cluster, see Support for Solaris Zones in Sun Cluster Concepts Guide for Solaris OS.
For guidelines about configuring a cluster of non-global zones, see Zone Clusters.
Consider the following points when you create a Solaris 10 non-global zone, simply referred to as a zone, on a global-cluster node.
Unique zone name – The zone name must be unique on the Solaris host.
Reusing a zone name on multiple nodes – To simplify cluster administration, you can use the same name for a zone on each node where resource groups are to be brought online in that zone.
Private IP addresses – Do not attempt to use more private IP addresses than are available in the cluster.
Mounts – Do not include global mounts in zone definitions. Include only loopback mounts.
Failover services – In multiple-host clusters, while Sun Cluster software permits you to specify different zones on the same Solaris host in a failover resource group's node list, doing so is useful only during testing. If a single host contains all zones in the node list, the node becomes a single point of failure for the resource group. For highest availability, zones in a failover resource group's node list should be on different hosts.
In single-host clusters, no functional risk is incurred if you specify multiple zones in a failover resource group's node list.
Scalable services – Do not create non-global zones for use in the same scalable service on the same Solaris host. Each instance of the scalable service must run on a different host.
Cluster file systems – Do not directly add a cluster file system from the global zone to a non-global zone. Instead, add a loopback mount of the cluster file system from the global zone to the non-global zone. This restriction does not apply to QFS shared file systems.
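As a sketch, a cluster file system that is mounted in the global zone at /global/apps might be added to a non-global zone as a loopback mount as follows; the zone name and paths are examples only.

```
# Sketch: add a loopback (lofs) mount of a cluster file system to an
# existing non-global zone. The zone name and paths are examples only.
zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/global/apps
zonecfg:myzone:fs> set special=/global/apps
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> end
zonecfg:myzone> commit
zonecfg:myzone> exit
```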
LOFS – Solaris Zones requires that the loopback file system (LOFS) be enabled. However, the Sun Cluster HA for NFS data service requires that LOFS be disabled, to avoid switchover problems or other failures. If you configure both non-global zones and Sun Cluster HA for NFS in your cluster, do one of the following to prevent possible problems in the data service:
Disable the automountd daemon.
Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS.
Exclusive-IP zones – The following guidelines apply specifically to exclusive-IP non-global zones:
Logical-hostname resource groups – In a resource group that contains a LogicalHostname resource, if the node list contains any non-global zone with the ip-type property set to exclusive, all zones in that node list must have this property set to exclusive. Note that a global zone always has the ip-type property set to shared, and therefore cannot coexist in a node list that contains zones of ip-type=exclusive. This restriction applies only to versions of the Solaris OS that use the Solaris zones ip-type property.
IPMP groups – For all public-network adapters that are used for data-service traffic in the non-global zone, you must manually configure IPMP groups in all /etc/hostname.adapter files on the zone. This information is not inherited from the global zone. For guidelines and instructions to configure IPMP groups, follow the procedures in Part VI, IPMP, in System Administration Guide: IP Services.
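As a sketch, a single-adapter IPMP group might be configured for an exclusive-IP zone by placing content such as the following in the zone's /etc/hostname.adapter file; the adapter name, host name, and group name are examples only.

```
# Sketch: example contents of /etc/hostname.bge0 inside an exclusive-IP zone.
# The host name, adapter, and IPMP group name are examples only.
zonehost1 netmask + broadcast + group sc_ipmp0 up
```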
Private-hostname dependency – Exclusive-IP zones cannot depend on the private hostnames and private addresses of the cluster.
Shared-address resources – Shared-address resources cannot use exclusive-IP zones.
Consider the following points when you create a Sun Logical Domains (LDoms) I/O domain or guest domain on a physically clustered machine that is SPARC hypervisor capable:
SCSI LUN requirement – The virtual shared storage device, or virtual disk back end, of a Sun LDoms guest domain must be a full SCSI LUN in the I/O domain. You cannot use an arbitrary virtual device.
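As a sketch, a full SCSI LUN might be exported from the I/O domain and assigned to a guest domain as follows; the device path, volume name, virtual disk server, and domain name are examples only.

```
# Sketch: export a full SCSI LUN through the virtual disk server in the
# I/O domain, then assign it to a guest domain. All names are examples only.
ldm add-vdsdev /dev/dsk/c2t5d0s2 clusterdisk0@primary-vds0
ldm add-vdisk vdisk0 clusterdisk0@primary-vds0 guest1
```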
Fencing – Do not export a storage LUN to more than one guest domain on the same physical machine, unless you also disable fencing for that device. Otherwise, if a device is visible to two different guest domains on the same machine, the device will be fenced whenever one of the guest domains dies. The fencing of the device will panic any other guest domain that subsequently tries to access the device.
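As a sketch, fencing might be turned off for a single device as follows; the DID device name is an example only, and the available fencing settings depend on your Sun Cluster release.

```
# Sketch: disable fencing for one DID device before exporting its LUN to
# more than one guest domain on the same machine. The device name is an
# example only.
cldevice set -p default_fencing=nofencing d5

# Verify the device property.
cldevice show d5
```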
Network isolation – Guest domains that are located on the same physical machine but are configured in different clusters must be network isolated from each other. Use one of the following methods:
Configure the clusters to use different network interfaces in the I/O domain for the private network.
Use different network addresses for each of the clusters.
Networking in guest domains – Network packets to and from guest domains must traverse service domains to reach the network drivers through virtual switches. Virtual switches use kernel threads that run at system priority. The virtual-switch threads must be able to acquire needed CPU resources to perform critical cluster operations, including heartbeats, membership, checkpoints, and so forth. Configuring virtual switches with the mode=sc setting enables expedited handling of cluster heartbeat packets. However, the reliability of other critical cluster operations can be enhanced by adding more CPU resources to the service domain under the following workloads:
High interrupt load, for example, due to network or disk I/O. Under extreme load, interrupt processing can preclude system threads, including virtual-switch threads, from running for a long time.
Real-time threads that are overly aggressive in retaining CPU resources. Real-time threads run at a higher priority than virtual-switch threads, which can restrict CPU resources for virtual-switch threads for an extended time.
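As a sketch, the private-network virtual switch might be created with the mode=sc setting and additional CPU resources might be granted to the service domain as follows; the network device, switch name, domain name, and CPU count are examples only.

```
# Sketch: create a private-network virtual switch with expedited handling
# of cluster heartbeat packets, then add CPU resources to the service
# domain. All names and the CPU count are examples only.
ldm add-vsw mode=sc net-dev=e1000g1 private-vsw0 primary
ldm add-vcpu 2 primary
```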
Exporting storage from I/O domains – If you configure a cluster that is composed of Sun Logical Domains I/O domains, do not export the storage devices that the cluster uses to other guest domains that also run Sun Cluster software.
Solaris I/O multipathing – Do not run Solaris I/O multipathing software (MPxIO) from guest domains. Instead, run Solaris I/O multipathing software in the I/O domain and export the multipathed storage to the guest domains.
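As a sketch, multipathing might be enabled in the I/O domain as follows; the command prompts for confirmation and requires a reboot.

```
# Sketch: enable Solaris I/O multipathing (MPxIO) in the I/O domain only,
# not in the guest domains. A reboot is required for the change to take
# effect.
stmsboot -e
```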
Private-interconnect IP address range – The private network is shared by all guest domains that are created on the same physical machine and it is visible to all these domains. Before you specify a private-network IP address range to the scinstall utility for use by a guest-domain cluster, ensure that the address range is not already in use by another guest domain on the same physical machine.
For more information about Sun Logical Domains, see the Logical Domains (LDoms) 1.0.3 Administration Guide.