Sun Cluster 2.2 Software Installation Guide

Chapter 2 Planning the Configuration

This chapter provides information and procedures for planning your Sun Cluster configuration.

2.1 Configuration Planning Overview

Configuration planning includes making decisions about:

Before you develop your configuration plan, consider the reliability issues described in "2.5 Configuration Rules for Improved Reliability". Also, the Sun Cluster environment imposes some configuration restrictions that you should consider before completing your configuration plan. These are described in "2.6 Configuration Restrictions".

Appendix A, Configuration Worksheets and Examples, provides worksheets to help you plan your configuration.

2.2 Configuration Planning Tasks

The following sections describe the tasks and issues associated with planning your configuration. You are not required to perform the tasks in the order shown here, but you should address each task as part of your configuration plan.

2.2.1 Planning the Administrative Workstation

You must decide whether to use a dedicated SPARC™ workstation, known as the administrative workstation, for administering the active cluster. The administrative workstation is not a cluster node. The administrative workstation can be any SPARC machine capable of running a telnet session to the Terminal Concentrator to facilitate console logins. On E10000 platforms, you must instead be able to log in from the administrative workstation to the System Service Processor (SSP) and connect using the netcon command.
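
For example, a console login through the Terminal Concentrator is typically opened with a telnet session such as the following (the Terminal Concentrator name and port number are illustrative; the port depends on which Terminal Concentrator serial port the node's console is cabled to):

# telnet cluster-tc 5002

On E10000 platforms, you would instead telnet to the SSP, log in as user ssp, and run the netcon command.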

Sun Cluster does not require a dedicated administrative workstation, but using one provides you these advantages:

The administrative workstation must run the same version of the Solaris operating environment (Solaris 2.6 or Solaris 7) as the other nodes in the cluster.


Note -

It is possible to use one of the cluster nodes as the administrative workstation. This entails installing that node as both "client" and "server."


2.2.2 Establishing Names and Naming Conventions

Before configuring the cluster, you must decide on names for the following:

The network interface names (and associated IP addresses) are necessary for each logical host on each public network. Although you are not required to use a particular naming convention, the following naming conventions are used throughout the documentation and are recommended. Use the configuration worksheets included in Appendix A, Configuration Worksheets and Examples.

Cluster - As part of the configuration process, you will be prompted for the name of the cluster. You can choose any name; there are no restrictions imposed by Sun Cluster.

Physical Hosts - Physical host names are created by adding the prefix phys- to the logical host names (for physical hosts that master only one logical host each). For example, the physical host that masters a logical host named hahost1 would be named phys-hahost1 by default. There is no Sun Cluster naming convention or default for physical hosts that master more than one logical host.


Caution -

If you are using DNS as your name service, do not use an underscore in your physical or logical host names. DNS will not recognize a host name containing an underscore.


Logical Hosts and Disk Groups - Logical host names can be different from disk group names in Sun Cluster. However, using the same names is the Sun Cluster convention and eases administration. Refer to "2.2.9 Planning Your Logical Host Configuration", for more information.

Public Network - The names by which physical hosts are known on the public network are their primary physical host names. The names by which physical hosts are known on a secondary public network are their secondary physical host names. Assign these names using the following conventions, as illustrated in Figure 2-1:


Note -

The primary physical host name should be the node name returned by uname -n.


Private Interconnect - There is no default naming convention for the private interconnect.

Naming convention examples are illustrated in Figure 2-1.

Figure 2-1 Public and Private Network Naming Conventions


2.2.3 Planning Network Connections

You must have at least one public network connection to a local area network and exactly two private interconnects between the cluster nodes. Refer to Chapter 1, Understanding the Sun Cluster Environment, for overviews of Sun Cluster network configurations, and to Appendix A, Configuration Worksheets and Examples, for network planning worksheets.

2.2.3.1 Public Network Connections

Consider these points when planning your public network configuration:

2.2.3.2 Private Network Connections

Sun Cluster requires two private networks for normal operation. You must decide whether to use 100 Mbit/sec Ethernet or 1 Gbit/sec Scalable Coherent Interface (SCI) connections for the private networks.

In two-node configurations, these networks may be implemented with point-to-point cables between the cluster nodes. In three- or four-node configurations, they are implemented using hubs or switches. Only private traffic between Sun Cluster nodes is transported on these networks.

If you connect nodes by using SCI switches, each node must be connected to the same port number on both switches. During the installation, note that the port numbers on the switches correspond to node numbers. For example, node 0 is the host physically connected to port 0 on the switch, and so on.

A class C network number (204.152.64) is reserved for private network use by the Sun Cluster nodes. The same network number is used by all Sun Cluster systems.

2.2.3.3 Terminal Concentrator and Administrative Workstation Network Connections

The Terminal Concentrator and administrative workstation connect to a public network with access to the Sun Cluster nodes. You must assign IP addresses and host names for them to enable access to the cluster nodes over the public network.


Note -

E10000 systems use a System Service Processor (SSP) instead of a Terminal Concentrator. You will need to assign the SSP a host name, IP address, and root password. You will also need to create a user named "ssp" on the SSP and provide a password for user "ssp" during Sun Cluster installation.


2.2.4 Planning Your Solaris Operating Environment Installation

All nodes in a cluster must be installed with the same version of the Solaris operating environment (Solaris 2.6 or Solaris 7) before you can install the Sun Cluster software. When you install Solaris on cluster nodes, follow the general rules in this section.


Note -

All platforms except the E10000 require at least the Entire Distribution Solaris installation, for both the Solaris 2.6 and Solaris 7 operating environments. E10000 systems require the Entire Distribution + OEM.


2.2.4.1 Using Solaris Interface Groups

A new feature called interface groups was added to the Solaris 2.6 operating environment. This feature is implemented as default behavior in Solaris 2.6, but as optional behavior in subsequent releases.

As described in the ifconfig(1M) man page, if an interface (logical or physical) shares an IP prefix with another interface, these interfaces are collected into an interface group. IP uses an interface group to rotate source address selection when the source address is unspecified, and in the case of multiple physical interfaces in the same group, to distribute traffic across different IP addresses on a per-IP-destination basis (see netstat(1M) for per-IP-destination information).

When enabled, this feature causes a problem with switchover of logical hosts. The system will experience RPC timeouts and the switchover will fail, causing the logical host to remain mastered on its current host.

Interface groups should be disabled on all cluster nodes. The interface group feature is controlled by the ip_enable_group_ifs parameter, which is set through the /etc/system file.

The current value of this parameter can be checked with the following ndd(1M) command:

# ndd /dev/ip ip_enable_group_ifs

If the value returned is 1 (enabled), disable interface groups by adding the following line to the /etc/system file:

set ip:ip_enable_group_ifs=0

Caution -

Whenever you modify the /etc/system file, you must reboot the system.
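
After the reboot, you can verify the change by repeating the ndd check; it should now return 0:

# ndd /dev/ip ip_enable_group_ifs
0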


2.2.4.2 Partitioning System Disks

When Solaris 2.6 or Solaris 7 is installed, the system disk is partitioned into slices for root (/), /usr, and other standard file systems. You must change the partition configuration to meet the requirements of Sun Cluster and your volume manager. Use the guidelines in the following sections to allocate disk space accordingly.

File System Slices

Table 2-1 shows the contents and suggested space allocation for the system disk slices (file systems and swap space). These values are used as the default when you install Solaris with JumpStart™, but they are not required by Sun Cluster.

Table 2-1 File System Slices

Contents     Allocation (Mbytes)
root (/)     80
swap         50
/var         remaining free space (varies)
/opt         300
/usr         300

Volume Manager Slices

Additionally, if you will be using Solstice DiskSuite, you must set aside a 10 Mbyte slice on the system disk for metadevice state database replicas. See the Solstice DiskSuite documentation for more information about replicas.
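
For example, after reserving that slice, the local metadevice state database replicas are typically created on it with the metadb(1M) command (the slice name is illustrative; use the slice you set aside):

# metadb -a -f -c 3 c0t0d0s7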

If you will be using SSVM or CVM, you must set aside two partitions and a small amount of free space (1024 sectors) on each multihosted disk that is to be managed by SSVM or CVM, for the disk group rootdg. The free space should be located at the beginning or end of each disk and should not be allocated to any slice. Refer to "2.2.5.2 Sun StorEdge Volume Manager and Cluster Volume Manager Considerations", for more information.

The Root (/) Slice

The root (/) slice on your local disk must have enough space for the various files and directories as well as space for the device inodes in /devices and symbolic links in /dev.

The root slice also must be large enough to hold the following:


Note -

Sun Cluster uses various shell scripts that run as root processes. For this reason, the /.cshrc* and /.profile files for user root should be empty or non-existent on the cluster nodes.


Your cluster might require a larger root file system if it contains large numbers of disk drives.


Note -

If you run out of free space, you must reinstall the operating environment on all cluster nodes to obtain additional free space in the root slice. Make sure at least 20 percent of the total space on the root slice is left free.


The /usr, /var, and /opt Slices

The /usr slice holds the user file system. The /var slice holds the system log files. The /opt slice holds the Sun Cluster and data service software packages. See the Solaris Advanced Installation Guide for details about changing the allocation values as Solaris is installed.

2.2.5 Volume Management

Sun Cluster uses volume management software to group disks into disk groups that can then be administered as one unit. Sun Cluster supports Solstice DiskSuite, Sun StorEdge Volume Manager (SSVM), and Cluster Volume Manager (CVM). You can use only one volume manager within a single cluster configuration.

You must install the volume management software after you install the Solaris operating environment. You can install the volume management software either before or after you install Sun Cluster software. Refer to your volume manager software documentation and to Chapter 3, Installing and Configuring Sun Cluster Software, for instructions on installing the volume management software.

Use these guidelines when configuring your disks:

See "Volume Manager Slices" for disk layout recommendations related to volume management, and consult your volume manager documentation for any additional restrictions.

2.2.5.1 Solstice DiskSuite Considerations

Consider these points when planning Solstice DiskSuite configurations:

2.2.5.2 Sun StorEdge Volume Manager and Cluster Volume Manager Considerations

Consider these points when planning SSVM and CVM configurations:


Caution -

Insufficient disk space and slices prevent encapsulation of the boot disk later and increase installation time because the operating environment might have to be reinstalled.



Note -

You will need licenses for Sun StorEdge Volume Manager if you use it with any storage devices other than SPARCstorage Arrays or Sun StorEdge A5000s. SPARCstorage Arrays and Sun StorEdge A5000s include bundled licenses for use with SSVM. Contact the Sun License Center for any necessary SSVM licenses; see http://www.sun.com/licensing/ for more information.

You do not need licenses to run Solstice DiskSuite or Cluster Volume Manager with Sun Cluster.


2.2.6 File System Logging

One important aspect of high availability is the ability to bring file systems back online quickly in the event of a node failure. This aspect is best served by using a logging file system. Sun Cluster supports three logging file systems: VxFS logging from Veritas, DiskSuite UFS logging, and Solaris UFS logging. Cluster Volume Manager (CVM), when used with Oracle Parallel Server (OPS), uses raw partitions and therefore does not use a logging file system. However, you can also run CVM in a cluster with both OPS and HA data services. In this configuration, the OPS shared disk groups would use raw partitions, but the HA disk groups could use either VxFS or Solaris UFS logging file systems (Solaris UFS logging is supported only under Solaris 7). Excluding the co-existent CVM configuration described above, Sun Cluster supports the following combinations of volume managers and logging file systems:

Table 2-2 Supported File System Matrix

Solaris Operating Environment    Volume Manager                 Supported File Systems
Solaris 2.6                      Sun StorEdge Volume Manager    VxFS, UFS (no logging)
Solaris 2.6                      Solstice DiskSuite             DiskSuite UFS logging
Solaris 7                        Solstice DiskSuite             DiskSuite UFS logging, Solaris UFS logging

CVM uses a feature called Dirty Region Logging to aid in fast recovery after a reboot, similar to what the logging file systems provide. For information on CVM, refer to the Sun Cluster Cluster Volume Manager Administration Guide. For information on DiskSuite UFS logging, refer to the Solstice DiskSuite documentation. For information on VxFS logging, see the Veritas documentation. Solaris UFS logging is described briefly below. See the mount_ufs(1M) man page for more details.

Solaris UFS logging is a new feature in the Solaris 7 operating environment.

Solaris UFS logging uses a circular log to journal the changes made to a UFS file system. As the log fills up, changes are "rolled" into the actual file system. The advantage of logging is that the UFS file system is never left in an inconsistent state, that is, with a half-completed operation. After a system crash, fsck has nothing to fix, so the system boots much faster.

Solaris UFS logging is enabled using the "logging" mount option. To enable logging on a UFS file system, you either add -o logging to the mount command or add the word "logging" to the /etc/opt/SUNWcluster/conf/hanfs/vfstab.logicalhost entry (the rightmost column).
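
For example, either of the following enables Solaris UFS logging (the metadevice and mount point names are illustrative; use the devices configured for your logical host):

# mount -F ufs -o logging /dev/md/hahost1/dsk/d1 /hahost1

or, in the vfstab.logicalhost file, place logging in the rightmost (mount options) column:

/dev/md/hahost1/dsk/d1 /dev/md/hahost1/rdsk/d1 /hahost1 ufs 1 no logging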

Solaris UFS logging always allocates the log using free space on the UFS file system. The log takes up 1 MByte on file systems less than 1 GByte in size, and 1 MByte per GByte on larger file systems, up to a maximum of 64 MBytes.

Solaris UFS logging always puts the log on the same disk as the file system, so if you use this logging option you are limited to the space on that disk. DiskSuite UFS logging allows the log to be placed on a separate disk, which reduces some of the I/O contention associated with the log.

With DiskSuite UFS logging, the trans device used for logging is itself a metadevice, and the log is yet another metadevice that can be mirrored and striped. Furthermore, you can create a logging file system of up to 1 Tbyte with Solstice DiskSuite.

The "logging" mount option will not work if you already have logging provided by Solstice DiskSuite--you will receive a warning message explaining you already have logging on that file system. If you require more control over the size or location of the log, you should use DiskSuite UFS logging.

Depending on the file system usage, Solaris UFS logging gives you performance that is the same as or better than running without logging.

There is currently no support for converting from DiskSuite UFS logging to Solaris UFS logging.

2.2.7 Determining Your Multihost Disk Requirements

Unless you are using a RAID5 configuration, all multihost disks must be mirrored in Sun Cluster configurations. This enables the configuration to tolerate single-disk failures. Refer to "2.5.1 Mirroring Guidelines", and to your volume management documentation, for more information.

Determine the amount of data that you want to move to the Sun Cluster configuration. If you are not using RAID5, double that amount to allow disk space for mirroring. With RAID5, you need additional parity space equal to the amount of data divided by (the number of devices in the RAID5 device minus one). Use the worksheets in Appendix A, Configuration Worksheets and Examples, to help plan your disk requirements.
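
As an illustration with hypothetical numbers, placing 100 Gbytes of data on a six-device RAID5 layout requires an additional 100/(6 - 1) = 20 Gbytes for parity, or 120 Gbytes in total, whereas mirroring the same data requires 200 Gbytes.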

Consider these points when planning your disk requirements:

2.2.7.1 Disk Space Growth

Consider these points when planning for disk space growth:

2.2.7.2 Size and Number of Disk Drives

Several sizes of disks are supported in multihost disk expansion units. Consider these points when deciding which size drives to use:

2.2.8 Planning Your File System Layout on the Multihost Disks

Sun Cluster does not require any specific disk layout or file system size. The requirements for the file system hierarchy are dependent on the volume management software you are using.

Regardless of your volume management software, Sun Cluster requires at least one file system per disk group to serve as the HA administrative file system. This administrative file system is generally mounted on /logicalhost, and must be a minimum of 10 Mbytes. It is used to store private directories containing data service configuration information.

For clusters using Solstice DiskSuite, you need to create a metadevice to contain the HA administrative file system. The HA administrative file system should be configured the same as your other multihost file systems, that is, it should be mirrored and set up as a trans device.
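
The following is a minimal sketch of building such a mirrored trans device with Solstice DiskSuite commands, assuming a diskset named hahost1 that already exists and contains disks on two controllers; the metadevice numbers and slice names are hypothetical, so adapt them to your configuration and see the Solstice DiskSuite documentation for the supported procedure:

# metainit -s hahost1 d11 1 1 c1t0d0s4
# metainit -s hahost1 d12 1 1 c2t0d0s4
# metainit -s hahost1 d10 -m d11
# metattach -s hahost1 d10 d12
# metainit -s hahost1 d14 1 1 c1t0d0s5
# metainit -s hahost1 d15 1 1 c2t0d0s5
# metainit -s hahost1 d13 -m d14
# metattach -s hahost1 d13 d15
# metainit -s hahost1 d1 -t d10 d13
# newfs /dev/md/hahost1/rdsk/d1

In this sketch, d10 is the mirrored UFS master, d13 is the mirrored UFS log, and d1 is the trans device on which the HA administrative file system is created.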

For clusters using SSVM or CVM, Sun Cluster creates the HA administrative file system on a volume named dg-stat where dg is the name of the disk group in which the volume is created. dg is usually the first disk group in the list of disk groups specified when defining a logical host.

Consider these points when planning file system size and disk layout:

2.2.8.1 File Systems With Solstice DiskSuite

Solstice DiskSuite software requires some additional space on the multihost disks and imposes some restrictions on its use. For example, if you are using UNIX file system (UFS) logging under Solstice DiskSuite, one to two percent of each multihost disk must be reserved for metadevice state database replicas and UFS logging. Refer to Appendix B, Configuring Solstice DiskSuite, and to the Solstice DiskSuite documentation for specific guidelines and restrictions.

All metadevices used by each shared diskset are created in advance, at reconfiguration boot time, based on settings found in the md.conf file. The fields in the md.conf file are described in the Solstice DiskSuite documentation. The two fields that are used in the Sun Cluster configuration are md_nsets and nmd. The md_nsets field defines the number of disksets, and the nmd field defines the number of metadevices to create for each diskset. You should set these fields at install time to allow for all predicted future expansion of the cluster.

Extending the Solstice DiskSuite configuration after the cluster is in production is time consuming because it requires a reconfiguration reboot for each node and always carries the risk that there will not be enough space allocated in the root (/) file system to create all of the requested devices.

The value of md_nsets must be set to the expected number of logical hosts in the cluster, plus one, to allow Solstice DiskSuite to manage the private disks on the local host (that is, those metadevices that are not in any shared diskset).

The value of nmd must be set to the predicted largest number of metadevices used by any one of the disksets in the cluster. For example, if a cluster uses 10 metadevices in its first 15 disksets, but 1000 metadevices in the 16th diskset, nmd must be set to at least 1000.
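
For example, a cluster planned for as many as 16 logical hosts and as many as 1000 metadevices per diskset might use an md.conf entry such as the following (shown here as an illustrative sketch; see the Solstice DiskSuite documentation for the exact file format):

name="md" parent="pseudo" nmd=1000 md_nsets=17;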


Caution -

All cluster nodes (or cluster pairs in the cluster pair topology) must have identical md.conf files, regardless of the number of logical hosts served by each node. Failure to follow this guideline can result in serious Solstice DiskSuite errors and possible loss of data.


Consider these points when planning your Solstice DiskSuite file system layout:

Table 2-3 Solstice DiskSuite Disk Partitioning

Slice    Description
7        2 Mbytes, reserved for Solstice DiskSuite
6        UFS logs
0        Remainder of the disk
2        Overlaps Slices 6 and 0


Note -

The overlap of Slices 6 and 0 by Slice 2 is used for raw devices where there are no UFS logs.


In addition, the first drive on each of the first two controllers in each of the disksets should be partitioned as described in Table 2-4.

Table 2-4 Multihost Disk Partitioning for the First Drive on the First Two Controllers

Slice    Description
7        2 Mbytes, reserved for Solstice DiskSuite
         2 Mbytes, UFS log for HA administrative file systems
         9 Mbytes, UFS master for HA administrative file systems
         UFS logs
0        Remainder of the disk
2        Overlaps Slices 6 and 0

Partition 7 is always reserved for use by Solstice DiskSuite as the first or last 2 Mbytes on each multihost disk.

2.2.8.2 File Systems With VERITAS VxFS

You can create UNIX File System (UFS) or Veritas File System (VxFS) file systems in the disk groups of logical hosts. When a logical host is mastered on a cluster node, the file systems associated with the disk groups of the logical host are mounted on the specified mount points of the mastering node.

When you reconfigure logical hosts, Sun Cluster must check the file systems by running the fsck command before mounting them. Even though the fsck command checks UFS file systems in non-interactive parallel mode, the check still consumes some time and slows the reconfiguration process. VxFS drastically cuts down on the file system check time, especially if the configuration contains large file systems (greater than 500 Mbytes) used for data services.

When setting up mirrored volumes, always add a Dirty Region Log (DRL) to decrease volume recovery time in the event of a system crash. When mirrored volumes are used in clusters, DRL must be assigned for volumes greater than 500 Mbytes.

With SSVM and CVM, it is important to estimate the maximum number of volumes that will be used by any given disk group at the time the disk group is created. If the number is less than 1000, default minor numbering can be used. Otherwise, you must carefully plan the way in which minor numbers are assigned to disk group volumes. It is important that no two disk groups shared by the same nodes have overlapping minor number assignments.

As long as default numbering can be used and all disk groups are currently imported, it is not necessary to use the minor option to the vxdg init command at disk group creation time. Otherwise, the minor option must be used to prevent overlapping the volume minor number assignments. It is possible to modify the minor numbering later, but doing so might require you to reboot and import the disk group again. Refer to the vxdg(1M) man page for details.
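
For example, a disk group that must avoid the minor numbers already used by another group shared by the same nodes might be created with an explicit base minor number, as in the following sketch (the disk group, disk media, and device names are illustrative):

# vxdg init dg2 minor=2000 dg201=c2t0d0s2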

2.2.8.3 Mount Information

The /etc/vfstab file contains the mount points of file systems residing on local devices. For a multihost file system used for a logical host, all the nodes that can potentially master the logical host should possess the mount information.

The mount information for a logical host's file system is kept in a separate file on each node, named /etc/opt/SUNWcluster/conf/hanfs/vfstab.logicalhost. The format of this file is identical to the /etc/vfstab file for ease of maintenance, though not all fields are used.


Note -

You must keep the vfstab.logicalhost file consistent among all nodes of the cluster. Use the rcp command or file transfer protocol (FTP) to copy the file to the other nodes of the cluster. Alternatively, edit the file simultaneously on all nodes by using crlogin or ctelnet.
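
For example, after editing the file for logical host hahost1 on one node, you might copy it to a second node as follows (the host names extend the naming examples used earlier; adjust them to your cluster):

# rcp /etc/opt/SUNWcluster/conf/hanfs/vfstab.hahost1 phys-hahost2:/etc/opt/SUNWcluster/conf/hanfs/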


The same file system cannot be mounted by more than one node at the same time, because a file system can be mounted only if the corresponding disk group has been imported by the node. The consistency and uniqueness of the disk group imports and logical host mastery is enforced by the cluster framework logical host reconfiguration sequence.

2.2.8.4 Booting From a SPARCstorage Array

Sun Cluster supports booting from a private or shared disk inside a SPARCstorage Array.

Consider these points when using a boot disk in an SSA:

2.2.9 Planning Your Logical Host Configuration

A disk group stores the data for one or more data services. Generally, several data services share a logical host, and therefore fail over together. If you want to enable a particular data service to fail over independently of all other data services, then assign a logical host to that data service alone, and do not allow any other data services to share it.

As part of the installation and configuration, you need to establish the following for each logical host:

Use the logical host worksheet in Appendix A, Configuration Worksheets and Examples, to record this information.

Consider these points when planning your logical host configuration:

2.2.10 Planning the Cluster Configuration Database Volume

As part of the installation and configuration, you configure a Cluster Configuration Database (CCD) volume to store internal configuration data. In a two-node cluster using SSVM or CVM, this volume can be shared between the nodes thereby increasing the availability of the CCD. In clusters with more than two nodes, a copy of the CCD is local to each node. See "C.5 Configuring the Shared CCD Volume", for details on configuring a shared CCD.


Note -

You cannot use a shared CCD in a two-node cluster using Solstice DiskSuite.


If each node keeps its own copy of the CCD, then updates to the CCD are disabled by default when one node is not part of the cluster. This prevents the database from getting out of synchronization when only a single node is up.

The CCD requires two disks as part of a disk group for a shared volume. These disks are dedicated for CCD use and cannot be used by any other application, file system, or database.

The CCD should be mirrored for maximum availability. The two disks comprising the CCD should be on separate controllers.

In clusters using CVM or SSVM, the scinstall(1M) script will ask you how you want to set up the CCD on a shared volume in your configuration.

Refer to Chapter 1, Understanding the Sun Cluster Environment, for a general overview of the CCD. Refer to the chapter on general Sun Cluster administration in the Sun Cluster 2.2 System Administration Guide for procedures used to administer the CCD.


Note -

Although the installation procedure does not prevent you from choosing disks on the same controller, this would introduce a possible single point of failure and is not recommended.


2.2.11 Planning the Quorum Device (SSVM and CVM Only)

If you are using Cluster Volume Manager or Sun StorEdge Volume Manager as your cluster volume manager, you must configure a quorum device regardless of the number of cluster nodes. During the Sun Cluster installation process, scinstall(1M) will prompt you to configure a quorum device.

The quorum device is either an array controller or a disk.

During the cluster software installation, you will need to make decisions concerning:

2.2.11.1 Cluster Topology Considerations

Before you select the quorum device for your cluster, be aware of the implications of your selection. Any node pair in the cluster must have a quorum device. That is, one quorum device must be specified for every set of nodes that shares multihost disks. Each node in the cluster must be informed of all quorum devices in the cluster, not just the quorum device connected to it. The scinstall(1M) script offers all possible node pairs in sequence and displays any common devices that are quorum device candidates.

In two-node clusters with dual-ported disks, a single quorum device must be specified.

In clusters of more than two nodes with dual-ported disks, not all of the cluster nodes have access to the entire disk subsystem. In such configurations, you must specify one quorum device for each set of nodes that shares disks.

Sun Cluster configurations can include disk storage units (such as the Sun StorEdge A5000) that can be connected to all nodes in the cluster. This allows applications such as OPS to run on clusters of more than two nodes. A disk storage unit that is physically connected to all nodes in the cluster is referred to as a direct attached device. In this type of cluster, a single quorum device must be selected from a direct attached device.

In clusters with direct attached devices, if the cluster interconnect fails, one of the following will happen:

In clusters in which devices are not directly attached to all nodes, you will, by definition, have multiple quorum devices (one for each node pair that shares disks). In this configuration, the quorum device comes into play only when two nodes remain in the cluster and they share a common quorum device.

In the event of a node failure, the node that is able to reserve the quorum device remains as the sole survivor of the cluster. This is necessary to ensure the integrity of data on the shared disks.

2.2.12 Planning a Data Migration Strategy

Consider these points when deciding how to migrate existing data to the Sun Cluster environment.

2.2.13 Selecting a Multihost Backup Strategy

Before you load data onto the multihost disks in a Sun Cluster configuration, you should have a plan for backing up the data. Sun recommends using Solstice Backup™ or ufsdump to back up your Sun Cluster configuration.
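
For example, a full (level 0) dump of a logical host's file system to a local tape drive might be run as follows (the file system and tape device names are illustrative):

# ufsdump 0ucf /dev/rmt/0 /hahost1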

If you are converting your backup method from Online:Backup™ to Solstice Backup, special considerations exist because the two products are not compatible. The primary decision for the system administrator is whether or not the files backed up with Online:Backup will be available online after Solstice Backup is in use. Refer to the Solstice Backup documentation for details on conversion.

2.2.14 Planning for Problem Resolution

The following files should be saved after the system is configured and running. In the unlikely event that the cluster should experience problems, these files can help service providers debug and solve cluster problems.

2.3 Selecting a Solaris Install Method

You can install Solaris from a local CD-ROM or from a network install server using JumpStart. If you are installing several Solaris machines, consider a network install. Otherwise, use the local CD-ROM.


Note -

Configurations using FDDI as the primary public network cannot be network-installed directly using JumpStart because the FDDI drivers are unbundled and are not available in "mini-unix." If you use FDDI as the primary public network, you must install Solaris from CD-ROM.


2.4 Licensing

Sun Cluster 2.2 requires no framework or HA data service licenses to run. You do not need licenses to run Solstice DiskSuite or Cluster Volume Manager with Sun Cluster 2.2. However, you will need licenses for Sun StorEdge Volume Manager if you use it with any storage devices other than SPARCstorage Arrays or StorEdge A5000s. SPARCstorage Arrays and StorEdge A5000s include bundled licenses for use with SSVM. Contact the Sun License Center for any necessary SSVM licenses; see http://www.sun.com/licensing/ for more information.

You may need to obtain licenses for DBMS products and other third party products. Contact your third party service provider for third party product licenses.

2.5 Configuration Rules for Improved Reliability

The rules discussed in this section help ensure that your Sun Cluster configuration is highly available. These rules also help determine the appropriate hardware for your configuration.

2.5.1 Mirroring Guidelines

Unless you are using a RAID5 configuration, all multihost disks must be mirrored in Sun Cluster configurations. This enables the configuration to tolerate single-disk failures.

Consider these points when mirroring multihost disks:

2.5.1.1 Mirroring Root (/)

For maximum availability, you should mirror root (/), /usr, /var, /opt, and swap on the local disks. Under Sun StorEdge Volume Manager and Cluster Volume Manager, this means encapsulating the root disk and mirroring the generated subdisks. However, mirroring the root disk is not a requirement of Sun Cluster.

You should consider the risks, complexity, cost, and service time for the various alternatives concerning the root disk. There is not one answer for all configurations. You might want to consider your local Enterprise Services representative's preferred solution when deciding whether to mirror root.

Refer to your volume manager documentation for instructions on mirroring root.

Consider the following issues when deciding whether to mirror the root file system.

2.5.1.2 Solstice DiskSuite Mirroring Alternatives

Consider the following alternatives when deciding whether to mirror root (/) file systems under Solstice DiskSuite. The issues mentioned in this section are not applicable to Sun StorEdge Volume Manager or Cluster Volume Manager configurations.

2.6 Configuration Restrictions

This section describes Sun Cluster configuration restrictions.

2.6.1 Service and Application Restrictions

Note the following restrictions related to services and applications.

2.6.2 Sun Cluster HA for NFS Restrictions

Note the following restrictions related to Sun Cluster HA for NFS.

2.6.3 Hardware Restrictions

Note the following hardware-related restrictions.

2.6.4 Solstice DiskSuite Restrictions

Note the following restrictions related to Solstice DiskSuite.

2.6.5 Other Restrictions