Oracle Solaris Cluster Software Installation Guide
This section provides the following guidelines for planning Solaris software installation in a cluster configuration.
For more information about Solaris software, see your Solaris installation documentation.
You can install Solaris software from a local DVD-ROM or from a network installation server by using the JumpStart installation method. In addition, Oracle Solaris Cluster software provides a custom method for installing both the Solaris OS and Oracle Solaris Cluster software by using the JumpStart installation method. If you are installing several cluster nodes, consider a network installation.
See How to Install Solaris and Oracle Solaris Cluster Software (JumpStart) for details about the scinstall JumpStart installation method. See your Solaris installation documentation for details about standard Solaris installation methods.
Consider the following points when you plan the use of the Solaris OS in an Oracle Solaris Cluster configuration:
To determine whether you can install an Oracle Solaris Cluster data service directly in a non-global zone, see the documentation for that data service.
If you configure non-global zones on a global-cluster node, the loopback file system (LOFS) must be enabled. See the information for LOFS for additional considerations.
Loopback file system (LOFS) – During cluster creation, LOFS capability is enabled by default. If the cluster meets both of the following conditions, you must disable LOFS to avoid switchover problems or other failures:
Oracle Solaris Cluster HA for NFS is configured on a highly available local file system.
The automountd daemon is running.
If the cluster meets only one of these conditions, you can safely enable LOFS.
If you require both LOFS and the automountd daemon to be enabled, exclude from the automounter map all files that are part of the highly available local file system that is exported by HA for NFS.
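If you do need to disable LOFS, a common approach on Solaris is to exclude the lofs module in /etc/system; the change takes effect at the next reboot:

```
* /etc/system entry that disables the loopback file system (LOFS)
exclude:lofs
```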
Power-saving shutdown – Automatic power-saving shutdown is not supported in Oracle Solaris Cluster configurations and should not be enabled. See the pmconfig(1M) and power.conf(4) man pages for more information.
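To keep automatic power-saving shutdown disabled, the behavior field of the autoshutdown entry in /etc/power.conf can be set to noshutdown; the idle-time and start/finish times shown here are illustrative only:

```
# /etc/power.conf autoshutdown entry with shutdown behavior disabled
# Fields: idle-time(min) start-time finish-time behavior
autoshutdown    30    9:00    9:00    noshutdown
```

Run pmconfig afterward to apply changes made to /etc/power.conf.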
IP Filter – Oracle Solaris Cluster software does not support the Solaris IP Filter feature for scalable services, but does support Solaris IP Filter for failover services. Observe the following guidelines and restrictions when you configure Solaris IP Filter in a cluster:
NAT routing is not supported.
The use of NAT for translation of local addresses is supported. NAT translation rewrites packets on the wire and is therefore transparent to the cluster software.
Stateful filtering rules are not supported; only stateless filtering is supported. Oracle Solaris Cluster relies on IP network multipathing (IPMP) for public network monitoring, which does not work with stateful filtering rules.
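A stateless rule set simply omits the keep state keywords. The following sketch of an /etc/ipf/ipf.conf fragment assumes a hypothetical interface name (bge0) and service port:

```
# Hypothetical stateless IP Filter rules (no "keep state" keywords)
pass in quick on bge0 proto tcp from any to any port = 80
pass out quick on bge0 all
block in on bge0 all
```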
fssnap – Oracle Solaris Cluster software does not support the fssnap command, which is a feature of UFS. However, you can use the fssnap command on local file systems that are not controlled by Oracle Solaris Cluster software. The following restrictions apply to fssnap support:
The fssnap command is supported on local file systems that are not managed by Oracle Solaris Cluster software.
The fssnap command is not supported on cluster file systems.
The fssnap command is not supported on local file systems under the control of HAStoragePlus.
Oracle Solaris Cluster 3.3 5/11 software requires at least the End User Solaris Software Group (SUNWCuser). However, other components of your cluster configuration might have their own Solaris software requirements as well. Consider the following information when you decide which Solaris software group you are installing.
Servers – Check your server documentation for any Solaris software requirements. For example, Sun Enterprise 10000 servers require the Entire Solaris Software Group Plus OEM Support.
Additional Solaris packages – You might need to install other Solaris software packages that are not part of the End User Solaris Software Group. The Apache HTTP server packages and Trusted Extensions software are two examples that require packages that are in a higher software group than End User. Third-party software might also require additional Solaris software packages. See your third-party documentation for any Solaris software requirements.
Tip - To avoid the need to manually install Solaris software packages, install the Entire Solaris Software Group Plus OEM Support.
Add this information to the appropriate Local File System Layout Worksheet.
When you install the Solaris OS, ensure that you create the required Oracle Solaris Cluster partitions and that all partitions meet minimum space requirements.
swap – The combined amount of swap space that is allocated for Solaris and Oracle Solaris Cluster software must be no less than 750 Mbytes. For best results, add at least 512 Mbytes for Oracle Solaris Cluster software to the amount that is required by the Solaris OS. In addition, allocate any additional swap amount that is required by applications that are to run on the Solaris host.
(Optional) /globaldevices – Create a file system at least 512 Mbytes large that is to be used by the scinstall(1M) utility for global devices. If you use a lofi device instead, you do not need to create this file system. Both choices are functionally equivalent.
Volume manager – Create a 20-Mbyte partition on slice 7 for volume manager use. If your cluster uses Veritas Volume Manager (VxVM) and you intend to encapsulate the root disk, you need to have two unused slices available for use by VxVM.
To meet these requirements, you must customize the partitioning if you are performing interactive installation of the Solaris OS.
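The swap guideline above can be expressed as a quick calculation. In this sketch, the OS and application swap figures are assumed values for illustration, not requirements from this guide:

```shell
# Sketch of the swap sizing guideline; os_swap_mb and app_swap_mb are
# assumed figures, not values specified by this guide.
os_swap_mb=512        # swap required by the Solaris OS (assumed)
cluster_extra_mb=512  # recommended addition for Oracle Solaris Cluster software
app_swap_mb=256       # swap required by local applications (assumed)

total_mb=$(( os_swap_mb + cluster_extra_mb + app_swap_mb ))

# The combined Solaris and Oracle Solaris Cluster swap must be at least 750 Mbytes
if [ "$total_mb" -lt 750 ]; then
    total_mb=750
fi
echo "allocate ${total_mb} Mbytes of swap"
```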
See the following guidelines for additional partition planning information:
As with any other system running the Solaris OS, you can configure the root (/), /var, /usr, and /opt directories as separate file systems. Or, you can include all the directories in the root (/) file system.
The following describes the software contents of the root (/), /var, /usr, and /opt directories in an Oracle Solaris Cluster configuration. Consider this information when you plan your partitioning scheme.
root (/) – The Oracle Solaris Cluster software itself occupies less than 40 Mbytes of space in the root (/) file system. Solaris Volume Manager software requires less than 5 Mbytes, and VxVM software requires less than 15 Mbytes. To configure ample additional space and inode capacity, add at least 100 Mbytes to the amount of space you would normally allocate for your root ( /) file system. This space is used for the creation of both block special devices and character special devices used by the volume management software. You especially need to allocate this extra space if a large number of shared disks are in the cluster.
/var – The Oracle Solaris Cluster software occupies a negligible amount of space in the /var file system at installation time. However, you need to set aside ample space for log files. Also, more messages might be logged on a clustered node than would be found on a typical stand-alone server. Therefore, allow at least 100 Mbytes for the /var file system.
/usr – Oracle Solaris Cluster software occupies less than 25 Mbytes of space in the /usr file system. Solaris Volume Manager and VxVM software each require less than 15 Mbytes.
/opt – Oracle Solaris Cluster framework software uses less than 2 Mbytes in the /opt file system. However, each Oracle Solaris Cluster data service might use between 1 Mbyte and 5 Mbytes. Solaris Volume Manager software does not use any space in the /opt file system. VxVM software can use over 40 Mbytes if all of its packages and tools are installed.
In addition, most database and applications software is installed in the /opt file system.
SPARC: If you use Sun Management Center software to monitor the cluster, you need an additional 25 Mbytes of space on each Solaris host to support the Sun Management Center agent and Oracle Solaris Cluster module packages.
Oracle Solaris Cluster software offers two choices of locations to host the global-devices namespace:
A lofi device
A dedicated file system on one of the local disks
This section describes the guidelines for using a dedicated partition. This information does not apply if you instead host the global-devices namespace on a lofi device.
The /globaldevices file system is usually located on your root disk. However, if you locate the global-devices file system on different storage, such as a Logical Volume Manager volume, that storage must not be part of a Solaris Volume Manager shared disk set or part of a VxVM disk group other than a root disk group. This file system is later mounted as a UFS cluster file system. Name this file system /globaldevices, which is the default name that is recognized by the scinstall(1M) command.
The global-devices file system itself must use UFS. However, a UFS global-devices file system can coexist on a node with other root file systems that use ZFS.
The scinstall command later renames the file system to /global/.devices/node@nodeid, where nodeid represents the number that is assigned to a Solaris host when it becomes a global-cluster member. The original /globaldevices mount point is removed.
The /globaldevices file system must have ample space and ample inode capacity for creating both block special devices and character special devices. This guideline is especially important if a large number of disks are in the cluster. Create a file system of at least 512 Mbytes with an inode density of 512 bytes per inode, as follows:
# newfs -i 512 globaldevices-partition
This number of inodes should suffice for most cluster configurations.
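With -i 512, newfs allocates roughly one inode per 512 bytes of file system space, so a 512-Mbyte file system receives on the order of a million inodes:

```shell
# Approximate inode count produced by "newfs -i 512" on a 512-Mbyte file system:
# one inode per nbpi (512) bytes of file system space.
fs_bytes=$(( 512 * 1024 * 1024 ))
nbpi=512
inodes=$(( fs_bytes / nbpi ))
echo "$inodes"   # about one million inodes
```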
If you use Solaris Volume Manager software, you must set aside a slice on the root disk for use in creating the state database replica. Specifically, set aside a slice for this purpose on each local disk. But, if you have only one local disk on a Solaris host, you might need to create three state database replicas in the same slice for Solaris Volume Manager software to function properly. See your Solaris Volume Manager documentation for more information.
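For example, on a host with a single local disk, the three replicas can be placed in one slice with the metadb command; the slice name here is a hypothetical placeholder:

```
# Hypothetical: create three state database replicas in slice 7 of the root disk
metadb -a -f -c 3 c0t0d0s7
```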
If you use Veritas Volume Manager (VxVM) and you intend to encapsulate the root disk, you need to have two unused slices that are available for use by VxVM. Additionally, you need to have some additional unassigned free space at either the beginning or the end of the disk. See your VxVM documentation for more information about root disk encapsulation.
Table 1-2 shows a partitioning scheme for a Solaris host that has less than 750 Mbytes of physical memory. This scheme accommodates the End User Solaris Software Group, Oracle Solaris Cluster software, and the Oracle Solaris Cluster HA for NFS data service. The last slice on the disk, slice 7, is allocated with a small amount of space for volume-manager use.
This layout allows for the use of either Solaris Volume Manager software or VxVM software. If you use Solaris Volume Manager software, you use slice 7 for the state database replica. If you use VxVM, you later free slice 7 by assigning the slice a zero length. This layout provides the necessary two free slices, 4 and 7, as well as provides for unused space at the end of the disk.
Table 1-2 Example File-System Allocation
For information about the purpose and function of Solaris zones in a cluster, see Support for Oracle Solaris Zones in Oracle Solaris Cluster Concepts Guide.
For guidelines about configuring a cluster of non-global zones, see Zone Clusters.
Consider the following points when you create a Solaris 10 non-global zone, simply referred to as a zone, on a global-cluster node.
Reusing a zone name on multiple nodes – To simplify cluster administration, you can use the same name for a zone on each node where resource groups are to be brought online in that zone.
Private IP addresses – Do not attempt to use more private IP addresses than are available in the cluster.
Mounts – Do not include global mounts in zone definitions. Include only loopback mounts.
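A loopback mount can be added to a zone definition with the zonecfg command; the zone name and paths in this sketch are hypothetical:

```
# Hypothetical zonecfg session adding a loopback (lofs) mount to zone "zone1"
zonecfg -z zone1 <<'EOF'
add fs
set dir=/data
set special=/export/data
set type=lofs
end
commit
EOF
```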
Failover services – In multiple-host clusters, while Oracle Solaris Cluster software permits you to specify different zones on the same Solaris host in a failover resource group's node list, doing so is useful only during testing. If a single host contains all zones in the node list, the node becomes a single point of failure for the resource group. For highest availability, zones in a failover resource group's node list should be on different hosts.
In single-host clusters, no functional risk is incurred if you specify multiple zones in a failover resource group's node list.
Scalable services – Do not create non-global zones for use in the same scalable service on the same Solaris host. Each instance of the scalable service must run on a different host.
Cluster file systems – For cluster file systems that use UFS or VxFS, do not directly add a cluster file system to a non-global zone by using the zonecfg command. Instead, configure an HAStoragePlus resource, which manages the mounting of the cluster file system in the global zone and performs a loopback mount of the cluster file system in the non-global zone.
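Such a configuration is typically created with the clresource command; the resource-group, resource, and mount-point names in this sketch are hypothetical:

```
# Hypothetical: HAStoragePlus resource that manages a cluster file system
# on behalf of a resource group whose node list includes non-global zones
clresource create -g app-rg -t SUNW.HAStoragePlus \
    -p FileSystemMountPoints=/global/appdata app-hasp-rs
```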
LOFS – Oracle Solaris Zones requires that the loopback file system (LOFS) be enabled. However, the Oracle Solaris Cluster HA for NFS data service requires that LOFS be disabled, to avoid switchover problems or other failures. If you configure both non-global zones and Oracle Solaris Cluster HA for NFS in your cluster, do one of the following to prevent possible problems in the data service:
Disable the automountd daemon.
Exclude from the automounter map all files that are part of the highly available local file system that is exported by Oracle Solaris Cluster HA for NFS.
Logical-hostname resource groups – In a resource group that contains a LogicalHostname resource, if the node list contains any non-global zone with the ip-type property set to exclusive, all zones in that node list must have this property set to exclusive. Note that a global zone always has the ip-type property set to shared, and therefore cannot coexist in a node list that contains zones of ip-type=exclusive. This restriction applies only to versions of the Oracle Solaris OS that use the Oracle Solaris zones ip-type property.
IPMP groups – For all public-network adapters that are used for data-service traffic in the non-global zone, you must manually configure IPMP groups in all /etc/hostname.adapter files on the zone. This information is not inherited from the global zone. For guidelines and instructions to configure IPMP groups, follow the procedures in Part VI, IPMP, in System Administration Guide: IP Services.
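For example, IPMP group membership can be declared directly in the zone's hostname file; the host name, adapter, and group name in this sketch are assumptions:

```
# Hypothetical /etc/hostname.bge0 for a non-global zone
zone1-host netmask + broadcast + group sc_ipmp0 up
```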
Private-hostname dependency – Exclusive-IP zones cannot depend on the private hostnames and private addresses of the cluster.
Shared-address resources – Shared-address resources cannot use exclusive-IP zones.
Consider the following points when you create a Sun Logical Domains (LDoms) I/O domain or guest domain on a physically clustered machine that is SPARC hypervisor capable:
SCSI LUN requirement – The virtual shared storage device, or virtual disk back end, of an LDoms guest domain must be a full SCSI LUN in the I/O domain. You cannot use an arbitrary virtual device.
Fencing – Do not export a storage LUN to more than one guest domain on the same physical machine, unless you also disable fencing for that device. Otherwise, if a device is visible to two different guest domains on the same machine, the device will be fenced whenever one of the guest domains dies. The fencing of the device will panic any other guest domain that subsequently tries to access the device.
Network isolation – Guest domains that are located on the same physical machine but are configured in different clusters must be network isolated from each other. Use one of the following methods:
Configure the clusters to use different network interfaces in the I/O domain for the private network.
Use different network addresses for each of the clusters.
Networking in guest domains – Network packets to and from guest domains must traverse service domains to reach the network drivers through virtual switches. Virtual switches use kernel threads that run at system priority. The virtual-switch threads must be able to acquire needed CPU resources to perform critical cluster operations, including heartbeats, membership, checkpoints, and so forth. Configuring virtual switches with the mode=sc setting enables expedited handling of cluster heartbeat packets. However, the reliability of other critical cluster operations can be enhanced by adding more CPU resources to the service domain under the following workloads:
High-interrupt load, for example, due to network or disk I/O. Under extreme load, virtual switches can preclude system threads from running for a long time, including virtual-switch threads.
Real-time threads that are overly aggressive in retaining CPU resources. Real-time threads run at a higher priority than virtual-switch threads, which can restrict CPU resources for virtual-switch threads for an extended time.
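The mode=sc setting described above is applied to a virtual switch with the ldm command; the switch name in this sketch is a hypothetical placeholder:

```
# Hypothetical: enable expedited handling of cluster heartbeat packets
# on the virtual switch that serves the private interconnect
ldm set-vsw mode=sc private-vsw0
```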
Non-shared storage – For non-shared storage, such as for LDoms guest-domain OS images, you can use any type of virtual device. You can back such virtual devices by any implementation in the I/O domain, such as files or volumes. However, do not copy files or clone volumes in the I/O domain for the purpose of mapping them into different guest domains of the same cluster. Such copying or cloning would lead to problems because the resulting virtual devices would have the same device identity in different guest domains. Always create a new file or device in the I/O domain, which would be assigned a unique device identity, then map the new file or device into a different guest domain.
Exporting storage from I/O domains – If you configure a cluster that is composed of LDoms I/O domains, do not export its storage devices to other guest domains that also run Oracle Solaris Cluster software.
Solaris I/O multipathing – Do not run Solaris I/O multipathing software (MPxIO) from guest domains. Instead, run Solaris I/O multipathing software in the I/O domain and export the multipathed storage to the guest domains.
Private-interconnect IP address range – The private network is shared by all guest domains that are created on the same physical machine and it is visible to all these domains. Before you specify a private-network IP address range to the scinstall utility for use by a guest-domain cluster, ensure that the address range is not already in use by another guest domain on the same physical machine.
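The private-network settings already in use on an existing cluster on the same machine can be displayed with the cluster command, which helps when checking for overlapping ranges:

```
# Show the private-network address and netmask in use by an existing cluster
cluster show-netprops
```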
For more information about Sun Logical Domains, see the Logical Domains (LDoms) 1.0.3 Administration Guide.