|Oracle Solaris Cluster Software Installation Guide Oracle Solaris Cluster 4.1|
This section provides the following guidelines for planning Oracle Solaris software installation in a cluster configuration:
For more information about Oracle Solaris software, see your Oracle Solaris installation documentation.
You can install Oracle Solaris software from a local DVD-ROM or from a network installation server by using the Automated Installer (AI) installation method. In addition, Oracle Solaris Cluster software provides a custom method for installing both the Oracle Solaris OS and Oracle Solaris Cluster software by using the AI installation method. During AI installation of Oracle Solaris software, you choose to either install the OS with defaults accepted or run an interactive installation of the OS where you can customize the installation for components such as the boot disk and the ZFS root pool. If you are installing several cluster nodes, consider a network installation.
See How to Install and Configure Oracle Solaris and Oracle Solaris Cluster Software (Automated Installer) for details about the scinstall AI installation method. See your Oracle Solaris installation documentation for details about standard Oracle Solaris installation methods and what configuration choices you must make during installation of the OS.
Consider the following points when you plan the use of the Oracle Solaris OS in an Oracle Solaris Cluster configuration:
Oracle Solaris Zones – Install Oracle Solaris Cluster framework software only in the global zone.
Loopback file system (LOFS) – During cluster creation, LOFS capability is enabled by default. If the cluster meets both of the following conditions, you must disable LOFS to avoid switchover problems or other failures: HA for NFS is configured on a highly available local file system, and the automountd daemon is running.
If the cluster meets only one of these conditions, you can safely enable LOFS.
If you require both LOFS and the automountd daemon to be enabled, exclude from the automounter map all files that are part of the highly available local file system that is exported by HA for NFS.
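The exclusion described above can be sketched as follows; the exported path /global/ha-nfs and the map file /etc/auto_direct are hypothetical examples, not values from this guide:

```shell
# Sketch, assuming HA for NFS exports /global/ha-nfs (hypothetical path):
# 1. Remove any /global/ha-nfs entry from the automounter map, for
#    example /etc/auto_direct, then refresh the automounter service:
# svcadm restart svc:/system/filesystem/autofs:default
#
# 2. Alternatively, to disable LOFS entirely, add this line to
#    /etc/system on each cluster node and reboot:
# exclude: lofs
```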
Power-saving shutdown – Automatic power-saving shutdown is not supported in Oracle Solaris Cluster configurations and should not be enabled. See the poweradm(1M) man page for more information.
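As a sketch, power management can be checked and disabled with poweradm; confirm the property name against the poweradm(1M) man page for your Oracle Solaris release:

```shell
# Show the current power-management authority setting
poweradm get administrative-authority
# Disable automatic power management, then commit the change
poweradm set administrative-authority=none
poweradm update
```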
Network Auto-Magic (NWAM) – The Oracle Solaris Network Auto-Magic (NWAM) feature activates a single network interface and disables all others. For this reason, NWAM cannot coexist with the Oracle Solaris Cluster software and you must disable it before you configure or run your cluster.
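Disabling NWAM amounts to activating the fixed network configuration profile, which might be sketched as:

```shell
# Show which network configuration profile is currently active
netadm list
# Disable NWAM by enabling the DefaultFixed profile on each node
netadm enable -p ncp DefaultFixed
```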
IP Filter feature – Oracle Solaris Cluster relies on IP network multipathing (IPMP) for public network monitoring. Any IP Filter configuration must be made in accordance with IPMP configuration guidelines and restrictions concerning IP Filter.
fssnap – Oracle Solaris Cluster software does not support the fssnap command, which is a feature of UFS. However, you can use the fssnap command on local systems that are not controlled by Oracle Solaris Cluster software. The following restrictions apply to fssnap support:
The fssnap command is supported on local file systems that are not managed by Oracle Solaris Cluster software.
The fssnap command is not supported on cluster file systems.
The fssnap command is not supported on local file systems under the control of HAStoragePlus.
When you install the Oracle Solaris OS, ensure that you create the required Oracle Solaris Cluster partitions and that all partitions meet minimum space requirements.
The Oracle Solaris Cluster software itself occupies less than 40 Mbytes of space in the root (/) file system.
Each Oracle Solaris Cluster data service might use between 1 Mbyte and 5 Mbytes.
Solaris Volume Manager software requires less than 5 Mbytes.
To configure ample additional space and inode capacity, add at least 100 Mbytes to the amount of space you would normally allocate for your root (/) file system. This space is used for the creation of both block special devices and character special devices used by the volume management software. You especially need to allocate this extra space if a large number of shared disks are in the cluster.
/var – The Oracle Solaris Cluster software occupies a negligible amount of space in the /var file system at installation time. However, you need to set aside ample space for log files. Also, more messages might be logged on a clustered node than would be found on a typical stand-alone server. Therefore, allow at least 100 Mbytes for the /var file system.
swap – The combined amount of swap space that is allocated for Oracle Solaris and Oracle Solaris Cluster software must be no less than 750 Mbytes. For best results, add at least 512 Mbytes for Oracle Solaris Cluster software to the amount that is required by the Oracle Solaris OS. In addition, allocate any additional swap amount that is required by applications that are to run on the Oracle Solaris host.
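The swap sizing rule above can be sketched as simple arithmetic; the OS and application amounts below are assumed example values, not figures from this guide:

```shell
# Hypothetical sizing helper: the 750-Mbyte floor and the 512-Mbyte
# cluster add-on come from the guideline above; the other values are
# assumed per-host inputs you would substitute.
OS_SWAP_MB=1024       # swap required by the Oracle Solaris OS itself
CLUSTER_SWAP_MB=512   # recommended add-on for Oracle Solaris Cluster
APP_SWAP_MB=256       # assumed requirement of applications on this host

TOTAL_MB=$((OS_SWAP_MB + CLUSTER_SWAP_MB + APP_SWAP_MB))
# Enforce the 750-Mbyte minimum for combined Solaris + cluster swap
[ "$TOTAL_MB" -lt 750 ] && TOTAL_MB=750
echo "Allocate at least ${TOTAL_MB} Mbytes of swap"
```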
To support Solaris Volume Manager, you can create the partition for its state database replicas on one of the following locations:
A local disk other than the ZFS root pool
The ZFS root pool, if the ZFS root pool is on a partition rather than a disk
Set aside a slice for this purpose on each local disk. However, if you have only one local disk on an Oracle Solaris host, you might need to create three state database replicas in the same slice for Solaris Volume Manager software to function properly. See Solaris Volume Manager Administration Guide for more information.
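The single-disk case above might look like the following sketch; the slice name c0t0d0s7 is a hypothetical example:

```shell
# Sketch, assuming slice c0t0d0s7 (hypothetical) is reserved for
# Solaris Volume Manager state database replicas on a host with only
# one local disk: create three replicas in the one slice.
metadb -a -f -c 3 c0t0d0s7
# List the replicas to confirm all three were created
metadb
```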
To meet these requirements, you must customize the partitioning if you are performing interactive installation of the Oracle Solaris OS.
Consider the following points when you create an Oracle VM Server for SPARC I/O domain or guest domain on a physically clustered machine that is SPARC hypervisor capable:
SCSI LUN requirement – The virtual shared storage device, or virtual disk back end, of an Oracle VM Server for SPARC guest domain must be a full SCSI LUN in the I/O domain. You cannot use an arbitrary virtual device.
Fencing – Do not export a storage LUN to more than one guest domain on the same physical machine unless you also disable fencing for that device. Otherwise, if a device is visible to two different guest domains on the same machine, the device will be fenced whenever one of the guest domains halts. The fencing of the device will panic any other guest domain that subsequently tries to access the device.
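Disabling fencing for such a device can be sketched with the cldevice command; the DID device name d5 is a hypothetical example:

```shell
# Sketch, assuming DID device d5 (hypothetical) is exported to more
# than one guest domain on the same physical machine:
cldevice set -p default_fencing=nofencing d5
# Verify the fencing setting for the device
cldevice show d5
```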
Network isolation – Guest domains that are located on the same physical machine but are configured in different clusters must be network isolated from each other. Use one of the following methods:
Configure the clusters to use different network interfaces in the I/O domain for the private network.
Use different network addresses for each of the clusters when you perform initial configuration of the clusters.
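The first method, separate interfaces in the I/O domain, might be sketched as follows; the interface names net1 and net2 and the virtual switch names are hypothetical:

```shell
# Sketch, assuming net1 and net2 (hypothetical) are separate physical
# interfaces in the I/O domain: give each cluster its own virtual
# switch over a different interface for its private network.
ldm add-vsw net-dev=net1 private-vsw-cl1 primary   # cluster 1 private net
ldm add-vsw net-dev=net2 private-vsw-cl2 primary   # cluster 2 private net
```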
Networking in guest domains – Network packets to and from guest domains must traverse service domains to reach the network drivers through virtual switches. Virtual switches use kernel threads that run at system priority. The virtual-switch threads must be able to acquire needed CPU resources to perform critical cluster operations, including heartbeats, membership, checkpoints, and so forth. Configuring virtual switches with the mode=sc setting enables expedited handling of cluster heartbeat packets. However, the reliability of other critical cluster operations can be enhanced by adding more CPU resources to the service domain under the following workloads:
High-interrupt load, for example, due to network or disk I/O. Under extreme load, virtual switches can preclude system threads from running for a long time, including virtual-switch threads.
Real-time threads that are overly aggressive in retaining CPU resources. Real-time threads run at a higher priority than virtual-switch threads, which can restrict CPU resources for virtual-switch threads for an extended time.
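Both mitigations named above, the mode=sc setting and additional service-domain CPU resources, can be sketched as follows; the virtual switch name private-vsw and the CPU count are hypothetical examples:

```shell
# Sketch, assuming a virtual switch named private-vsw (hypothetical)
# carries cluster private-network traffic:
ldm set-vsw mode=sc private-vsw   # expedite cluster heartbeat packets
# Under heavy interrupt load, grant the service domain more CPUs
# (16 is an assumed example value):
ldm set-vcpu 16 primary
```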
Non-shared storage – For non-shared storage, such as for Oracle VM Server for SPARC guest-domain OS images, you can use any type of virtual device. You can back such virtual devices by any implementation in the I/O domain, such as files or volumes. However, do not copy files or clone volumes in the I/O domain for the purpose of mapping them into different guest domains of the same cluster. Such copying or cloning would lead to problems because the resulting virtual devices would have the same device identity in different guest domains. Always create a new file or device in the I/O domain, which would be assigned a unique device identity, then map the new file or device into a different guest domain.
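Creating a distinct backing file per guest domain, rather than copying one image, might be sketched as follows; the domain names guest1 and guest2, the paths, and the volume names are hypothetical:

```shell
# Sketch: create a separate backing file per guest domain so each
# virtual disk receives its own unique device identity.
mkfile 20g /ldoms/guest1-os.img
mkfile 20g /ldoms/guest2-os.img
ldm add-vdsdev /ldoms/guest1-os.img g1-os@primary-vds0
ldm add-vdsdev /ldoms/guest2-os.img g2-os@primary-vds0
ldm add-vdisk osdisk g1-os@primary-vds0 guest1
ldm add-vdisk osdisk g2-os@primary-vds0 guest2
```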
Exporting storage from I/O domains – If you configure a cluster that is composed of Oracle VM Server for SPARC I/O domains, do not export its storage devices to other guest domains that also run Oracle Solaris Cluster software.
Oracle Solaris I/O multipathing – Do not run Oracle Solaris I/O multipathing software (MPxIO) from guest domains. Instead, run Oracle Solaris I/O multipathing software in the I/O domain and export the multipathed devices to the guest domains.
Live migration restriction – Live migration is not supported for logical domains that are configured to run as cluster nodes. However, logical domains that are configured to be managed by the HA for Oracle VM Server for SPARC data service can use live migration.
For more information about Oracle VM Server for SPARC, see the Oracle VM Server for SPARC 2.1 Administration Guide.