This chapter provides planning information and guidelines specific to an Open HA Cluster 2009.06 configuration. The information in this chapter supplements or replaces guidelines in Chapter 1, Planning the Open HA Cluster Configuration for those features and functionality that are supported in an Open HA Cluster 2009.06 configuration. For information about Sun Cluster features that are not supported or are limited in an Open HA Cluster 2009.06 configuration, see Open HA Cluster 2009.06 Release Notes.
The following are the hardware and software requirements or defaults for an Open HA Cluster configuration:
Operating system – An Open HA Cluster 2009.06 configuration runs only on OpenSolaris 2009.06 software.
Hardware platform – An Open HA Cluster 2009.06 configuration runs on either SPARC based platforms or on 32-bit or 64-bit x86 based platforms.
All nodes in a cluster must run on the same platform. For x86 based platforms, you cannot use both 32-bit machines and 64-bit machines in the same cluster.
Hardware topology – An Open HA Cluster 2009.06 configuration consists of the following hardware components:
Exactly two physical cluster nodes that run on the same subnet
At least one network adapter per node
Optional shared storage
Root file system – ZFS is the default root file system.
The creation of a /globaldevices partition for use as the global-devices namespace is incompatible with a ZFS root file system. You must either configure a lofi device to host the global-devices namespace, or create the /globaldevices partition on a UFS root file system.
System shell – Korn shell 93 (ksh93) is the default system shell.
Administrator role – By default, the initial user account has the Primary Administrator profile.
Network interface manager – Network Auto-Magic (NWAM) is the default network interface manager. However, NWAM is incompatible with Open HA Cluster 2009.06 software, and you must disable it before you configure Open HA Cluster 2009.06 software.
DHCP – Open HA Cluster software uses certain network configuration files in ways that are incompatible with running DHCP clients with IPMP. Therefore, cluster nodes cannot be DHCP clients. You must disable DHCP and instead configure a static IP address for the public network.
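The following sketch shows one possible way to disable NWAM and configure a static IP address, as the two preceding items require. The adapter name e1000g0, the hostname phys-node-1, and the addresses are placeholder values that you would replace with the values for your own configuration.

    # Disable NWAM and enable traditional, static network configuration.
    svcadm disable svc:/network/physical:nwam
    svcadm enable svc:/network/physical:default

    # Assign a static address to the public-network adapter (example values).
    echo "phys-node-1" > /etc/hostname.e1000g0
    echo "192.168.10.11   phys-node-1" >> /etc/inet/hosts
    echo "192.168.10.0    255.255.255.0" >> /etc/inet/netmasks

    # A reboot, or re-plumbing the interface, applies the new configuration.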
Observe the following guidelines for IPMP groups in an Open HA Cluster configuration:
Link-based IPMP groups – At cluster installation time, automatically created IPMP groups are configured as link-based groups. If you want an IPMP group to be probe based, you must manually edit the /etc/hostname.adapter file on each node to add test addresses.
LogicalHostname and SharedAddress resources – If you configure a LogicalHostname or SharedAddress resource with a hostname that uses a single adapter, the automatically created IPMP group for that adapter is configured for link-based monitoring. You can later modify the /etc/hostname.adapter files for these IPMP groups to make them probe based, as in the following example.
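As an illustration only, a probe-based configuration for a single-adapter group might look like the following /etc/hostname.e1000g0 file. The adapter name e1000g0, the group name sc_ipmp0, and the test address 192.168.10.21 are assumptions that you would replace with the values in your own configuration. The first line plumbs the data address; the addif line adds a non-failover test address that the in.mpathd daemon uses for probes.

    phys-node-1 netmask + broadcast + group sc_ipmp0 up \
    addif 192.168.10.21 deprecated -failover netmask + broadcast + up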
Observe the following guidelines for the private interconnect in an Open HA Cluster configuration:
Optional private interconnect – The use of a physical private interconnect is optional. You can instead use the public network for cluster traffic by configuring virtual network interfaces, or VNICs.
Creation of VNICs – To use VNICs for the cluster transport, you can either configure the VNICs in advance or use the scinstall utility in Custom Mode to create them when you establish the cluster. For information about manually creating a VNIC, see How to Create a Virtual Network Interface (VNIC); a brief command sketch also follows these guidelines.
When you use the scinstall utility in Custom Mode to create a new VNIC, you specify the following information:
The name of the physical adapter, or NIC, to use
The physical adapter's MAC address, or automatic selection (auto)
The name to give the VNIC, using the naming convention vnicN
The VNICs are created when the cluster is configured and established.
Autodiscovery of adapters – If you use the scinstall utility in Custom Mode to create a VNIC for use by the first cluster node you configure, you cannot use autodiscovery of adapters for the rest of the cluster nodes. When you are prompted whether to use autodiscovery, type “No”.
Coexistence of physical and virtual adapters – You can use a combination of physical and virtual adapters in the cluster or on a single node. However, if there is a large difference in the bandwidth for the different NICs and VNICs, performance can be impacted by the lower-speed NICs during peak loads. Ensure that the NICs and VNICs you use in the same cluster have comparable bandwidth.
IP Security Architecture (IPsec) – Only use IPsec with Internet Key Exchange (IKE) for key management. Do not use the manual-key form of key management when you configure IPsec in an Open HA Cluster configuration.
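The following sketch shows one way to create a VNIC manually with the dladm command, as mentioned in the Creation of VNICs guideline above. The physical adapter name e1000g0 and the VNIC name vnic1 are assumed example values.

    # Check physical link speeds so that the NICs and VNICs used in the
    # same cluster have comparable bandwidth.
    dladm show-phys

    # Create a VNIC named vnic1 over the physical adapter e1000g0.
    # A specific MAC address can be supplied with the -m option;
    # otherwise, one is selected automatically.
    dladm create-vnic -l e1000g0 vnic1

    # Verify the new VNIC.
    dladm show-vnic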
iSCSI is a protocol that enables clients, called initiators, to send SCSI commands to SCSI storage devices, called targets, on remote servers. It is a Storage Area Network (SAN) protocol that enables the consolidation of storage into data-center storage arrays, while providing hosts with the illusion of locally attached disks. The use of iSCSI does not require special-purpose cabling. Instead, communication is run over long distances by using the existing network infrastructure.
Observe the following guidelines for configuring iSCSI storage in an Open HA Cluster configuration:
COMSTAR – Only COMSTAR-based iSCSI target implementations are supported in an Open HA Cluster 2009.06 configuration.
iSCSI target location – A disk that is exported as an iSCSI target must be a local disk that is directly attached to the cluster node that hosts the iSCSI target. You cannot use a disk as an iSCSI target if it is hosted by multiple nodes or if it is not directly attached to the cluster node.
Topology – Configure the hardware connections as shown in the following diagram. This diagram shows a two-node Open HA Cluster 2009.06 configuration that uses COMSTAR and a failover ZFS storage pool to provide high availability. The arrows indicate iSCSI connections. One or more connections provide a path from each node to the same disk on Node 1. In the cluster DID namespace, this becomes a single DID device, with paths from both nodes. Similarly, one or more connections provide a path from each node to the same disk on Node 2. This creates a second DID device. The mirroring of these two DID devices by using a ZFS storage pool creates a failover ZFS file system in the Open HA Cluster configuration.
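As an illustration of these guidelines, the following sketch outlines how a local disk on Node 1 might be exported as a COMSTAR iSCSI target and then discovered by the iSCSI initiator on each node. The device name, the GUID placeholder, and the IP address are assumptions; see the COMSTAR documentation for the complete procedures.

    # On Node 1: enable the COMSTAR framework and the iSCSI target service.
    svcadm enable stmf
    svcadm enable -r svc:/network/iscsi/target:default

    # Create a logical unit that is backed by a directly attached local
    # disk (device name is an example).
    sbdadm create-lu /dev/rdsk/c1t1d0s2

    # Make the logical unit visible to initiators, using the GUID that
    # the sbdadm create-lu command reports.
    stmfadm add-view <GUID>

    # Create an iSCSI target with an automatically generated IQN.
    itadm create-target

    # On each cluster node: point the iSCSI initiator at Node 1 and
    # discover the new device (address is an example).
    iscsiadm add discovery-address 192.168.10.11
    iscsiadm modify discovery --sendtargets enable
    devfsadm -i iscsi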