CHAPTER 2
Plan the installation and configuration of your cluster thoroughly to avoid setbacks and delays. Choose the size and hardware configuration of your cluster to suit your purpose. The following sections describe the considerations involved in making these choices:
Configurations provided in this section are examples of supported configurations. If you plan to install a hardware configuration that is not described in the following examples, especially if you are using a mix of hardware, ask your support team for information about the supported configurations and available configuration options.
Evaluate the Netra HA Suite software on a cluster that is easy to set up, for example, a two-node or a four-node cluster. Use a two-node cluster (master-eligible nodes only) if you do not plan to use client (diskless or dataless) nodes with your application. Use a four-node cluster (two master-eligible nodes and two master-ineligible nodes) in all other cases.
The quickest way to set up a two-node cluster is to use rack-mounted servers. Whichever rack-mounted servers you use, the cluster must be configured as follows, with IP replication used for data sharing between the two master-eligible nodes:
Two rack-mounted servers configured as master-eligible nodes.
Some partitions of the rack-mounted servers' internal disks configured as replicated partitions (for storage of highly available data).
On-board Gigabit Ethernet interfaces (or supplementary cards) of the two rack-mounted servers for the configuration of the cluster network and (optionally) for providing external access to the cluster. In the latter case, use at least four NICs.
Two external Ethernet switches (see Chapter 1 of the Netra High Availability Suite 3.0 1/08 Foundation Services Installation Guide for information about configuring and connecting the switches).
One terminal server to manage the consoles (see Chapter 1 of the Netra High Availability Suite 3.0 1/08 Foundation Services Installation Guide for information about configuring and connecting the terminal server).
The following list provides some examples of a two-node cluster based on rack-mounted servers:
Two-node cluster with two Netra 120 or two SunFire V210 servers
Two-node cluster with two Netra 240 or two SunFire V240 servers
Two-node cluster with two Netra 440 or two SunFire V440 servers
Choose Netra T2000 servers if you want to test the behavior and performance of the Netra HA Suite Foundation Services on CMT-based hardware.
If you already have a two-node cluster set up as described in the preceding section and want to evaluate client nodes, you can either add new rack-mounted servers connected to the cluster network, or add an ATCA chassis with two blades to your configuration. The second option is recommended because it offers more flexibility in the choice of client nodes. For example, you can use SPARC®, Opteron™, or CMT-based blades running the Solaris OS or Linux, and you can easily test scalability later by adding up to 12 blades to the chassis.

A four-node cluster based on two rack-mounted servers and two ATCA blades must contain the following:
Two rack-mounted servers configured as master-eligible nodes.
One ATCA (Netra CT 900) chassis and two ATCA blades configured as master-ineligible nodes (diskless or dataless).
Replicated partitions defined on the internal disks of the rack-mounted servers (for storage of highly available data).
On-board Gigabit Ethernet interfaces (or supplementary cards) of the rack-mounted servers for the configuration of the cluster network and (optionally) for providing external access to the cluster. In the latter case, use of at least four NICs is recommended.
Two external Ethernet switches (see Chapter 1 of the Netra High Availability Suite 3.0 1/08 Foundation Services Installation Guide for information about configuring and connecting the switches).
One terminal server to manage the consoles (see Chapter 1 of the Netra High Availability Suite 3.0 1/08 Foundation Services Installation Guide for information about configuring and connecting the terminal server).
The following are examples of a four-node cluster based on two rack-mounted servers and two ATCA blades:
Four-node cluster with two Netra 240 servers and one Netra CT 900 chassis containing two Netra CP3010 or two Netra CP3020 blades.
Four-node cluster with two Netra 440 servers and one Netra CT 900 chassis containing two Netra CP3010 or two Netra CP3020 blades.
Four-node cluster with two Netra T2000 servers and one Netra CT 900 chassis containing two Netra CP3060 blades.
If you are installing a four-node cluster from scratch, use an ATCA blade server (a Netra CT 900 chassis with Netra CP30xx blades) to ensure maximum flexibility and scalability. A four-node cluster based on an ATCA blade server must be configured with the following hardware:
One ATCA (Netra CT 900) chassis containing all four blades, or two ATCA chassis each containing two blades (one MEN and one NMEN).
Two ATCA blades (Netra CP30xx) configured as master-eligible nodes.
Two ATCA blades (Netra CP30xx) configured as master-ineligible nodes (diskless or dataless).
Replicated partitions defined on the internal disks of the two MENs (for storage of highly available data).
Internal ATCA base or extended fabrics used for the cluster network configuration. Other Gigabit Ethernet interfaces (available on the blades) are used for connecting external networks (if required).
Two ATCA switch blades (see Chapter 1 of the Netra High Availability Suite 3.0 1/08 Foundation Services Installation Guide for information about configuring and connecting the switches).
Two ATCA shelf manager blades to manage the consoles (see Chapter 1 of the Netra High Availability Suite 3.0 1/08 Foundation Services Installation Guide for information about configuring and connecting the terminal server).
The following are examples of a four-node cluster based on an ATCA blade server:
Four-node cluster with one Netra CT 900 chassis containing four Netra CP3010, four Netra CP3020, or four Netra CP3060 blades.
Four-node cluster with one Netra CT 900 chassis containing two Netra CP3010 blades and two Netra CP3060 blades, or two Netra CP3020 blades and two Netra CP3060 blades.
Four-node cluster with two Netra CT 900 chassis containing two Netra CP3020 or two Netra CP3060 blades in each chassis (one MEN and one NMEN in each chassis).
The details provided in Choosing Hardware for Evaluation Purposes also apply to testing real applications, but the size of your cluster must be aligned with your performance expectations. This means that you might have to build clusters of up to 64 nodes (the maximum number of nodes currently supported by the Netra HA Suite software).

Consider the following hardware configurations based on the number of nodes you need in a cluster. These suggestions also provide an indication of memory requirements for the Foundation Services.
Each cluster must have two master-eligible nodes. You can have a mix of diskless nodes and dataless nodes in a cluster. For definitions of the types of nodes, see the Netra High Availability Suite 3.0 1/08 Foundation Services Glossary.
In a sample 18-node cluster, the memory footprint of the running daemons is as follows:
The total memory used for Foundation Services-related daemons is approximately 70 megabytes for the master and vice-master nodes, and 55 megabytes for the remaining nodes.
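As a rough sizing aid, the following sketch (in C, for illustration only) shows how these figures translate into a cluster-wide estimate. It assumes that the quoted footprints apply per node and remain roughly constant as the cluster grows; adjust the node counts and per-node figures to match measurements on your own cluster.

/*
 * Rough cluster-wide estimate of the memory used by Foundation
 * Services-related daemons, based on the approximate per-node figures
 * quoted above. Treating these figures as per-node constants is an
 * assumption; verify against measurements on your own cluster.
 */
#include <stdio.h>

#define MEN_FOOTPRINT_MB   70   /* master and vice-master nodes */
#define OTHER_FOOTPRINT_MB 55   /* diskless and dataless (client) nodes */

int main(void)
{
    int total_nodes = 18;   /* sample cluster; up to 64 nodes are supported */
    int men_nodes   = 2;    /* every cluster has two master-eligible nodes */
    int other_nodes = total_nodes - men_nodes;

    int total_mb = men_nodes * MEN_FOOTPRINT_MB
                 + other_nodes * OTHER_FOOTPRINT_MB;

    /* For 18 nodes: 2 * 70 + 16 * 55 = 1020 Mbytes across the cluster. */
    printf("Estimated daemon memory for %d nodes: %d Mbytes\n",
           total_nodes, total_mb);
    return 0;
}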
In general, the choice of OS for a cluster is a strategic decision made at the corporate level, not a technical one.
To get the best use of the Netra HA Suite Foundation Services, run them under the Solaris OS (primarily the Solaris 10 OS), which is supported on all of the hardware referenced in this guide. Every service of the Foundation Services is available under the Solaris OS.
If you choose to use Linux, you can use only Netra CP3020 blades in an ATCA chassis to run the Netra HA Suite Foundation Services. Also, some services are not available at all under Linux (for example, diskless support). Further, some services have limitations under Linux (for example, IPv6 addresses are not supported on an external network). For information about the limitations that exist under Linux, see the Netra High Availability Suite 3.0 1/08 Release Notes.
If you have to use Linux to run your application, a good compromise could be to have two MENs on rack-mounted servers running the Solaris 10 OS, with some NMENs (Netra CP3020 blades in an ATCA chassis) running your application under Linux. This configuration enables you to run your application under Linux while benefiting from the Netra HA Suite services running under the Solaris OS.
An installation server is required for all installation methods. An installation server enables you to install the operating system (Solaris or Linux) and the Netra High Availability (HA) Suite software on the cluster.
The installation server requires the following.
If you are developing applications that you plan to deploy on a cluster running the Foundation Services, you can install a development host. The development host is an optional hardware component: it can be hosted on one or more additional servers, or the installation server can also be used for the development environment. If you are developing applications using the Cluster Membership Manager (CMM) API or the Service Availability Forum/Cluster Manager (SA Forum/CLM) API, you might require specific software. For more information about CMM, SA Forum/CLM, and the specific software required to develop applications for your cluster, see the Netra High Availability Suite 3.0 1/08 Foundation Services CMM Programming Guide and the Netra High Availability Suite 3.0 1/08 Foundation Services SA Forum Programming Guide.
The development host requires the following:
Hardware requirements | UltraSPARC platform and i386 Sun platforms
Operating system | Solaris Operating System or Linux Operating System
Software requirements | Sun™ Studio 10 software
Disk capacity | 1.3 to 2.6 Gbytes, depending on the Solaris OS version in use
Free space | Minimum 1.5 Gbytes after the Solaris OS has been installed
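If you develop against the CMM API on this host, a client program typically takes roughly the following shape. This is an illustrative sketch only: the header name, the cmm_member_t type, the cmm_master_getinfo() call, and the CMM_OK return code are assumptions based on the C API described in the Netra High Availability Suite 3.0 1/08 Foundation Services CMM Programming Guide; verify the exact names, signatures, and link options in that guide before building.

/*
 * Illustrative sketch of a CMM client that asks the Cluster Membership
 * Manager for information about the current master node. All CMM names
 * used here (cmm.h, cmm_member_t, cmm_master_getinfo, CMM_OK) are
 * assumptions to be checked against the CMM Programming Guide, which
 * also describes the library to link against.
 */
#include <stdio.h>
#include <cmm.h>                 /* assumed CMM API header */

int main(void)
{
    cmm_member_t master;         /* assumed member-description structure */

    if (cmm_master_getinfo(&master) != CMM_OK) {
        fprintf(stderr, "Could not retrieve master node information\n");
        return 1;
    }

    printf("Master node information retrieved; the cluster has an elected master.\n");
    return 0;
}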