CHAPTER 2

Choosing Hardware and Software for the Cluster and the Installation Server

Plan the installation and configuration of your cluster thoroughly to avoid setbacks and delays. Choose the size and hardware configuration of your cluster to suit your purpose. The following sections describe the considerations involved in making these choices.


Choosing Hardware for a Netra HA Suite Cluster

Configurations provided in this section are examples of supported configurations. If you plan to install a hardware configuration that is not described in the following examples, especially if you are using a mix of hardware, ask your support team for information about the supported configurations and available configuration options.

Choosing Hardware for Evaluation Purposes

Perform evaluations of the Netra HA Suite software on a cluster that is easy to set up, for example, either a two-node or four-node cluster. Use a two-node cluster (master-eligible nodes only) if you do not plan to use client (diskless or dataless) nodes in your cluster with your application. Use a four-node cluster (two master-eligible nodes and two master-ineligible nodes) in other cases.

Two-Node Cluster

The quickest setup for a two-node cluster is to use rackmounted servers. Whichever rackmounted servers you use, the cluster must be configured as follows, with IP replication used for data sharing between the two master-eligible nodes (a sketch of the replication setup follows this list):

  • Two rackmounted servers configured as master-eligible nodes.

  • Some partitions of the rackmounted servers' internal disks configured as replicated partitions (for storage of highly available data).

  • On-board Gigabit Ethernet interfaces (or supplementary cards) on the two rackmounted servers for the cluster network and, optionally, for external access to the cluster. In the latter case, use at least four NICs.

  • Two external Ethernet switches (see Chapter 1 of the Netra High Availability Suite 3.0 1/08 Foundation Services Installation Guide for information about configuring and connecting the switches).

  • One terminal server to manage the consoles (see Chapter 1 of the Netra High Availability Suite 3.0 1/08 Foundation Services Installation Guide for information about configuring and connecting the terminal server).
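In a configuration like this, the replicated partitions are kept synchronized over the cluster network by the SNDR data replication facility (its memory footprint is listed later in TABLE 2-2). The replication sets are normally defined for you by the Netra HA Suite installation tools, but the following sketch shows what an equivalent manual definition of a single set might look like. The host names (men1, men2), data slices, and bitmap slices are hypothetical examples, and the exact sndradm options can vary between releases; consult the sndradm man page before using them.

  # A minimal sketch, assuming SNDR provides the IP replication between the
  # two master-eligible nodes. Host names, data slices, and bitmap slices
  # below are placeholders; the installation tools normally create the sets.

  # On the primary (master) node, enable one replicated set in synchronous mode.
  sndradm -e men1 /dev/rdsk/c0t0d0s5 /dev/rdsk/c0t0d0s6 \
             men2 /dev/rdsk/c0t0d0s5 /dev/rdsk/c0t0d0s6 ip sync

  # Display the state of the configured replication sets.
  sndradm -P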

The following list provides some examples of a two-node cluster based on rackmounted servers:

  • Two-node cluster with two Netra 120 or two Sun Fire V210 servers

  • Two-node cluster with two Netra 240 or two Sun Fire V240 servers

  • Two-node cluster with two Netra 440 or two Sun Fire V440 servers

  • Two-node cluster with two Netra T2000 servers

Choose Netra T2000 servers if you want to test the behavior and performance of the Netra HA Suite Foundation Services on CMT-based hardware.



Note - If you need to evaluate data sharing between the two master-eligible nodes (MENs) by using shared disks (instead of IP replication), add an external disk array to your configuration. Check with your support team to determine which external disks are supported for use on a particular server.





Note - All of the servers listed in this section run only the Solaris OS. To evaluate the Foundation Services on Linux, you must set up a cluster with two Netra CP3020 blades (Opteron-based blades) in an ATCA chassis (preferably the Netra CT 900 chassis).



Four-Node Cluster

If you already have a two-node cluster set up as described in the preceding section and want to evaluate client nodes, you can either add new rackmounted servers connected to the cluster network or add an ATCA chassis with two blades to your configuration. The second option is recommended because it offers more flexibility in the choice of client nodes. For example, you can use SPARC®, Opteron™, or CMT-based blades running the Solaris OS or Linux, and you can later test scalability because you can easily add up to 12 blades to the chassis.

A four-node cluster based on two rackmounted servers and two ATCA blades must contain the following:

  • Two rackmounted servers configured as master-eligible nodes.

  • One ATCA (Netra CT 900) chassis and two ATCA blades configured as master-ineligible nodes (diskless or dataless).

  • Replicated partitions defined on the internal disks of the rackmounted servers (for storage of highly available data).

  • On-board Gigabit Ethernet interfaces (or supplementary cards) on the rackmounted servers for the cluster network and, optionally, for external access to the cluster. In the latter case, use of at least four NICs is recommended.

  • Two external Ethernet switches (see Chapter 1 of the Netra High Availability Suite 3.0 1/08 Foundation Services Installation Guide for information about configuring and connecting the switches).

  • One terminal server to manage the consoles (see Chapter 1 of the Netra High Availability Suite 3.0 1/08 Foundation Services Installation Guide for information about configuring and connecting the terminal server).
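After the Foundation Services are installed on such a configuration, you can confirm that both master-eligible nodes and both client nodes have joined the cluster by querying the Cluster Membership Manager from the master node. The sketch below assumes the nhcmmstat tool delivered with the Foundation Services; the option syntax shown is an assumption, so check the nhcmmstat man page for your release.

  # On the master node, display the role and state of every peer node
  # (option syntax is an assumption; see the nhcmmstat man page).
  nhcmmstat -c all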

The following are examples of a four-node cluster based on two rackmounted servers and two ATCA blades:

  • Four-node cluster with two Netra 240 servers and one Netra CT 900 chassis containing two Netra CP3010 or two Netra CP3020 blades.

  • Four-node cluster with two Netra 440 servers and one Netra CT 900 chassis containing two Netra CP3010 or two Netra CP3020 blades.

  • Four-node cluster with two Netra T2000 servers and one Netra CT 900 chassis containing two Netra CP3060 blades.



Note - If you need to evaluate data sharing between two MENs by using shared disks (instead of IP replication), add an external disk array to your configuration. Check with your support team to determine which external disks are supported for use on a particular server.





Note - To evaluate the Foundation Services on a cluster in which all nodes run Linux, you must set up a cluster that contains only Netra CP3020 blades (Opteron-based blades) in one or two ATCA chassis (preferably the Netra CT 900 chassis).



If you are installing a four-node cluster from scratch, use an ATCA blade server (a Netra CT 900 chassis with Netra CP30xx blades) to ensure maximum flexibility and scalability. A four-node cluster based on an ATCA blade server must be configured with the following hardware:

  • One ATCA (Netra CT 900) chassis containing all four blades, or two ATCA chassis, each containing one MEN and one master-ineligible node (NMEN).

  • Two ATCA blades (Netra CP30xx) configured as master-eligible nodes.

  • Two ATCA blades (Netra CP30xx) configured as master-ineligible nodes (diskless or dataless).

  • Replicated partitions defined on the internal disks of the two MENs (for storage of highly available data).

  • Internal ATCA base or extended fabrics used for the cluster network. Other Gigabit Ethernet interfaces (available on the blades) are used for connecting to external networks, if required. A sketch of the resulting cluster network addressing follows this list.

  • Two ATCA switch blades (see Chapter 1 of the Netra High Availability Suite 3.0 1/08 Foundation Services Installation Guide for information about configuring and connecting the switches).

  • Two ATCA shelf manager blades to manage the consoles (see Chapter 1 of the Netra High Availability Suite 3.0 1/08 Foundation Services Installation Guide for information about configuring and connecting the shelf managers).
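On each node, the two redundant links provided by the base or extended fabric are combined by the Foundation Services CGTP (Carrier Grade Transport Protocol) layer into a single logical cluster interface. The sketch below only illustrates the resulting three-address pattern on a Solaris node; the interface names and addresses are hypothetical placeholders, and the corresponding files are generated by the installation tools in a real deployment.

  # Hypothetical per-interface configuration on one Solaris cluster node.
  #
  # /etc/hostname.e1000g0   (first physical cluster link)
  #   10.250.1.10 netmask 255.255.255.0 up
  # /etc/hostname.e1000g1   (second physical cluster link)
  #   10.250.2.10 netmask 255.255.255.0 up
  # /etc/hostname.cgtp0     (CGTP virtual interface used by the cluster services)
  #   10.250.3.10 netmask 255.255.255.0 up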

The following are examples of a four-node cluster based on an ATCA blade server:

  • Four-node cluster with one Netra CT 900 chassis containing four Netra CP3010, four Netra CP3020, or four Netra CP3060 blades.

  • Four-node cluster with one Netra CT 900 chassis containing two Netra CP3010 blades and two Netra CP3060 blades, or two Netra CP3020 blades and two Netra CP3060 blades.

  • Four-node cluster with two Netra CT 900 chassis containing two Netra CP3020 or two Netra CP3060 blades in each chassis (one MEN and one NMEN in each chassis).



Note - If you plan to evaluate a four-node cluster on a system that uses Logical Domains, build your cluster using either an ATCA blade server with two Netra CP3060 blades or two rackmounted Netra T2000 servers (both are based on the UltraSPARC T1 processor). On each Netra CP3060 blade (or each Netra T2000 server), you configure three domains: one control domain and two guest domains. Each domain runs the Solaris OS, and each guest domain is considered a node of the cluster, giving one master-eligible node and one master-ineligible node on each Netra CP3060 blade or Netra T2000 server.
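The domains described in this note are created from the control domain with the standard Logical Domains (LDoms) Manager commands. The following sketch shows roughly how one guest domain might be defined; the domain name, CPU count, memory size, and virtual device names are hypothetical values that must be sized for your application, and the sketch assumes that a virtual switch (primary-vsw0) and a virtual disk service (primary-vds0) already exist in the control domain.

  # Define one guest domain that will act as one cluster node
  # (hypothetical name and resource values).
  ldm add-domain men1-guest
  ldm add-vcpu 8 men1-guest
  ldm add-memory 4G men1-guest
  ldm add-vnet vnet0 primary-vsw0 men1-guest
  ldm add-vdisk vdisk0 men1-vol@primary-vds0 men1-guest
  ldm bind-domain men1-guest
  ldm start-domain men1-guest

  # Repeat for the second guest domain on the same blade or server.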



Choosing Hardware for Real Application Testing

The details provided in Choosing Hardware for Evaluation Purposes also apply to testing real applications, but the size of your cluster must match your performance expectations. This means that you might have to build clusters of up to 64 nodes (the maximum number of nodes currently supported by the Netra HA Suite software).

Consider the following hardware configurations based on the number of nodes you need in a cluster. These suggestions also provide an indication of memory requirements for the Foundation Services.


TABLE 2-1   Suggested Hardware Configurations for a Netra HA Suite Cluster
Cluster Size    Recommended Hardware

2 nodes         Rackmounted servers: Netra 120/240/440/1290/T2000 or Sun Fire equivalents

4–12 nodes      1 or 2 ATCA chassis (Netra CT 900) with up to 12 Netra CP30xx blades as MENs and NMENs
                (Note: Netra CP3020 blades are mandatory if Linux is the preferred OS)

                OR

                2 rackmounted servers as MENs (Netra 120/240/440/1290/T2000), 1 or 2 ATCA chassis (Netra CT 900), and up to 10 Netra CP30xx blades as NMENs (diskless or dataless)
                (Note: Netra CP3020 blades are mandatory if Linux is the preferred OS for the NMENs)

12–48 nodes     2 rackmounted servers as MENs (Netra 240/440/1290/T2000), up to 4 ATCA chassis (Netra CT 900), and up to 46 Netra CP30xx blades as dataless NMENs (maximum of 12 per chassis)
                (Note: Netra CP3020 blades are mandatory if Linux is the preferred OS for the NMENs)

48–64 nodes     2 rackmounted servers as MENs (Netra 440/1290/T2000), up to 6 ATCA chassis (Netra CT 900), and up to 62 Netra CP30xx blades as dataless NMENs (maximum of 12 per chassis)
                (Note: Netra CP3020 blades are mandatory if Linux is the preferred OS for the NMENs)


Sample Memory Usage for an 18-Node Cluster

Each cluster must have two master-eligible nodes. You can have a mix of diskless nodes and dataless nodes in a cluster. For definitions of the types of nodes, see the Netra High Availability Suite 3.0 1/08 Foundation Services Glossary.

In a sample 18-node cluster, the memory footprint of each running daemon is as follows:


TABLE 2-2   Sample Results of Memory Usage for an 18-Node Cluster

Function    Memory Used
DHCP        4.8 MB on the master and vice-master nodes
PROBE       2.6 MB on every node
CMM         4.7 MB on the master and vice-master nodes; 3.4 MB on the remaining nodes
CRFS        3.4 MB on the master and vice-master nodes
SNDR        2.7 MB on the master and vice-master nodes
JVM         50 MB (approximately)

The total memory used for Foundation Services-related daemons is approximately 70 megabytes for the master and vice-master nodes, and 55 megabytes for the remaining nodes.
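You can cross-check these totals against TABLE 2-2 by adding the per-daemon figures (assuming the JVM footprint applies on every node):

  Master and vice-master nodes:  4.8 + 2.6 + 4.7 + 3.4 + 2.7 + 50 ≈ 68 MB (roughly 70 MB)
  Remaining nodes:               2.6 + 3.4 + 50 ≈ 56 MB (roughly 55 MB)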


Choosing Software (OS) for a Netra HA Suite Cluster

In general, the OS that is chosen for a cluster is a strategic decision, made at the corporate level, and not a technical choice.

To get the best use of the Netra HA Suite Foundation Services, run them under the Solaris OS (primarily the Solaris 10 OS), which is supported on all of the hardware referenced in this guide. All of the Foundation Services are available under the Solaris OS.

If you choose to use Linux, you can use only Netra CP3020 blades in an ATCA chassis to run the Netra HA Suite Foundation Services. Also, some services are not available at all under Linux (for example, diskless support). Further, some services have limitations under Linux (for example, IPv6 addresses are not supported on an external network). For information about the limitations that exist under Linux, see the Netra High Availability Suite 3.0 1/08 Release Notes.

If you must use Linux to run your application, a good compromise is to have the two MENs on rackmounted servers running the Solaris 10 OS, with some NMENs (Netra CP3020 blades in an ATCA chassis) running your application under Linux. This configuration enables you to run your application under Linux while benefiting from the Netra HA Suite services, which run under the Solaris OS.


Choosing Hardware and Software for the Installation Server

An installation server is required for all installation methods. It enables you to install the operating system (Solaris or Linux) and the Netra High Availability (HA) Suite software on the cluster.

The installation server requires the following:


Hardware requirements    An UltraSPARC® or i386 Sun platform.

                         Two network devices, as follows:

  • If the installation server is part of the public network, one network device connects the installation server to the public network, and the other connects it to the cluster network.

  • If the installation server is a portable machine, you need only one network device, which connects to the cluster network.

Operating system         Solaris OS or Linux.

                         To install a cluster running the Solaris OS, you must install the Solaris OS on the installation server. The installation server and the cluster do not need to run the same Solaris release. To install a cluster running the supported Linux distribution, you must have the Solaris 9 or 10 OS, or a SuSE 9 distribution, installed on the installation server.

Software requirements    Perl version 5, which is available with the Developer Solaris Software Group.

Disk capacity            A minimum of 1.5 Gbytes for a Solaris software distribution; 4 Gbytes for an eight-node cluster.

Free space               A minimum of 1.5 Gbytes after the Solaris OS has been installed.
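Before starting an installation, you can quickly verify these prerequisites on the installation server with standard Solaris commands. The mount point used below (/export/install) is only a hypothetical example of where the Solaris distribution and cluster software might be stored.

  # Check the Perl version (Perl 5 is required).
  perl -v

  # Check the free space on the file system that will hold the Solaris
  # distribution and the cluster software (hypothetical path).
  df -h /export/install

  # List the network interfaces available for the public and cluster networks.
  ifconfig -a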


Choosing a Development Host

If you are developing applications that you plan to deploy on a cluster running the Foundation Services, you can install a development host. The development host is an optional hardware component. It can be on one (or more) additional servers, or the installation server can be used for the development environment, as well. If you are developing applications using the Cluster Membership Manager (CMM) API or the Service Availability Forum/Cluster Manager (SA Forum/CLM) API, you might require specific software. For more information about CMM, SA Forum/CLM, and the specific software required to develop applications for your cluster, see the Netra High Availability Suite 3.0 1/08 Foundation Services CMM Programming Guide and the Netra High Availability Suite 3.0 1/08 Foundation Services SA Forum Programming Guide.

The development host requires the following:


Hardware requirements    An UltraSPARC® or i386 Sun platform.

                         One network device.

Operating system         Solaris OS or Linux.

Software requirements    Sun™ Studio 10 software.

                         Forte™ Developer 6 software suite (at least Update 1).

                         Java™ 2 Software Development Kit, Standard Edition.

Disk capacity            1.3–2.6 Gbytes, depending on the Solaris OS version in use.

Free space               A minimum of 1.5 Gbytes after the Solaris OS has been installed.