CHAPTER 2
The nhinstall tool enables you to install and configure software and services on the nodes of a cluster, regardless of the type and number of nodes. You install and configure the nhinstall tool on the installation server.
For information about setting up the installation environment and configuring the nhinstall tool, see the following sections:
Before installing the nhinstall tool on an installation server running the Solaris OS, you must prepare the selected OS and the Netra HA Suite software for future installation on the cluster nodes. This involves creating a Solaris, Wind River CGL, or MontaVista Linux CGE distribution on the installation server.
To install the Solaris Operating System on the cluster nodes, create a Solaris distribution on the installation server. If you are installing more than one Solaris distribution on the cluster, perform the steps in the procedure for each Solaris distribution.
Make sure that you have at least 1.8 Gbytes of free disk space for the Solaris 9 OS and 3 Gbytes of free disk space for the Solaris 10 OS on the installation server.
Create a directory for the Solaris distribution:
# mkdir Solaris-distribution-dir
where Solaris-distribution-dir is the directory where the distribution is to be stored on the installation server.
Change to the directory where the setup_install_server command is located:
# cd Solaris-dir/Solaris_x/Tools
Run the setup_install_server command:
# ./setup_install_server Solaris-distribution-dir
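For example, if the Solaris 10 media is mounted at /cdrom/cdrom0/s0 and you chose /export/install/s10 as the distribution directory (both paths are illustrative only, not required by this guide), the sequence might look like this:
# mkdir -p /export/install/s10
# cd /cdrom/cdrom0/s0/Solaris_10/Tools
# ./setup_install_server /export/install/s10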
For more information about the setup_install_server command, see the appropriate documentation:
To install the MontaVista Linux CGE on the cluster, get two CD-ROM images from MontaVista for the Netra HA Suite and install them on the installation server.
Get the Linux Support Package (LSP) CD-ROM image:
lsps-x86-pc_target-x86_amd64--xxxxxxx.iso
This package contains preconfigured kernel binaries, kernel modules, kernel headers, and kernel sources for the Netra™ CP3020 hardware.
Get the Target Distribution Package (TDP) CD-ROM image:
target-x86_amd64-tdp-xxxxxxx.iso
This package contains the file system with prebuilt applications and the MontaVista installer.
Make sure that you have at least 600 Mbytes of free disk space on the installation server.
Mount the target TDP CD-ROM image:
# /usr/sbin/lofiadm -a path_to_cdrom_image/target-x86_amd64-tdp-xxxxxxx.iso
The command returns a device, such as /dev/lofi/1.
Mount the device returned in the preceding step:
# /usr/sbin/mount -F hsfs /dev/lofi/1 path_you_chose_to_mount_the_target_TDP
Copy the mounted directory to a directory that can be exported through NFS:
# /usr/bin/cp -r path_to_the_mounted_target_TDP path_for_copying_the_target_TDP
Mount the LSP CD-ROM image:
# /usr/sbin/lofiadm -a path_to_cdrom_image/lsps-x86-pc_target-x86_amd64--xxxxxxx.iso
The command returns a device, such as /dev/lofi/2.
Mount the device returned in the preceding step:
# /usr/sbin/mount -F hsfs /dev/lofi/2 path_you_chose_to_mount_the_LSP
Copy the mounted directory to a directory that can be exported through NFS:
# /usr/bin/cp -r path_to_the_mounted_LSP path_for_copying_the_LSP
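For illustration only, assuming the two ISO images were downloaded to /var/tmp, that /mnt/tdp and /mnt/lsp are used as temporary mount points, and that /export/mvista/target_tdp and /export/mvista/lsp are the NFS-exportable copy destinations (all of these paths are placeholders, and the /dev/lofi device numbers returned on your system may differ), the complete sequence might look like this:
# mkdir -p /mnt/tdp /mnt/lsp /export/mvista
# /usr/sbin/lofiadm -a /var/tmp/target-x86_amd64-tdp-xxxxxxx.iso
# /usr/sbin/mount -F hsfs /dev/lofi/1 /mnt/tdp
# /usr/bin/cp -r /mnt/tdp /export/mvista/target_tdp
# /usr/sbin/lofiadm -a /var/tmp/lsps-x86-pc_target-x86_amd64--xxxxxxx.iso
# /usr/sbin/mount -F hsfs /dev/lofi/2 /mnt/lsp
# /usr/bin/cp -r /mnt/lsp /export/mvista/lsp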
Modify the MontaVista LSP to use the Netra HA Suite LSP package.
The Netra HA Suite provides a MontaVista package named lsp-x86-pc_target-x86_amd64-2.6.10_mvlcge401-1.2.1.xxxxxxx.x86_amd64.mvl, which contains Linux kernel modules, as well as a MontaVista Linux kernel patch to include the Netra HA Suite Carrier Grade Transport Protocol (CGTP), a reliable IP transport mechanism based on transparent multirouting using redundant routes.
For Netra HA Suite, you must use this package instead of the original MontaVista LSP package by copying it into the LSP distribution as follows.
Install the Netra HA Suite kernel package:
# /usr/bin/rpm2cpio NHAS-software-distribution-dir/Product/NetraHASuite_3.0/FoundationServices/mvlcge40/x86_amd64/Packages/sun-nhas-kernel-source-3.0-6.x86_amd64.rpm | /usr/bin/cpio -id
where NHAS-software-distribution-dir is the directory that contains the Netra HA Suite distribution.
Copy the Netra HA Suite LSP in your MontaVista target distribution to replace the original LSP:
# cp ./usr/src/sun/nhas/LSP/target/lsp-x86-pc_target-x86_amd64-2.6.10_mvlcge401-1.2.1.xxxxxxx.x86_amd64.mvl path_where_you_copied_the_LSP/x86_amd64/lsps/x86-pc_target-x86_amd64/target/
Ensure that the installation server has at least 73 Mbytes of free disk space.
Create a directory for the Wind River CGL distribution:
# mkdir Wind-River-distribution-dir
where Wind-River-distribution-dir is the directory where the distribution is to be stored on the installation server.
Install the Netra HA Suite kernel package:
# /usr/bin/rpm2cpio NHAS-software-distribution-dir/Product/NetraHASuite_3.0/FoundationServices/wrl1_4/i686/Packages/sun-nhas-kernel-source-3.0-*.i686.rpm | /usr/bin/cpio -id
where NHAS-software-distribution-dir is the directory that contains the Netra HA Suite distribution.
Note - The command shown in the preceding example should appear on one line, but wraps in the printed text due to page size limitations.
The following files will be installed under ./usr/src/sun/nhas/distribution:
sun_netra_cp3020-linux-modules-WR1.4aq_cgl-nhas.tar.bz2
System.map-netra_cp3020
bzImage-netra_cp3020
Copy these files to the Wind-River-distribution-dir directory you created in Step 3 for the Wind River CGL distribution:
# cp ./usr/src/sun/nhas/distribution/sun_netra_cp3020-linux-modules-WR1.4aq_cgl-nhas.tar.bz2 Wind-River-distribution-dir
# cp ./usr/src/sun/nhas/distribution/System.map-netra_cp3020 Wind-River-distribution-dir
# cp ./usr/src/sun/nhas/distribution/bzImage-netra_cp3020 Wind-River-distribution-dir
To the Wind-River-distribution-dir directory, copy a compressed tar of the root file system of Wind River Linux CGL 1.4 for Sun Netra CP3020 nodes.
This file is provided by Wind River with the Wind River Platform For Network Equipment Linux Edition 1.4 BSP for Sun CP3020. For more information, refer to http://windriver.com/products/bsp_web/bsp_vendor.html?vendor=Sun
# cp sun_netra_cp3020-dist.tar.bz2 Wind-River-distribution-dir
To the Wind-River-distribution-dir directory, copy a lilo boot loader rpm, as required by the Netra High Availability Suite installer. For example, you can use lilo-22.7-19.x86_64.rpm, or any other lilo version. You can download this lilo rpm file from http://rpmfind.net/
# cp lilo-22.7-19.x86_64.rpm Wind-River-distribution-dir
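Putting these steps together on a Solaris installation server, and assuming illustrative locations only (/dist/WindRiver for the Wind River distribution directory, /var/tmp/NHAS30 for the unpacked Netra HA Suite distribution, and the Wind River root file system archive and lilo RPM downloaded to the current working directory; none of these paths is mandated by this guide), the sequence might resemble the following:
# mkdir -p /dist/WindRiver
# /usr/bin/rpm2cpio /var/tmp/NHAS30/Product/NetraHASuite_3.0/FoundationServices/wrl1_4/i686/Packages/sun-nhas-kernel-source-3.0-*.i686.rpm | /usr/bin/cpio -id
# cp ./usr/src/sun/nhas/distribution/sun_netra_cp3020-linux-modules-WR1.4aq_cgl-nhas.tar.bz2 /dist/WindRiver
# cp ./usr/src/sun/nhas/distribution/System.map-netra_cp3020 /dist/WindRiver
# cp ./usr/src/sun/nhas/distribution/bzImage-netra_cp3020 /dist/WindRiver
# cp sun_netra_cp3020-dist.tar.bz2 lilo-22.7-19.x86_64.rpm /dist/WindRiver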
Before you begin the installation process, make sure that the installation server is configured as described in Chapter 1.
If you are planning to install remotely from another system, open a shell window to connect to the installation server.
Confirm that the Solaris software packages that contain Perl 5.0 are installed on the installation server.
Use the pkginfo command to check for the SUNWpl5u, SUNWpl5p, and SUNWpl5m Perl packages.
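For example (the package names shown assume the standard Solaris Perl 5 packages; verify them against your Solaris release):
# pkginfo SUNWpl5u SUNWpl5p SUNWpl5m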
Delete any entries for your cluster nodes in the following files:
Disable the installation server as a router by creating an /etc/notrouter file:
# touch /etc/notrouter
If a system running the Solaris Operating System has two network interfaces, the system is configured as a router by default. However, for security reasons, a Foundation Services cluster network must not be routed.
Modify the /etc/nsswitch.conf file on the installation server so that files is positioned before nis in the hosts, ethers, and bootparams entries:
hosts: files nis
ethers: files nis
bootparams: files nis
netmasks: files nis
From the installation server, open a terminal window to connect to the console of each cluster node.
You can also connect to the consoles from the system that you use to connect to the installation server.
Before installing the nhinstall tool on an installation server running the Linux SLES9 OS, you must install a MontaVista CGE or Wind River CGL distribution on the installation server.
To install the MontaVista Linux CGE on the cluster, get two CD-ROM images from MontaVista for the Netra HA Suite and install them on the installation server.
Get the following Linux Support Package (LSP) CD-ROM image:
lsps-x86-pc_target-x86_amd64--xxxxxxx.iso
This package contains preconfigured kernel binaries, kernel modules, kernel headers, and kernel sources for the Netra™ CP3020 hardware.
Get the following Target Distribution Package (TDP) CD-ROM image:
target-x86_amd64-tdp-xxxxxxx.iso
This package contains the file system with prebuilt applications and the MontaVista installer.
Make sure that you have at least 600 Mbytes of free disk space on the installation server.
Mount the target TDP CD-ROM image:
# /bin/mount -o ro -o loop -t iso9660 path_to_cdrom_image/target-x86_amd64-tdp-xxxxxxx.iso path_you_chose_to_mount_the_target_TDP
Copy the mounted directory to a directory that can be exported through NFS:
# /bin/cp -r path_to_the_mounted_target_TDP path_for_copying_the_target_TDP
Mount the LSP CD-ROM image:
# /bin/mount -o ro -o loop -t iso9660 path_to_cdrom_image/lsps-x86-pc_target-x86_amd64--xxxxxxx.iso path_you_chose_to_mount_the_LSP
Copy the mounted directory to a directory that can be exported through NFS:
# /bin/cp -r path_to_the_mounted_LSP path_for_copying_the_LSP
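For illustration only, assuming the ISO images were downloaded to /var/tmp and using /mnt/tdp, /mnt/lsp, /export/mvista/target_tdp, and /export/mvista/lsp as placeholder mount points and copy destinations, the sequence on a SLES9 installation server might look like this:
# mkdir -p /mnt/tdp /mnt/lsp /export/mvista
# /bin/mount -o ro -o loop -t iso9660 /var/tmp/target-x86_amd64-tdp-xxxxxxx.iso /mnt/tdp
# /bin/cp -r /mnt/tdp /export/mvista/target_tdp
# /bin/mount -o ro -o loop -t iso9660 /var/tmp/lsps-x86-pc_target-x86_amd64--xxxxxxx.iso /mnt/lsp
# /bin/cp -r /mnt/lsp /export/mvista/lsp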
Modify the MontaVista LSP to use the Netra HA Suite LSP package.
The Netra HA Suite provides a MontaVista package named lsp-x86-pc_target-x86_amd64-2.6.10_mvlcge401-1.2.1.xxxxxxx.x86_amd64.mvl, which contains Linux kernel modules, as well as a MontaVista Linux kernel patch to include the Netra HA Suite CGTP, a reliable IP transport mechanism based on transparent multirouting using redundant routes.
For Netra HA Suite, you must use this package instead of the original MontaVista LSP package by copying it into the LSP distribution as follows.
Copy the Netra HA Suite LSP in your MontaVista target distribution to replace the original LSP:
# cp /usr/src/sun/nhas/LSP/target/lsp-x86-pc_target-x86_amd64-2.6.10_mvlcge401-1.2.1.xxxxxxx.x86_amd64.mvl path_where_you_copied_the_LSP/x86_amd64/lsps/x86-pc_target-x86_amd64/target/
Ensure that the installation server has at least 73 Mbytes of free disk space.
Create a directory for the Wind River CGL distribution:
# mkdir Wind-River-distribution-dir
where Wind-River-distribution-dir is the directory where the distribution is to be stored on the installation server.
Install the Netra HA Suite kernel package:
# rpm -i --nodeps --ignorearch NHAS-software-distribution-dir/Product/NetraHASuite_3.0/FoundationServices/wrl1_4/i686/Packages/sun-nhas-kernel-source-3.0-*.i686.rpm
where NHAS-software-distribution-dir is the directory that contains the Netra HA Suite distribution.
Note - The command shown in the preceding example should appear on one line, but wraps in the printed text due to page size limitations.
The following files will then be installed under /usr/src/sun/nhas/distribution:
sun_netra_cp3020-linux-modules-WR1.4aq_cgl-nhas.tar.bz2
System.map-netra_cp3020
bzImage-netra_cp3020
Copy these files to the Wind-River-distribution-dir directory you created in Step 3 for the Wind River CGL distribution:
# cp /usr/src/sun/nhas/distribution/sun_netra_cp3020-linux-modules-WR1.4aq_cgl-nhas.tar.bz2 Wind-River-distribution-dir
# cp /usr/src/sun/nhas/distribution/System.map-netra_cp3020 Wind-River-distribution-dir
# cp /usr/src/sun/nhas/distribution/bzImage-netra_cp3020 Wind-River-distribution-dir
To the Wind-River-distribution-dir directory, copy a compressed tar of the root file system of Wind River Linux CGL 1.4 for Sun Netra CP3020 nodes.
This file is provided by Wind River with the Wind River Platform For Network Equipment Linux Edition 1.4 BSP for Sun CP3020. For more information, refer to: http://windriver.com/products/bsp_web/bsp_vendor.html?vendor=Sun
# cp sun_netra_cp3020-dist.tar.bz2 Wind-River-distribution-dir
To the Wind-River-distribution-dir directory, copy a lilo boot loader rpm (lilo-22.7-19.x86_64.rpm) required by the Netra High Availability Suite installer.
You can download this lilo rpm file from http://rpmfind.net/
# cp lilo-22.7-19.x86_64.rpm Wind-River-distribution-dir
Before you begin the installation process on a SUSE SLES9 installation server, make sure that the server is configured as described in Chapter 1.
If you are planning to install remotely from another system, open a shell window to connect to the installation server.
Confirm that a Perl 5 RPM package is installed on the installation server.
Use the rpm -qa perl command to confirm that Perl is installed.
Confirm that the ISC DHCP server RPM package is installed on the installation server.
Use the command rpm -qa dhcp-server to confirm that the DHCP server is installed.
Confirm that the TFTP RPM package is installed on the installation server.
Use the command rpm -qa tftp to confirm that tftp is installed.
Enable the TFTP service and restart xinetd:
# /sbin/chkconfig tftp on
# /etc/init.d/xinetd restart
Start the NFS server as follows:
# /usr/sbin/rcnfsserver restart
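As an optional check (not part of the documented procedure), you can verify that the NFS server is running and list any directories it currently exports:
# /usr/sbin/showmount -e localhost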
Delete any entries for your cluster nodes in the following files:
Modify the /etc/nsswitch.conf file on the installation server so that files is positioned before nis in the hosts, ethers, and bootparams entries:
hosts: files nis
ethers: files nis
bootparams: files nis
netmasks: files nis
From the installation server, open a terminal window to connect to the console of each cluster node.
You can also connect to the consoles from the system that you use to connect to the installation server.
Install the package containing the nhinstall tool on the installation server, as described in the following procedure.
Install the nhinstall package, SUNWnhas-installer:
# pkgadd -d /software-distribution-dir/Product/NetraHASuite_3.0/FoundationServices/Solaris_x/arch/Packages/SUNWnhas-installer
where software-distribution-dir is the directory that contains Netra HA Suite packages, x is 9 or 10 depending on the version of the Solaris OS in use on the installation server, and where arch is sparc or x64, depending on the installation server architecture.
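For example, on a SPARC installation server running the Solaris 10 OS, with the distribution unpacked under the hypothetical directory /var/tmp/NHAS30, the command would be similar to the following:
# pkgadd -d /var/tmp/NHAS30/Product/NetraHASuite_3.0/FoundationServices/Solaris_10/sparc/Packages/SUNWnhas-installer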
On an installation server running the Linux SLES9 OS, install the sun-nhas-installer RPM package instead:
# rpm -i /software-distribution-dir/Product/NetraHASuite_3.0/FoundationServices/SLES9/arch/Packages/sun-nhas-installer-3.0-24.arch.rpm
where software-distribution-dir is the directory that contains Netra HA Suite packages, and where arch is i686 or x86_64, depending on the installation server architecture.
To access the man pages on the installation server, install the man page package, SUNWnhas-manpages:
# pkgadd -d /software-distribution-dir/Product/NetraHASuite_3.0/FoundationServices/Solaris_x/arch/Packages/SUNWnhas-manpages
where software-distribution-dir is the directory that contains the Netra HA Suite packages, x is 9 or 10 depending on the version of the Solaris OS in use on the installation server, and where arch is sparc or x64 depending on the installation server architecture.
On an installation server running the Linux SLES9 OS, install the sun-nhas-manpages RPM package instead:
# rpm -i /software-distribution-dir/Product/NetraHASuite_3.0/FoundationServices/SLES9/arch/Packages/sun-nhas-manpages-3.0-24.arch.rpm
where software-distribution-dir is the directory that contains the Netra HA Suite packages and where arch is i686 or x86_64 depending on the installation server architecture.
Modify the shell variable MANPATH to include the path /opt/SUNWcgha/man.
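For example, in a Bourne or Korn shell you could add the following to your shell startup file (C shell users would use setenv MANPATH instead):
MANPATH=$MANPATH:/opt/SUNWcgha/man
export MANPATH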
Check SunSolve to download any nhinstall patches for this release.
If there are patches, see the associated Readme file for installation directions.
After you have installed the package containing the nhinstall tool, configure the nhinstall tool to install the Foundation Services on your cluster. To configure the nhinstall tool, modify the following configuration files:
Use the env_installation.conf configuration file to define the IP address of the installation server and the locations of the software distributions for the operating system and the Foundation Services.
You must modify this configuration file. For details on each available option, see the env_installation.conf(4) man page.
Use the cluster_definition.conf configuration file to define the nodes, disks, and options in your cluster configuration. You must modify this configuration file. For details on each available option, see the cluster_definition.conf(4) man page.
Use this configuration file to specify additional packages and patches that you want to install during the installation process. You must configure your addon.conf file with packages specific to your hardware. For help with your specific configuration, contact your Foundation Services representative. This file is optional. If this file is not configured, the nhinstall tool does not install any additional patches or packages. For more information, see the addon.conf(4) man page.
Use the nodeprof.conf configuration file if you want to specify the set of Solaris packages to be installed on the cluster. The default package set is defined in the nodeprof.conf.template file. For more information, see the nodeprof.conf(4) man page.
If you do not create this file, the same set of Solaris packages is installed on the master-eligible and dataless nodes. Create the dataless_nodeprof.conf file, if you want to customize the Solaris installation on the dataless nodes. For more information, see the dataless_nodeprof.conf(4) man page.
If you do not create this file, the same set of Solaris packages is installed on the master-eligible and diskless nodes. Create the diskless_nodeprof.conf file, if you want to customize the Solaris installation on the diskless nodes. For more information, see the diskless_nodeprof.conf(4) man page.
The following sections describe in detail the main configuration options of the nhinstall tool:
Configuring the Environment for Diskless Nodes on the Solaris OS
Configuring the Boot Policy for Diskless Nodes on the Solaris OS
Configuring DHCP Configuration Files Locally on Solaris OS Master-Eligible Nodes
Configuring the Floating External Address of the Master Node
Sharing Physical Interfaces Between CGTP and IPMP Using VLAN
Installing a Different Version of the Operating System on Diskless and Dataless Nodes
Configuring Reduced Duration of Disk Synchronization on the Solaris OS
Installing the Node Management Agent (NMA) on the Solaris OS
Installing the SA Forum Cluster Membership API (SA Forum/CLM)
If you are using AMD64-based hardware or SPARC-based sun4v hardware, use the HARDWARE parameter to specify the type of node. Specifying this information is not required for SPARC-based sun4u hardware.
For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite 3.0 1/08 Foundation Services Overview.
Use the SLICE (for both Linux and the Solaris OS) or SHARED_SLICE (for the Solaris OS only) parameters to specify the disk partitions on the master-eligible nodes.
If you plan to use Netra High Availability Suite for replicating NFS-served data over IP, use the SLICE parameter for all partitions.
On the Solaris OS, it is also possible to locate NFS-served data on shared disks. If you plan to do so, use the SHARED_SLICE parameter for the partition storing this data and use SLICE for the local partitions (the root file system, for example).
TABLE 2-1 through TABLE 2-3 list the space requirements on the Solaris OS for sample disk partitions of master-eligible nodes in a cluster with diskless nodes, either with IP-replicated data or with a shared disk. TABLE 2-4 lists the space requirements on Linux for sample disk partitions of master-eligible nodes. TABLE 2-5 lists the space requirements for example disk partitions of dataless nodes.
Disk Partition | File System Name | Description | Example Size |
---|---|---|---|
0 | / | The root file system, boot partition, and volume management software. This partition must be mounted with the logging option. | 2 Gbytes minimum |
1 | swap | Minimum size when physical memory is less than 1 Gbyte. | 1 Gbyte |
3 | /export | Exported file system reserved for diskless nodes. The /export file system must be mounted with the logging option. This partition is further sliced if diskless nodes are added to the cluster. | 2.8 Gbyte + 160 Mbytes per diskless node |
4 | /SUNWcgha/local | This partition is reserved for NFS status files, services, and configuration files. The /SUNWcgha/local file system must be mounted with the logging option. | 2 Gbytes |
5 | Reserved for Reliable NFS internal use | Bitmap partition reserved for nhcrfsd. This volume is associated with the /export file system. | 1 Mbyte |
6 | Reserved for Reliable NFS internal use | Bitmap partition reserved for nhcrfsd. This partition is associated with the /SUNWcgha/local file system. | 1 Mbyte |
7 | replica | If you have configured volume management, this partition must be named replica. This partition is mounted with the logging option. See Configuring Volume Management. | The remaining space |
Note - Partition 2 is reserved for overlapping the entire disk. It is automatically created and must not be defined.
Configure the SLICE parameter in the cluster_definition.conf file to specify the disk partitions on the dataless nodes.
TABLE 2-5 lists the space requirements for example disk partitions of dataless nodes.
Note - Partition 2 is reserved for overlapping the entire disk. It is automatically created and must not be defined.
Configure the MIRROR parameter to mirror a shared disk to another shared disk on the Solaris OS.
On the Solaris OS, to prevent simultaneous access to the shared data in case of split-brain, SCSI disk reservation is used. The SCSI version is configured by the SHARED_DISK_FENCING parameter. It can be set to SCSI2 or SCSI3.
On the Solaris OS, you can configure the nhinstall tool to store the scoreboard bitmaps of IP-replicated partitions either in memory or on the disk.
If the BITMAP_IN_MEMORY parameter is set to YES in the cluster_definition.conf file, the bitmaps are configured to be stored in memory. When the master node is shut down gracefully, the scoreboard bitmap is saved on the disk.
If the BITMAP_IN_MEMORY parameter is set to NO, the bitmaps are configured to be written on the disk at each update.
You can configure the nhinstall tool to use the NFS option noac for the directories that are mounted remotely. The noac option suppresses data and attribute caching.
You can configure the nhinstall tool to set up a direct link between the master-eligible nodes by using the serial port on each master-eligible node. Make sure that you have connected the serial ports with a cable before configuring the direct link. This connection prevents a split brain situation, where there are two master nodes in the cluster because the network between the master node and the vice-master node fails. For an illustration of the connection between the master-eligible nodes, see the Netra High Availability Suite 3.0 1/08 Foundation Services Getting Started Guide.
The DIRECT_LINK parameter in the cluster_definition.conf file enables you to define the serial device on each master-eligible node, the speed of the serial line, and the heartbeat (in seconds) checking the link between the two nodes. For example:
DIRECT_LINK=/dev/ttyb 115200 20
You can configure the nhinstall tool to reboot the master-eligible nodes automatically during the installation.
If the AUTO_REBOOT parameter is set to YES in the env_installation.conf file, you are prompted to boot the master-eligible nodes the first time only. After the first boot, the master-eligible nodes are automatically rebooted by the nhinstall tool.
If AUTO_REBOOT is set to NO, the nhinstall tool prompts you to reboot the master-eligible nodes at different stages of the installation. This process requires you to move between console windows to perform tasks directly on the nodes.
You can configure the nhinstall tool to install and configure the Carrier Grade Transport Protocol (CGTP).
If the USE_CGTP parameter is set to YES in the cluster_definition.conf file, the nhinstall tool installs CGTP.
If the USE_CGTP parameter is set to NO, nhinstall does not install the CGTP packages and patches. In this case, your cluster is configured with a single network interface. You do not have a redundant cluster network. For information about the advantages of redundant network interfaces, see the Netra High Availability Suite 3.0 1/08 Foundation Services Overview.
If you define diskless nodes with the NODE or DISKLESS parameters in the cluster_definition.conf file, the nhinstall tool installs the Solaris services for the diskless nodes. The tool also configures the boot options for each diskless node on the master-eligible nodes.
If you do not define any diskless nodes in the cluster_definition.conf file, the nhinstall tool does not install the Solaris services for diskless nodes. If you plan to add diskless nodes to the cluster at a later date, set the INSTALL_DISKLESS_ENV parameter in the cluster_definition.conf file to specify on which platform you want nhinstall to set up the Solaris services for diskless nodes.
If you do not set this parameter, the nhinstall tool does not install the Solaris services for diskless nodes on master-eligible nodes. In this case, you cannot use nhinstall to add diskless nodes to the cluster at a later date without reinstalling the software. Therefore, try to include possible future nodes in your cluster configuration.
You can configure the nhinstall tool to have the diskless nodes in the cluster boot statically or by using the node's client ID. The DISKLESS_BOOT_POLICY parameter in the cluster_definition.conf configuration file enables you to choose a boot policy for the diskless nodes in your cluster. All diskless nodes in a cluster are configured with the same boot policy.
The following table summarizes the boot policies supported by the nhinstall tool.
For further information about the boot policies for diskless nodes, see the Netra High Availability Suite 3.0 1/08 Foundation Services Overview and the Netra High Availability Suite 3.0 1/08 Foundation Services Manual Installation Guide for the Solaris OS.
By default, nhinstall configures diskless nodes so that the DHCP configuration files are stored in the highly available directory /SUNWcgha/remote/var/dhcp on the master-eligible nodes. You can configure the cluster to put the DHCP configuration files in a local directory, /var/dhcp, on the master-eligible nodes by adding the following line to the cluster_definition.conf file.
REPLICATED_DHCP_FILES=NO
When you install with nhinstall and with this feature enabled, nhinstall copies the DHCP configuration files from the master to the vice-master node.
If you enable this feature, each time you update the DHCP configuration files on the master after initial cluster installation, you must copy these files to the vice-master node. For more information, see the cluster_definition.conf(4) and nhadm(1M) man pages.
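For example, assuming the vice-master node is reachable under the placeholder host name vice-master-node and that remote copy between the cluster nodes is permitted, you could propagate an updated configuration with a command similar to the following (any equivalent file transfer method also works):
# scp -p /var/dhcp/* vice-master-node:/var/dhcp/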
By default, nhinstall configures the installation server to be the default router to the public network. To choose another machine as the router to the public network, specify the IP address of the default router of your choice in the cluster_definition.conf file as follows:
DEFAULT_ROUTER_IP=IP address
For more information, see the cluster_definition.conf(4) man page.
You can configure IPv4 addresses of any class for the nodes of your cluster by using the nhinstall tool. The CLUSTER_NETWORK parameter enables you to specify the netmask and the subnets for the NIC0, NIC1, and cgtp0 interfaces of your nodes. For example, to define Class B IP addresses for the nodes, the CLUSTER_NETWORK parameter is defined as follows:
CLUSTER_NETWORK=255.255.0.0 192.168.0.0 192.169.0.0 192.170.0.0
You can configure the nhinstall tool to set a floating external address. A floating external address is an external IP address that is assigned to the master role rather than to a specific node. This IP address enables you to connect to the current master node from systems outside the cluster network.
As an option, IPMP (IP Multipathing) on the Solaris OS or bonding on Linux can be used to support a floating external address on dual redundant links.
The EXTERNAL_MASTER_ADDRESS parameter controls an external floating address that is not managed by IPMP or bonding. It replaces the former EXTERNAL_ACCESS directive, which is now obsolete.
EXTERNAL_IPMP_MASTER_ADDRESS on the Solaris OS controls an external floating address managed by IPMP.
EXTERNAL_BONDING_MASTER_ADDRESS on Linux controls an external floating address managed by the bonding driver.
If you specify an IP address and a network interface for the external address parameter in the cluster_definition.conf file, the floating external address is configured. The External Address Manager daemon, nheamd, which monitors floating addresses and IPMP groups or bonding interfaces on the master-eligible nodes, is also installed. This daemon ensures that the external IP address is always assigned to the current master node. For more information, see the nheamd(1M) man page.
If you do not configure the external address parameter in the cluster_definition.conf configuration file, the floating external address is not created. Therefore, the master node cannot be accessed by systems outside the cluster network.
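As an illustrative sketch only, and not the definitive syntax (see the cluster_definition.conf(4) man page for the exact format), an entry that associates a floating address on the public subnet used elsewhere in this chapter with an external network interface might look like this:
EXTERNAL_MASTER_ADDRESS=192.168.12.100 hme1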
You can configure the nhinstall tool to set external IP addresses on network interfaces to a public network. Then, the nodes can be accessed from systems outside the cluster network.
Set the PUBLIC_NETWORK parameter in the cluster_definition.conf file specifying the subnet and netmask for the subnet.
If the installation server must be configured to use this public network for installing the cluster nodes, the SERVER_IP parameter must also be defined in env_installation.conf to specify an IP address for the installation server on the same subnetwork as defined for PUBLIC_NETWORK.
If SERVER_IP is not defined in env_installation.conf, the installation server is configured to use the private network for installing the cluster nodes, and the public network is configured on the cluster nodes only, not on the installation server.
For more information about SERVER_IP, refer to the env_installation.conf(4) man page.
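As an illustrative sketch only (the exact format is defined in the cluster_definition.conf(4) and env_installation.conf(4) man pages), a public subnet consistent with the example node addresses shown below could be declared with PUBLIC_NETWORK in cluster_definition.conf, and the installation server given an address on that subnet with SERVER_IP in env_installation.conf:
PUBLIC_NETWORK=192.168.12.0 255.255.255.0
SERVER_IP=192.168.12.1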
Specify the external IP address, external node name, and the external network interface for each NODE definition. For example:
MEN=10 08:00:20:f9:c5:54 - - - - FSNode1 192.168.12.5 hme1
MEN=20 08:00:20:f9:a8:12 - - - - FSNode2 192.168.12.6 hme1
Physical links can be shared between CGTP and IPMP only when CGTP is used over a VLAN. Before using this configuration, refer to the detailed information about Solaris VLANs and IPMP in the Solaris System Administration Guide: IP Services. Not all network interfaces support VLANs, so check that your interfaces support this use. Solaris shows VLAN interfaces as separate physical interfaces, even though there is only one physical interface. Because VLANs are configured by using special names for the interfaces, you must define the topology and the interface names for that topology. Keep the following points in mind when defining your topology:
Be careful not to set the booting interface on a VLAN. Installation is impossible unless the installation server and boot server are both configured to be part of the VLAN.
Do not set the IPMP interfaces on a VLAN unless all other interfaces on all nodes in the group can belong to the same VLAN (including the clients).
CGTP can be configured with both links on a VLAN, or with only one.
The VLANs on the switches must be configured before starting the installation.
It is important to have a third node (the client, for example, or a router) with an address in the same subnetwork as the IPMP test addresses, to act as a reference. Several reference nodes should be available in order to avoid SPOFs.
For example, consider the three-node cluster shown in FIGURE 2-1. Three ce NICs are on each MEN: on both MENs, ce0 is connected to switch 1, ce1 to switch 2, and ce2 to switch 3. The external router, to which clients connect, is connected to switches 2 and 3, which restricts external access to ce1 and ce2. CGTP can be used on any two NICs; in this case, ce0 and ce1 were chosen, making ce1 a shared interface.
The VLAN is created with VID 123 over the interface ce1 by plumbing an interface called ce123001. In this example, ce0 and ce123001 will be used for CGTP, and ce1 and ce2 for IPMP. Create the tagged VLAN on SW2 (for information on how to create a VLAN, refer to your switch’s documentation), create a cluster_definition.conf file respecting these interfaces, and launch the installation as for any other case.
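For reference, the Solaris VLAN interface instance is derived as VID x 1000 + physical instance, so VID 123 on ce1 yields ce123001. If you want to confirm manually that the driver accepts a VLAN interface before running the installation, commands similar to the following can be used (this is only a check; nhinstall performs the actual network configuration):
# ifconfig ce123001 plumb
# ifconfig ce123001 unplumb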
The volume management feature enables you to do the following:
On the Solaris OS, the volume management software that is installed depends on the version of the Solaris OS that you plan to install. For information on supported software versions, see the Netra High Availability Suite 3.0 1/08 Release Notes.
If the two servers do not have the same disk configuration (for example, if they have a different number of disks, or if the disks are numbered differently on the bus), you must install the Volume Management feature of the OS you are using. For more information, see the Netra High Availability Suite 3.0 1/08 Foundation Services Getting Started Guide.
To install the Volume Management software on the nodes of your cluster, perform one of the following procedures:
You can use the nhinstall tool to install and configure volume management to use soft partitions. The use of volume management is mandatory for servers with different disk configurations (for example, servers that have a different number of disks, or servers that use FC-AL disks). Such differences can result in different minor device numbers on the two servers, preventing Reliable NFS from performing a failover. An NFS file handle contains the minor device number of the disk supporting a file, and it must be the same on both servers. Using volume management hides the disk numbering and ensures that files that are duplicated on both servers have the same NFS file handle.
Configure the nhinstall tool to support logical disk partitions by installing the volume management feature as follows:
Configure the cluster_definition.conf file:
For a detailed example, see the cluster_definition.conf(4) man page.
Run the nhinstall tool to install the operating system and Foundation Services on the master-eligible nodes.
For more information, see To Launch the nhinstall Tool.
The nhinstall tool installs and configures the appropriate volume management software depending on the version of the operating system you chose to install.
To configure advanced volume management, install the operating system and configure the volume management feature to suit your needs. Then configure nhinstall to install only the Foundation Services.
Install the operating system with volume management on the master-eligible nodes.
For more information, see the documentation for your volume management software:
For Solaris 9 or Solaris 10, Solaris Volume Manager Administration Guide
This documentation is available at http://www.oracle.com/technetwork/indexes/documentation/index.html.
For MontaVista CGE 4.0, see the MontaVista documentation at http://support.mvista.com
For Wind River CGL 1.4, see the man pages or generic LVM2 configuration “how tos,” such as those at: http://www.tldp.org/HOWTO/LVM-HOWTO
Note - Install the same packages of the same version of the operating system on both master-eligible nodes. Create identical disk partitions on the disks of both master-eligible nodes.
Configure a physical Ethernet card interface that corresponds to the first network interface, NIC0.
Configure the sizes of the disk partitions.
For more information, see TABLE 2-1 for the Solaris OS and TABLE 2-4 for Linux.
In the env_installation.conf file, set OS_INSTALL to DISKLESS_DATALESS_ONLY.
The operating system is configured on the dataless nodes and, on the Solaris OS, services are configured for the diskless environment.
Run the nhinstall tool to install the Foundation Services on the master-eligible nodes.
For more information, see To Launch the nhinstall Tool.
To install a Solaris package set on cluster nodes other than the default package set, specify the Solaris package set to be installed. For a list of the contents of the default package set, see the /opt/SUNWcgha/templates/nhinstall/nodeprof.conf.template file. For information about installing a Solaris package set on cluster nodes, see the nodeprof.conf(4) man page. For information about installing a customized Solaris package set on the diskless nodes, see the diskless_nodeprof.conf(4) man page. For information about installing a customized Solaris package set on the dataless nodes, see the dataless_nodeprof.conf(4) man page.
To install a version of the Solaris OS on diskless nodes that is different from the version you are installing on master-eligible nodes, specify the location of the two Solaris distributions in the env_installation.conf file. For example:
SOLARIS_DIR=/export/s10
DISKLESS_SOLARIS_DIR=/export/s9u8
To install a version of the Solaris OS on dataless nodes that is different from the versions you are installing on master-eligible nodes, specify the location of the two Solaris distributions in the env_installation.conf file. For example:
SOLARIS_DIR=/export/s10
DATALESS_SOLARIS_DIR=/export/s9u8
By default, the values provided to the DISKLESS_SOLARIS_DIR and DATALESS_SOLARIS_DIR parameters are set to be the same as that provided to the SOLARIS_DIR parameter. For more information, see the env_installation.conf(4) man page.
To install the Solaris OS on master-eligible nodes and install the MontaVista CGE Linux Operating System on dataless nodes, specify the location of the Solaris distribution, the MontaVista target distribution, and the MontaVista LSP distribution in the env_installation.conf file using the parameters SOLARIS_DIR, DATALESS_MVISTA_TARGET_DIR, and DATALESS_MVISTA_LSP_DIR. For example:
SOLARIS_DIR=/export/s10
DATALESS_MVISTA_TARGET_DIR=/export/mvista/target_tdp
DATALESS_MVISTA_LSP_DIR=/export/mvista/lsp
To install Wind River CGL on master-eligible nodes and the Solaris OS on dataless nodes, specify the location of the Wind River CGL, the directory where a root NFS file system will be created for each type of platform, and the path to the Solaris distribution in the env_installation.conf file using the parameters WINDRIVER_IMAGES_DIR, WINDRIVER_ROOTNFS_DIR, and DATALESS_SOLARIS_DIR. For example:
WINDRIVER_IMAGES_DIR=/dist/WindRiver
WINDRIVER_ROOTNFS_DIR=/export/root/WindRiver
DATALESS_SOLARIS_DIR=/export/s10
There are three data management policies available with the Foundation Services. By default, the nhinstall tool sets the data management policy to be Integrity for data replication over IP, and Availability when using shared disks. To choose another policy, change the value of the following variable in the cluster_definition.conf file.
DATA_MGT_POLICY=INTEGRITY | AVAILABILITY | ADAPTABILITY
For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite 3.0 1/08 Foundation Services Overview.
By default, diskless and dataless nodes reboot if there is no master in the cluster. If you do not want the diskless and dataless nodes to reboot in this situation, add the following line to the cluster_definition.conf file:
MASTER_LOSS_DETECTION=YES
For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite 3.0 1/08 Foundation Services Overview.
By default, nhinstall enables this feature. It reduces the time taken for full synchronization between the master and the vice-master disks by synchronizing only the blocks that contain replicated data.
Note - Only use this feature on the Solaris OS with UFS file systems.
To disable this feature and have all blocks replicated, add the following line to the cluster_definition.conf file:
SLICE_SYNC_TYPE=RAW
For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite 3.0 1/08 Foundation Services Overview.
To activate the sanity check of replicated slices, add the following line to the cluster_definition.conf file:
CHECK_REPLICATED_SLICES=YES
By default, the nhinstall tool does not activate this feature. For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite 3.0 1/08 Foundation Services Overview.
By default, disk synchronization starts automatically when the cluster software is installed. If you want to delay the start of disk synchronization, add the following line to the cluster_definition.conf file:
SYNC_FLAG=NO
You can trigger disk synchronization at a time of your choice using the nhenablesync tool. For more information, see the cluster_definition.conf(4) and nhenablesync(1M) man pages and the Netra High Availability Suite 3.0 1/08 Foundation Services Overview.
By default, nhinstall configures the cluster so that slices are synchronized in parallel. Synchronizing slices one slice at a time reduces the network and disk overhead but increases the time it takes for the vice-master to synchronize with the master. During this time, the vice-master is not eligible to take on the role of master. To enable serialized slice synchronization, add the following line to the cluster_definition.conf file:
SERIALIZE_SYNC=YES
For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite 3.0 1/08 Foundation Services Overview.
By default, the Node Management Agent is installed on systems running the Solaris OS. This feature is not yet supported on Linux. Set the INSTALL_NMA parameter to NO to avoid installing this agent on systems running the Solaris OS.
By default, the Node State Manager is not installed. Set the INSTALL_NSM parameter to YES to install the NSM.
By default, the SA Forum/CLM API is not installed. Set the INSTALL_SAFCLM parameter to YES to install the SA Forum/CLM API.
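For example, to skip the NMA while installing both the NSM and the SA Forum/CLM API, the corresponding parameter settings would be as follows (set them in the nhinstall configuration file described in the relevant man page):
INSTALL_NMA=NO
INSTALL_NSM=YES
INSTALL_SAFCLM=YES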
Copyright © 2008, Sun Microsystems, Inc. All rights reserved.