C H A P T E R  2

Installing and Configuring the nhinstall Tool

The nhinstall tool enables you to install and configure software and services on the nodes of a cluster, regardless of the type and number of nodes. You install and configure the nhinstall tool on the installation server.

For information about setting up the installation environment and configuring the nhinstall tool, see the sections that follow.


Preparing the Installation Environment on a Solaris OS Installation Server

Before installing the nhinstall tool on an installation server running the Solaris OS, you must prepare the selected OS and the Netra HA Suite software for later installation on the cluster nodes. This involves creating a Solaris, Wind River CGL, or MontaVista CGE distribution on the installation server.

procedure icon  To Create a Solaris Distribution on the Installation Server

To install the Solaris Operating System on the cluster nodes, create a Solaris distribution on the installation server. If you are installing more than one Solaris distribution on the cluster, perform the steps in the procedure for each Solaris distribution.

  1. Make sure that you have at least 1.8 Gbytes of free disk space for the Solaris 9 OS and 3 Gbytes of free disk space for the Solaris 10 OS on the installation server.

  2. Log in as superuser on the installation server.

  3. Create a directory for the Solaris distribution:


    # mkdir Solaris-distribution-dir
    

    where Solaris-distribution-dir is the directory where the distribution is to be stored on the installation server.

  4. Change to the directory where the setup_install_server command is located:


    # cd Solaris-dir/Solaris_x/Tools
    

    • Solaris-dir is the directory that contains the Solaris installation software. This directory could be on a CD-ROM or in an NFS-shared directory.

    • x is 9 or 10 depending on the Solaris version you want to install.

  5. Run the setup_install_server command:


    # ./setup_install_server Solaris-distribution-dir
    

    For more information about the setup_install_server command, see the appropriate documentation:

    • Solaris 9 Installation Guide and the setup_install_server(1M) man page

    • Solaris 10 Release and Installation Collection and the setup_install_server(1M) man page
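
    For example, with hypothetical paths (the Solaris 10 media available under /cdrom/cdrom0 and /export/install/s10 chosen as the distribution directory), steps 3 through 5 look like this:


    # mkdir -p /export/install/s10
    # cd /cdrom/cdrom0/Solaris_10/Tools
    # ./setup_install_server /export/install/s10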

procedure icon  To Create a MontaVista Distribution on the Installation Server

To install the MontaVista Linux CGE on the cluster, get two CD-ROM images from MontaVista for the Netra HA Suite and install them on the installation server.

  1. Get the Linux Support Package (LSP) CD-ROM image:

    lsps-x86-pc_target-x86_amd64--xxxxxxx.iso

    This package contains preconfigured kernel binaries, kernel modules, kernel headers, and kernel sources for the Netra™ CP3020 hardware.

  2. Get the Target Distribution Package (TDP) CD-ROM image:

    target-x86_amd64-tdp-xxxxxxx.iso

    This package contains the file system with prebuilt applications and the MontaVista installer.

  3. Make sure that you have at least 600 Mbytes of free disk space on the installation server.

  4. Log in as superuser on the installation server.

  5. Attach the target TDP CD-ROM image to a loopback device:


    # /usr/sbin/lofiadm -a path_to_cdrom_image/target-x86_amd64-tdp-xxxxxxx.iso 
    

    The command returns a device name, for example /dev/lofi/1.

  6. Mount the device returned in the preceding step:


    # /usr/sbin/mount -F hsfs /dev/lofi/1 path_you_chose_to_mount_the_target_TDP 
    

  7. Copy the mounted directory to a directory that can be exported through NFS:


    # /usr/bin/cp -r path_to_the_mounted_target_TDP path_for_copying_the_target_TDP
    

  8. Attach the LSP CD-ROM image to a loopback device:


    # /usr/sbin/lofiadm -a path_to_cdrom_image/lsps-x86-pc_target-x86_amd64-xxxxxxx.iso
    

    The command returns a device name, for example /dev/lofi/2.

  9. Mount the device returned in the preceding step:


    # /usr/sbin/mount -F hsfs /dev/lofi/2 path_you_chose_to_mount_the_LSP 
    

  10. Copy the mounted directory to a directory that can be exported through NFS:


    # /usr/bin/cp -r path_to_the_mounted_LSP path_for_copying_the_LSP
    

  11. Modify the MontaVista LSP to use the Netra HA Suite LSP package.

    The Netra HA Suite provides a MontaVista package named lsp-x86-pc_target-x86_amd64-2.6.10_mvlcge401-1.2.1.xxxxxxx.x86_amd64.mvl, which contains Linux kernel modules, as well as a MontaVista Linux kernel patch to include the Netra HA Suite Carrier Grade Transport Protocol (CGTP), a reliable IP transport mechanism based on transparent multirouting using redundant routes.

    For Netra HA Suite, you must use this package instead of the original MontaVista LSP package by copying it into the LSP distribution as follows.

    1. Install the Netra HA Suite kernel package:


      # /usr/bin/rpm2cpio NHAS-software-distribution-dir/Product/NetraHASuite_3.0/FoundationServices/mvlcge40/x86_amd64/Packages/sun-nhas-kernel-source-3.0-6.x86_amd64.rpm | /usr/bin/cpio -id
      

      where NHAS-software-distribution-dir is the directory that contains the Netra HA Suite distribution.

    2. Copy the Netra HA Suite LSP into your MontaVista target distribution to replace the original LSP:


      # cp ./usr/src/sun/nhas/LSP/target/lsp-x86-pc_target-x86_amd64-2.6.10_mvlcge401-1.2.1.xxxxxxx.x86_amd64.mvl path_where_you_copied_the_LSP/x86_amd64/lsps/x86-pc_target-x86_amd64/target/
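
    For reference, here is a hypothetical end-to-end run of Step 5 through Step 10; the ISO locations, mount points, and target directories are placeholders (the target directories match the /export/mvista examples used later in this chapter for DATALESS_MVISTA_TARGET_DIR and DATALESS_MVISTA_LSP_DIR):


    # mkdir -p /mnt/tdp /mnt/lsp /export/mvista
    # /usr/sbin/lofiadm -a /var/tmp/target-x86_amd64-tdp-xxxxxxx.iso
    /dev/lofi/1
    # /usr/sbin/mount -F hsfs /dev/lofi/1 /mnt/tdp
    # /usr/bin/cp -r /mnt/tdp /export/mvista/target_tdp
    # /usr/sbin/lofiadm -a /var/tmp/lsps-x86-pc_target-x86_amd64-xxxxxxx.iso
    /dev/lofi/2
    # /usr/sbin/mount -F hsfs /dev/lofi/2 /mnt/lsp
    # /usr/bin/cp -r /mnt/lsp /export/mvista/lsp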
      

procedure icon  To Create a Wind River CGL Distribution on a Solaris Installation Server

To create a Wind River CGL distribution on the installation server, follow these steps:

  1. Ensure that the installation server has at least 73 Mbytes of free disk space.

  2. Log in as superuser on the installation server.

  3. Create a directory for the Wind River CGL distribution:


     # mkdir Wind-River-distribution-dir
    

    where Wind-River-distribution-dir is the directory where the distribution is to be stored on the installation server.

  4. Install the Netra HA Suite kernel package:


    # /usr/bin/rpm2cpio NHAS-software-distribution-dir/Product/NetraHASuite_3.0/FoundationServices/wrl1_4/i686/Packages/sun-nhas-kernel-source-3.0-*.i686.rpm | /usr/bin/cpio -id 
    

    where NHAS-software-distribution-dir is the directory that contains the Netra HA Suite distribution.



    Note - The command shown in the preceding example should appear on one line, but wraps in the printed text due to page size limitations.



    The following files will be installed under ./usr/src/sun/nhas/distribution:

    • sun_netra_cp3020-linux-modules-WR1.4aq_cgl-nhas.tar.bz2 contains the kernel modules.

    • System.map-netra_cp3020 contains the symbol table.

    • bzImage-netra_cp3020 is used to boot the Sun Netra CP3020 hosts.

  5. Copy these files to the Wind-River-distribution-dir directory you created in Step 3 for the Wind River CGL distribution:


# cp ./usr/src/sun/nhas/distribution/sun_netra_cp3020-linux-modules-WR1.4aq_cgl-nhas.tar.bz2 Wind-River-distribution-dir
# cp ./usr/src/sun/nhas/distribution/System.map-netra_cp3020 Wind-River-distribution-dir
# cp ./usr/src/sun/nhas/distribution/bzImage-netra_cp3020 Wind-River-distribution-dir

  6. To the Wind-River-distribution-dir directory, copy a compressed tar of the root file system of Wind River Linux CGL 1.4 for Sun Netra CP3020 nodes.

    This file is provided by Wind River with the Wind River Platform For Network Equipment Linux Edition 1.4 BSP for Sun CP3020. For more information, refer to http://windriver.com/products/bsp_web/bsp_vendor.html?vendor=Sun


     # cp sun_netra_cp3020-dist.tar.bz2 Wind-River-distribution-dir
    

  7. To the Wind-River-distribution-dir directory, copy a lilo boot loader rpm, as required by the Netra High Availability Suite installer. For example, you can use lilo-22.7-19.x86_64.rpm or any other lilo version. You can download this lilo rpm file from http://rpmfind.net/


     # cp lilo-22.7-19.x86_64.rpm Wind-River-distribution-dir
    



    Note - There is no constraint on the lilo version you use.



    procedure icon  To Prepare the Installation Server Running the Solaris OS

    Before you begin the installation process, make sure that the installation server is configured as described in Chapter 1.

    1. If you are planning to install remotely from another system, open a shell window to connect to the installation server.

    2. Confirm that the Solaris software packages that contain Perl 5.0 are installed on the installation server.

      Use the pkginfo command to check for the SUNWpl5u, SUNWpl5p, and SUNWpl5m Perl packages.

    3. Delete any entries for your cluster nodes in the following files:

      • /etc/hosts

      • /etc/ethers, if the file exists

      • /etc/bootparams, if the file exists

    4. Disable the installation server as a router by creating an /etc/notrouter file:


      # touch /etc/notrouter
      

      If a system running the Solaris Operating System has two network interfaces, the system is configured as a router by default. However, for security reasons, a Foundation Services cluster network must not be routed.

    5. Modify the /etc/nsswitch.conf file on the installation server so that files is positioned before nis in the hosts, ethers, and bootparams entries:


      hosts: files nis
      ethers: files nis
      bootparams: files nis
      netmasks: files nis
      

    6. From the installation server, open a terminal window to connect to the console of each cluster node.

      You can also connect to the consoles from the system that you use to connect to the installation server.


    Preparing the Installation Environment on a Linux SLES9 Installation Server

    Before installing the nhinstall tool on the installation server running the Linux SLES9 OS, you must install a MontaVista CGE or Wind River CGL distribution on the installation server. You must also prepare the installation server to install the OS and Netra HA Suite software on the cluster nodes.

    procedure icon  To Create a MontaVista Distribution on the Installation Server

    To install the MontaVista Linux CGE on the cluster, get two CD-ROM images from MontaVista for the Netra HA Suite and install them on the installation server.

    1. Get the following Linux Support Package (LSP) CD-ROM image:

      lsps-x86-pc_target-x86_amd64--xxxxxxx.iso

      This package contains preconfigured kernel binaries, kernel modules, kernel headers, and kernel sources for the Netra™ CP3020 hardware.

    2. Get the following Target Distribution Package (TDP) CD-ROM image:

      target-x86_amd64-tdp-xxxxxxx.iso

      This package contains the file system with prebuilt applications and the MontaVista installer.

    3. Make sure that you have at least 600 Mbytes of free disk space on the installation server.

    4. Log in as superuser on the installation server.

    5. Mount the target TDP CD-ROM image:


      # /bin/mount -o ro -o loop -t iso9660 path_to_cdrom_image/target-x86_amd64-tdp-xxxxxxx.iso path_you_chose_to_mount_the_target_TDP
      

    6. Copy the mounted directory to a directory that can be exported through NFS:


      # /bin/cp -r path_to_the_mounted_target_TDP path_for_copying_the_target_TDP
      

    7. Mount the LSP CD-ROM image:


      # /bin/mount -o ro -o loop -t iso9660 path_to_cdrom_image/lsps-x86-pc_target-x86_amd64-xxxxxxx.iso path_you_chose_to_mount_the_LSP
      

    8. Copy the mounted directory to a directory that can be exported through NFS:


      # /bin/cp -r path_to_the_mounted_LSP path_for_copying_the_LSP
      

    9. Modify the MontaVista LSP to use the Netra HA Suite LSP package.

      The Netra HA Suite provides a MontaVista package named lsp-x86-pc_target-x86_amd64-2.6.10_mvlcge401-1.2.1.xxxxxxx.x86_amd64.mvl, which contains Linux kernel modules, as well as a MontaVista Linux kernel patch to include the Netra HA Suite CGTP, a reliable IP transport mechanism based on transparent multirouting using redundant routes.

      For Netra HA Suite, you must use this package instead of the original MontaVista LSP package by copying it into the LSP distribution as follows.

      1. Install the Netra HA Suite kernel package:


        # rpm -i NHAS-software-distribution-dir/Product/NetraHASuite_3.0/FoundationServices/mvlcge40/x86_amd64/Packages/sun-nhas-kernel-source-3.0-24.x86_amd64.rpm
        

        where NHAS-software-distribution-dir is the directory that contains the Netra HA Suite distribution.

      2. Copy the Netra HA Suite LSP into your MontaVista target distribution to replace the original LSP:


        # cp /usr/src/sun/nhas/LSP/target/lsp-x86-pc_target-x86_amd64-2.6.10_mvlcge401-1.2.1.xxxxxxx.x86_amd64.mvl path_where_you_copied_the_LSP/x86_amd64/lsps/x86-pc_target-x86_amd64/target/
    

    procedure icon  To Create a Wind River CGL Distribution on a SLES9 Installation Server

    To create a Wind River CGL distribution on the installation server, follow these steps:

    1. Ensure that the installation server has at least 73 Mbytes of free disk space.

    2. Log in as superuser on the installation server.

    3. Create a directory for the Wind River CGL distribution:


       # mkdir Wind-River-distribution-dir
      

      where Wind-River-distribution-dir is the directory where the distribution is to be stored on the installation server.

    4. Install the Netra HA Suite kernel package:


      # rpm -i --nodeps --ignorearch NHAS-software-distribution-dir/Product/NetraHASuite_3.0/FoundationServices/wrl1_4/i686/Packages/sun-nhas-kernel-source-3.0-*.i686.rpm
      

      where NHAS-software-distribution-dir is the directory that contains the Netra HA Suite distribution.



      Note - The command shown in the preceding example should appear on one line, but wraps in the printed text due to page size limitations.



      The following files will then be installed under /usr/src/sun/nhas/distribution:

      • sun_netra_cp3020-linux-modules-WR1.4aq_cgl-nhas.tar.bz2 contains the kernel modules.

      • System.map-netra_cp3020 contains the symbol table.

      • bzImage-netra_cp3020 is used to boot the Sun Netra CP3020 hosts.

    5. Copy these files to the Wind-River-distribution-dir directory you created in Step 3 for the Wind River CGL distribution:


    # cp /usr/src/sun/nhas/distribution/sun_netra_cp3020-linux-modules-WR1.4aq_cgl-nhas.tar.bz2 Wind-River-distribution-dir
    # cp /usr/src/sun/nhas/distribution/System.map-netra_cp3020 Wind-River-distribution-dir
    # cp /usr/src/sun/nhas/distribution/bzImage-netra_cp3020 Wind-River-distribution-dir
    

    6. To the Wind-River-distribution-dir directory, copy a compressed tar of the root file system of Wind River Linux CGL 1.4 for Sun Netra CP3020 nodes.

    This file is provided by Wind River with the Wind River Platform For Network Equipment Linux Edition 1.4 BSP for Sun CP3020. For more information, refer to: http://windriver.com/products/bsp_web/bsp_vendor.html?vendor=Sun


     # cp sun_netra_cp3020-dist.tar.bz2 Wind-River-distribution-dir
    

    7. To the Wind-River-distribution-dir directory, copy a lilo boot loader rpm (for example, lilo-22.7-19.x86_64.rpm) required by the Netra High Availability Suite installer.

    You can download this lilo rpm file from http://rpmfind.net/


     # cp lilo-22.7-19.x86_64.rpm Wind-River-distribution-dir
    



    Note - There is no constraint on the lilo version you use.



    procedure icon  To Prepare the Installation Server Running the Linux SLES9 Operating System

    Before you begin the installation process on a SUSE SLES9 installation server, make sure that it is configured as described in Chapter 1.

    1. If you are planning to install remotely from another system, open a shell window to connect to the installation server.

    2. Confirm that a Perl 5 RPM package is installed on the installation server.

      Use the rpm -qa perl command to confirm that Perl is installed.

    3. Confirm that the ISC DHCP server RPM package is installed on the installation server.

      Use the command rpm -qa dhcp-server to confirm that the DHCP server is installed.

    4. Confirm that the TFTP RPM package is installed on the installation server.

      Use the command rpm -qa tftp to confirm that tftp is installed.

    5. Enable tftp:


      # /sbin/chkconfig tftp on
      # /etc/init.d/xinetd restart
      

    6. Start the NFS server as follows:


      # /usr/sbin/rcnfsserver restart
      

    7. Delete any entries for your cluster nodes in the following files:

      • /etc/hosts

      • /etc/ethers, if the file exists

      • /etc/bootparams, if the file exists

    8. Modify the /etc/nsswitch.conf file on the installation server so that files is positioned before nis in the hosts, ethers, and bootparams entries:


      hosts: files nis
      ethers: files nis
      bootparams: files nis
      netmasks: files nis
      

    9. From the installation server, open a terminal window to connect to the console of each cluster node.

      You can also connect to the consoles from the system that you use to connect to the installation server.


    Installing the nhinstall Tool

    Install the package containing the nhinstall tool on the installation server, as described in the following procedure.

    procedure icon  To Install the nhinstall Tool

    1. Log in to the installation server as superuser.

    2. Install the nhinstall package, SUNWnhas-installer:

      On the Solaris OS:


      # pkgadd -d /software-distribution-dir/Product/NetraHASuite_3.0/FoundationServices/Solaris_x/arch/Packages/SUNWnhas-installer
      

      where software-distribution-dir is the directory that contains Netra HA Suite packages, x is 9 or 10 depending on the version of the Solaris OS in use on the installation server, and where arch is sparc or x64, depending on the installation server architecture.

      On Linux SLES9:


      # rpm -i /software-distribution-dir/Product/NetraHASuite_3.0/FoundationServices/SLES9/arch/Packages/sun-nhas-installer-3.0-24.arch.rpm
      

      where software-distribution-dir is the directory that contains Netra HA Suite packages, and where arch is i686 or x86_64, depending on the installation server architecture.

    3. To access the man pages on the installation server, install the man page package, SUNWnhas-manpages:

      On the Solaris OS:


      # pkgadd -d /software-distribution-dir/Product/NetraHASuite_3.0/FoundationServices/Solaris_x/arch/Packages/SUNWnhas-manpages
      

      where software-distribution-dir is the directory that contains the Netra HA Suite packages, x is 9 or 10 depending on the version of the Solaris OS in use on the installation server, and where arch is sparc or x64 depending on the installation server architecture.

      On Linux SLES9:


      # rpm -i /software-distribution-dir/Product/NetraHASuite_3.0/FoundationServices/SLES9/arch/Packages/sun-nhas-manpages-3.0-24.arch.rpm
      

      where software-distribution-dir is the directory that contains the Netra HA Suite packages and where arch is i686 or x86_64 depending on the installation server architecture.

    4. Modify the MANPATH shell variable to include the path /opt/SUNWcgha/man (see the example after this procedure).

    5. Check SunSolve to download any nhinstall patches for this release.

      If there are patches, see the associated Readme file for installation directions.
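
      For example, in a Bourne-compatible shell (sh or ksh) you can add the following lines to your shell initialization file; adjust the syntax if you use csh:


      MANPATH=$MANPATH:/opt/SUNWcgha/man
      export MANPATH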


    Configuring the nhinstall Tool

    After you have installed the package containing the nhinstall tool, configure it to install the Foundation Services on your cluster. To configure the nhinstall tool, modify its configuration files, primarily cluster_definition.conf and env_installation.conf.

    The following sections describe the main configuration options of the nhinstall tool in detail.

    Selecting the Type of Architecture

    If you are using AMD64-based hardware or SPARC-based sun4v hardware, use the HARDWARE parameter to specify the type of node. Specifying this information is not required for SPARC-based sun4u hardware.

    For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite 3.0 1/08 Foundation Services Overview.

    Configuring the Disk Partitions on Master-Eligible Nodes

    Use the SLICE (for both Linux and the Solaris OS) or SHARED_SLICE (for the Solaris OS only) parameters to specify the disk partitions on the master-eligible nodes.

    If you plan to use Netra High Availability Suite for replicating NFS-served data over IP, use the SLICE parameter for all partitions.

    On the Solaris OS, it is also possible to locate NFS-served data on shared disks. If you plan to do so, use the SHARED_SLICE parameter for the partition storing this data and use SLICE for the local partitions (the root file system, for example).
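
    As an illustration only, SLICE entries for the IP-replication layout of TABLE 2-1 might look like the following; the field order (slice name, size in Mbytes, mount point, mount options) is modeled on the SLICE example shown later in To Configure Advanced Volume Management, and the slice names and sizes here are hypothetical. See the cluster_definition.conf(4) man page for the authoritative syntax.


      SLICE=0 2048 /                -  logging
      SLICE=1 1024 swap             -  -
      SLICE=3 3072 /export          -  logging
      SLICE=4 2048 /SUNWcgha/local  -  logging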

    TABLE 2-1 through TABLE 2-3 list the space requirements on the Solaris OS for sample disk partitions of master-eligible nodes in a cluster with diskless nodes, either with IP-replicated data or with a shared disk. TABLE 2-4 lists the space requirements on Linux for sample disk partitions of master-eligible nodes. TABLE 2-5 lists the space requirements for example disk partitions of dataless nodes.


    TABLE 2-1   Example Disk Partitions of Solaris OS Master-Eligible Nodes With NFS-Served Data Replicated Over IP 
    Disk Partition File System Name Description Example Size
    0 / The root file system, boot partition, and volume management software. This partition must be mounted with the logging option. 2 Gbytes minimum
    1 swap Minimum size when physical memory is less than 1 Gbyte. 1 Gbyte
    3 /export Exported file system reserved for diskless nodes. The /export file system must be mounted with the logging option. This partition is further sliced if diskless nodes are added to the cluster. 2.8 Gbyte + 160 Mbytes per diskless node
    4 /SUNWcgha/local This partition is reserved for NFS status files, services, and configuration files. The /SUNWcgha/local file system must be mounted with the logging option. 2 Gbytes
    5 Reserved for Reliable NFS internal use Bitmap partition reserved for nhcrfsd. This volume is associated with the /export file system. 1 Mbyte
    6 Reserved for Reliable NFS internal use Bitmap partition reserved for nhcrfsd. This partition is associated with the /SUNWcgha/local file system. 1 Mbyte
    7 replica or /test1 If you have configured volume management, this partition must be named replica. This partition is mounted with the logging option. See Configuring Volume Management. The remaining space


    TABLE 2-2   Local Disk Partitions of Solaris OS Master-Eligible Nodes With NFS-Served Data on Shared Disks 
    Disk Partition File System Name Description Example Size
    0 / The root file system, boot partition, and volume management software. This partition must be mounted with the logging option. 2 Gbytes minimum
    1 swap Minimum size when physical memory is less than 1 Gbyte. 1 Gbyte
    7 replica Partition used to store SVM meta database. 8 Mbytes


    TABLE 2-3   Shared Disk Partitions of Solaris OS Master-Eligible Nodes With NFS-Served Data on Shared Disks 
    Disk Partition File System Name Description Example Size
    0 /export Exported file system reserved for diskless nodes. The /export file system must be mounted with the logging option. This partition is further sliced if diskless nodes are added to the cluster. 2.8 Gbyte + 160 Mbytes per diskless node
    1 /SUNWcgha/local This partition is reserved for NFS status files, services, and configuration files. The /SUNWcgha/local file system must be mounted with the logging option. 2 Gbytes
    7 replica Partition used to store SVM meta database. 8 Mbytes



    Note - Partition 2 is reserved for overlapping the entire disk. It is automatically created and must not be defined.




    TABLE 2-4   Disk Partitions of Linux Master-Eligible Nodes With NFS-Served Data Replicated Over IP   
    Disk Partition File System Name Description Example Size
    1 / The root file system, boot partition, and volume management software. 2 Gbytes
    2 swap Minimum size when physical memory is less than 1 Gbyte. 1 Gbyte
    5 /SUNWcgha/local This partition is reserved for NFS status files, services, and configuration files. 2 Gbytes
    6 /export Exported file system. 4 Gbytes
    7 unnamed Bitmap partition reserved for nhcrfsd. This volume is associated with the /export and /SUNWcgha/local file systems that will be replicated over IP. 256 Mbytes

    Configuring Disk Partitions on Dataless Nodes

    Configure the SLICE parameter in the cluster_definition.conf file to specify the disk partitions on the dataless nodes.

    TABLE 2-5 lists the space requirements for example disk partitions of dataless nodes.


    TABLE 2-5   Example Disk Partitions of Dataless Nodes 
    Disk Partition File System Name Description Example Size
    0 / The root file system, boot partition, and volume management software. This partition must be mounted with the logging option. 2 Gbytes minimum
    1 swap Minimum size when physical memory is less than 1 Gbyte. 1 Gbyte



    Note - Partition 2 is reserved for overlapping the entire disk. It is automatically created and must not be defined.



    Mirroring Shared Disks on the Solaris OS

    Configure the MIRROR parameter to mirror a shared disk to another shared disk on the Solaris OS.

    Configuring the Disk Fencing on the Solaris OS

    On the Solaris OS, to prevent simultaneous access to the shared data in case of split-brain, SCSI disk reservation is used. The SCSI version is configured by the SHARED_DISK_FENCING parameter. It can be set to SCSI2 or SCSI3.

    Configuring the Scoreboard Bitmaps on the Solaris OS

    On the Solaris OS, you can configure the nhinstall tool to store the scoreboard bitmaps of IP-replicated partitions either in memory or on the disk.

    If the BITMAP_IN_MEMORY parameter is set to YES in the cluster_definition.conf file, the bitmaps are configured to be stored in memory. When the master node is shut down gracefully, the scoreboard bitmap is saved on the disk.

    If the BITMAP_IN_MEMORY parameter is set to NO, the bitmaps are configured to be written on the disk at each update.
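
    For example, to keep the scoreboard bitmaps in memory, the cluster_definition.conf file contains:


      BITMAP_IN_MEMORY=YES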

    Configuring the NFS Option noac

    You can configure the nhinstall tool to use the NFS option noac for the directories that are mounted remotely. The noac option suppresses data and attribute caching.

    Configuring a Direct Link Between the Master-Eligible Nodes

    You can configure the nhinstall tool to set up a direct link between the master-eligible nodes by using the serial port on each master-eligible node. Make sure that you have connected the serial ports with a cable before configuring the direct link. This connection prevents a split brain situation, where there are two master nodes in the cluster because the network between the master node and the vice-master node fails. For an illustration of the connection between the master-eligible nodes, see the Netra High Availability Suite 3.0 1/08 Foundation Services Getting Started Guide.

    The DIRECT_LINK parameter in the cluster_definition.conf file enables you to define the serial device on each master-eligible node, the speed of the serial line, and the heartbeat (in seconds) checking the link between the two nodes. For example:


    DIRECT_LINK=/dev/ttyb 115200 20
    

    Configuring Automatic Reboot for the Master-Eligible Nodes

    You can configure the nhinstall tool to reboot the master-eligible nodes automatically during the installation.

    Configuring the Carrier Grade Transport Protocol

    You can configure the nhinstall tool to install and configure the Carrier Grade Transport Protocol (CGTP).

    Configuring the Environment for Diskless Nodes on the Solaris OS

    If you define diskless nodes with the NODE or DISKLESS parameters in the cluster_definition.conf file, the nhinstall tool installs the Solaris services for the diskless nodes. The tool also configures the boot options for each diskless node on the master-eligible nodes.

    If you do not define any diskless nodes in the cluster_definition.conf file, the nhinstall tool does not install the Solaris services for diskless nodes. If you plan to add diskless nodes to the cluster at a later date, set the INSTALL_DISKLESS_ENV parameter in the cluster_definition.conf file to specify the platform for which you want nhinstall to set up the Solaris services for diskless nodes.

    If you do not set this parameter, the nhinstall tool does not install the Solaris services for diskless nodes on master-eligible nodes. In this case, you cannot use nhinstall to add diskless nodes to the cluster at a later date without reinstalling the software. Therefore, try to include possible future nodes in your cluster configuration.



    Note - On the Solaris OS, you can manually add diskless nodes to a running cluster as described in the Netra High Availability Suite 3.0 1/08 Foundation Services Manual Installation Guide for the Solaris OS.



    Configuring the Boot Policy for Diskless Nodes on the Solaris OS

    You can configure the nhinstall tool to have the diskless nodes in the cluster boot statically or by using the node's client ID. The DISKLESS_BOOT_POLICY parameter in the cluster_definition.conf configuration file enables you to choose a boot policy for the diskless nodes in your cluster. All diskless nodes in a cluster are configured with the same boot policy.

    The following table summarizes the boot policies supported by the nhinstall tool.


    TABLE 2-6   Boot Policies for Diskless Nodes 
    Boot Policy Description
    DHCP static boot policy IP address based on the Ethernet address of the diskless node. The Ethernet address is specified in the cluster_definition.conf file.

    If you set the DISKLESS_BOOT_POLICY parameter to DHCP_STATIC, nhinstall configures the diskless nodes with a static boot policy.

    DHCP client ID boot policy IP address generated from the diskless node's client ID in a CompactPCI server.

    If you set the DISKLESS_BOOT_POLICY parameter to DHCP_CLIENT_ID, nhinstall configures the diskless nodes to use the client ID to generate the IP address.
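
    For example, to have the diskless nodes derive their IP addresses from their client IDs, add the following line to the cluster_definition.conf file:


      DISKLESS_BOOT_POLICY=DHCP_CLIENT_ID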


    For further information about the boot policies for diskless nodes, see the Netra High Availability Suite 3.0 1/08 Foundation Services Overview and the Netra High Availability Suite 3.0 1/08 Foundation Services Manual Installation Guide for the Solaris OS.

    Configuring DHCP Configuration Files Locally on Solaris OS Master-Eligible Nodes

    By default, nhinstall configures diskless nodes so that the DHCP configuration files are stored in the highly available directory /SUNWcgha/remote/var/dhcp on the master-eligible nodes. You can configure the cluster to put the DHCP configuration files in a local directory, /var/dhcp, on the master eligible nodes by adding the following line to the cluster_definition.conf file.

    REPLICATED_DHCP_FILES=NO

    When you install with nhinstall and with this feature enabled, nhinstall copies the DHCP configuration files from the master to the vice-master node.

    If you enable this feature, each time you update the DHCP configuration files on the master after initial cluster installation, you must copy these files to the vice-master node. For more information, see the cluster_definition.conf(4) and nhadm(1M) man pages.

    Configuring the Default Router to the Public Network

    By default, nhinstall configures the installation server to be the default router to the public network. To choose another machine as the router to the public network, specify the IP address of the default router of your choice in the cluster_definition.conf file as follows:

    DEFAULT_ROUTER_IP=IP-address

    For more information, see the cluster_definition.conf(4) man page.

    Configuring the Cluster IP Addresses

    You can configure IPv4 addresses of any class for the nodes of your cluster by using the nhinstall tool. The CLUSTER_NETWORK parameter enables you to specify the netmask and the subnets for the NIC0, NIC1, and cgtp0 interfaces of your nodes. For example, to define Class B IP addresses for the nodes, the CLUSTER_NETWORK parameter is defined as follows:


    CLUSTER_NETWORK=255.255.0.0 192.168.0.0 192.169.0.0 192.170.0.0
    

    Configuring the Floating External Address of the Master Node

    You can configure the nhinstall tool to set a floating external address. A floating external address is an external IP address that is assigned to the master role rather than to a specific node. This IP address enables you to connect to the current master node from systems outside the cluster network.

    As an option, IPMP (IP network multipathing) on the Solaris OS or bonding on Linux can be used to support a floating external address on dual redundant links.

    If you specify an IP address and a network interface for the external address parameter in the cluster_definition.conf file, the floating external address is configured. The External Address Manager daemon, nheamd, which monitors floating addresses and IPMP groups or bonding interfaces on master-eligible nodes, is also installed. This daemon makes sure that the external IP address is always assigned to the current master node. For more information, see the nheamd(1M) man page.

    If you do not configure the external address parameter in the cluster_definition.conf configuration file, the floating external address is not created. Therefore, the master node cannot be accessed by systems outside the cluster network.

    Configuring External IP Addresses for Cluster Nodes

    You can configure the nhinstall tool to set external IP addresses on network interfaces to a public network. Then, the nodes can be accessed from systems outside the cluster network.

    procedure icon  To Configure External IP Addresses for Cluster Nodes

    1. Set the PUBLIC_NETWORK parameter in the cluster_definition.conf file, specifying the subnet and netmask for the subnet (see the sketch after this procedure).

      If the installation server has to be configured to use this public network for installing the cluster nodes, the SERVER_IP parameter must also be defined in env_installation.conf to specify an IP address for the installation server on the same subnetwork as defined for PUBLIC_NETWORK.

      If SERVER_IP is not defined in env_installation.conf, the installation server is configured to use the private network for installing the cluster nodes, and the public network is configured on the cluster nodes only, not on the installation server.

      For more information about SERVER_IP, refer to the env_installation.conf(4) man page.

    2. Specify the external IP address, external node name, and the external network interface for each NODE definition. For example:


      MEN=10 08:00:20:f9:c5:54 - - - - FSNode1 192.168.12.5 hme1
      MEN=20 08:00:20:f9:a8:12 - - - - FSNode2 192.168.12.6 hme1
      

      • 192.168.12.5 and 192.168.12.6 are the external IP addresses.

      • FSNode1 and FSNode2 are the external node names.

      • hme1 is the external network interface.
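
    A minimal sketch combining the two steps follows; the addresses are hypothetical, and the field order assumed here for PUBLIC_NETWORK (subnet followed by netmask) should be checked against the cluster_definition.conf(4) man page. In the cluster_definition.conf file:


      PUBLIC_NETWORK=192.168.12.0 255.255.255.0
      MEN=10 08:00:20:f9:c5:54 - - - - FSNode1 192.168.12.5 hme1
      MEN=20 08:00:20:f9:a8:12 - - - - FSNode2 192.168.12.6 hme1


    In the env_installation.conf file, only if the installation server itself should use the public network:


      SERVER_IP=192.168.12.100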

    Sharing Physical Interfaces Between CGTP and IPMP Using VLAN

    Physical links can be shared between CGTP and IPMP only when CGTP is used over a VLAN. Before using this configuration, refer to the detailed information about Solaris VLAN and IPMP in the Solaris System Administration Guide: IP Services. Not all network interfaces support VLAN; check that your interfaces support this use. Solaris shows VLAN interfaces as separate physical interfaces, even though there is only one physical interface. Because VLANs are configured by using special names for the interfaces, you must define the topology and the interface names for that topology, as in the following example.

    For example, consider the three-node cluster shown in FIGURE 2-1. Three ce NICs are on each MEN. In both MENs, ce0 is connected to switch 1, ce1 to switch 2 and ce2 to switch 3. The external router, to which clients connect, is connected to switches 2 and 3. This restricts ce1 and ce2 for external access. CGTP can be used on any two NICs. In this case, ce0 and ce1 were chosen, making ce1 a shared interface.

    FIGURE 2-1   Cluster Sharing CGTP and IPMP

    Diagram shows a basic Foundation Services cluster


    The VLAN is created with VID 123 over the interface ce1 by plumbing an interface called ce123001. In this example, ce0 and ce123001 will be used for CGTP, and ce1 and ce2 for IPMP. Create the tagged VLAN on SW2 (for information on how to create a VLAN, refer to your switch’s documentation), create a cluster_definition.conf file respecting these interfaces, and launch the installation as for any other case.
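
    For reference, the following sketch shows how such a VLAN interface could be plumbed manually on the Solaris OS. The VLAN PPA is the VID multiplied by 1000 plus the physical instance number (123 * 1000 + 1 = 123001 for VID 123 over ce1). When installing with nhinstall, you normally only declare these interface names in the cluster_definition.conf file rather than plumbing them by hand:


      # ifconfig ce123001 plumb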

    Configuring Volume Management

    The volume management feature enables you to use logical disk partitions (soft partitions) on the master-eligible nodes and to hide differences in disk numbering between the two servers.

    On MontaVista Linux CGE 4.0 and Wind River CGL 1.4, the volume management software that is installed is LVM2.

    On the Solaris OS, the volume management software that is installed depends on the version of the Solaris OS that you plan to install. For information on supported software versions, see the Netra High Availability Suite 3.0 1/08 Release Notes.

    If both servers do not have the same disk configuration (for example, if they have a different number of disks, or if disks are numbered differently on the bus), you must install the Volume Management feature of the OS you are using. For more information, see the Netra High Availability Suite 3.0 1/08 Foundation Services Getting Started Guide.

    To install the Volume Management software on the nodes of your cluster, perform one of the following procedures:

    procedure icon  To Configure Basic Volume Management for Servers With Different Disk Configurations

    You can use the nhinstall tool to install and configure volume management to use soft partitions. The use of volume management is mandatory for servers with different disk configurations (for example, servers that have a different number of disks, or servers that use FC-AL disks). This situation can result in different minor device numbers on the two servers, preventing Reliable NFS from performing a failover. An NFS file handle contains the minor device number of the disk supporting a file, and it must be the same on both servers. Using volume management hides the disk numbering and ensures that files that are duplicated on both servers have the same NFS file handle.

    Configure the nhinstall tool to support logical disk partitions by installing the volume management feature as follows (a summary of the relevant configuration lines follows the steps):

    1. In the env_installation.conf file, set OS_INSTALL to ALL.

    2. Configure the cluster_definition.conf file:

      1. Set LOGICAL_SLICE_SUPPORT to YES.

      2. For the Solaris OS only, set the SLICE definition for the last partition to replica.

      For a detailed example, see the cluster_definition.conf(4) man page.

    3. Run the nhinstall tool to install the operating system and Foundation Services on the master-eligible nodes.

      For more information, see To Launch the nhinstall Tool.

      The nhinstall tool installs and configures the appropriate volume management software depending on the version of the operating system you chose to install.
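
    Putting steps 1 and 2 together, the configuration lines that enable basic volume management support are the following. In the env_installation.conf file:


      OS_INSTALL=ALL


    In the cluster_definition.conf file:


      LOGICAL_SLICE_SUPPORT=YES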

    procedure icon  To Configure Advanced Volume Management

    To configure advanced volume management, install the operating system and configure the volume management feature to suit your needs. Then configure nhinstall to install only the Foundation Services.

    1. Install the operating system with volume management on the master-eligible nodes.

      For more information, see the documentation for your volume management software:



    Note - Install the same packages of the same version of the operating system on both master-eligible nodes. Create identical disk partitions on the disks of both master-eligible nodes.



    2. Configure a physical Ethernet card interface that corresponds to the first network interface, NIC0.

    3. Configure the sizes of the disk partitions.

    For more information, see TABLE 2-1 for the Solaris OS and TABLE 2-4 for Linux.

    4. In the env_installation.conf file, set OS_INSTALL to DISKLESS_DATALESS_ONLY.

      The operating system is configured on the dataless nodes and, on the Solaris OS, services are configured for the diskless environment.

    5. In the cluster_definition.conf file, do the following:

    1. Set the LOGICAL_SLICE_SUPPORT parameter to NO.

    2. For the SLICE parameter, specify the metadevice names of the partitions.

      For example:


      SLICE=d1 2048 /               -        logging
      

      For details on the SLICE parameter, see the cluster_definition.conf(4) man page.

    6. Run the nhinstall tool to install the Foundation Services on the master-eligible nodes.

    For more information, see To Launch the nhinstall Tool.

    Selecting the Solaris Package Set to be Installed

    To install a Solaris package set on cluster nodes other than the default package set, specify the Solaris package set to be installed. For a list of the contents of the default package set, see the /opt/SUNWcgha/templates/nhinstall/nodeprof.conf.template file. For information about installing a Solaris package set on cluster nodes, see the nodeprof.conf(4) man page. For information about installing a customized Solaris package set on the diskless nodes, see the diskless_nodeprof.conf(4) man page. For information about installing a customized Solaris package set on the dataless nodes, see the dataless_nodeprof.conf(4) man page.

    Installing a Different Version of the Operating System on Diskless and Dataless Nodes

    To install a version of the Solaris OS on diskless nodes that is different from the version you are installing on master-eligible nodes, specify the location of the two Solaris distributions in the env_installation.conf file. For example:

    SOLARIS_DIR=/export/s10
    DISKLESS_SOLARIS_DIR=/export/s9u8

    To install a version of the Solaris OS on dataless nodes that is different from the versions you are installing on master-eligible nodes, specify the location of the two Solaris distributions in the env_installation.conf file. For example:

    SOLARIS_DIR=/export/s10
    DATALESS_SOLARIS_DIR=/export/s9u8

    By default, the values provided to the DISKLESS_SOLARIS_DIR and DATALESS_SOLARIS_DIR parameters are set to be the same as that provided to the SOLARIS_DIR parameter. For more information, see the env_installation.conf(4) man page.

    To install the Solaris OS on master-eligible nodes and install the MontaVista CGE Linux Operating System on dataless nodes, specify the location of the Solaris distribution, the MontaVista target distribution, and the MontaVista LSP distribution in the env_installation.conf file using the parameters SOLARIS_DIR, DATALESS_MVISTA_TARGET_DIR, and DATALESS_MVISTA_LSP_DIR. For example:

    SOLARIS_DIR=/export/s10
    DATALESS_MVISTA_TARGET_DIR=/export/mvista/target_tdp
    DATALESS_MVISTA_LSP_DIR=/export/mvista/lsp

    To install Wind River CGL on master-eligible nodes and the Solaris OS on dataless nodes, specify the location of the Wind River CGL, the directory where a root NFS file system will be created for each type of platform, and the path to the Solaris distribution in the env_installation.conf file using the parameters WINDRIVER_IMAGES_DIR, WINDRIVER_ROOTNFS_DIR, and DATALESS_SOLARIS_DIR. For example:

    WINDRIVER_IMAGES_DIR=/dist/WindRiver
    WINDRIVER_ROOTNFS_DIR=/export/root/WindRiver
    DATALESS_SOLARIS_DIR=/export/s10

    Configuring a Data Management Policy

    There are three data management policies available with the Foundation Services. By default, the nhinstall tool sets the data management policy to be Integrity for data replication over IP, and Availability when using shared disks. To choose another policy, change the value of the following variable in the cluster_definition.conf file.

    DATA_MGT_POLICY=INTEGRITY | AVAILABILITY | ADAPTABILITY

    For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite 3.0 1/08 Foundation Services Overview.

    Configuring a Masterless Cluster

    By default, diskless and dataless nodes reboot if there is no master in the cluster. If you do not want the diskless and dataless nodes to reboot in this situation, add the following line to the cluster_definition.conf file:

    MASTER_LOSS_DETECTION=YES

    For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite 3.0 1/08 Foundation Services Overview.

    Configuring Reduced Duration of Disk Synchronization on the Solaris OS

    By default, nhinstall enables this feature. It reduces the time taken for full synchronization between the master and the vice-master disks by synchronizing only the blocks that contain replicated data.



    Note - Only use this feature on the Solaris OS with UFS file systems.



    To disable this feature and have all blocks replicated, add the following line to the cluster_definition.conf file:

    SLICE_SYNC_TYPE=RAW

    For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite 3.0 1/08 Foundation Services Overview.

    Configuring Sanity Check of Replicated Slices

    To activate the sanity check of replicated slices, add the following line to the cluster_definition.conf file:

    CHECK_REPLICATED_SLICES=YES

    By default, the nhinstall tool does not activate this feature. For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite 3.0 1/08 Foundation Services Overview.

    Configuring Delayed Synchronization

    By default, disk synchronization starts automatically when the cluster software is installed. If you want to delay the start of disk synchronization, add the following line to the cluster_definition.conf file:

    SYNC_FLAG=NO

    You can trigger disk synchronization at a time of your choice by using the nhenablesync tool. For more information, see the cluster_definition.conf(4) and nhenablesync(1M) man pages and Netra High Availability Suite 3.0 1/08 Foundation Services Overview.

    Configuring Serialized Slice Synchronization

    By default, nhinstall configures the cluster so that slices are synchronized in parallel. Synchronizing slices one slice at a time reduces the network and disk overhead but increases the time it takes for the vice-master to synchronize with the master. During this time, the vice-master is not eligible to take on the role of master. To enable serialized slice synchronization, add the following line to the cluster_definition.conf file:

    SERIALIZE_SYNC=YES

    For more information, see the cluster_definition.conf(4) man page and Netra High Availability Suite 3.0 1/08 Foundation Services Overview.

    Installing the Node Management Agent (NMA) on the Solaris OS

    By default, the Node Management Agent is installed on systems running the Solaris OS. This feature is not yet supported on Linux. Set the INSTALL_NMA parameter to NO to avoid installing this agent on systems running the Solaris OS.

    Installing the Node State Manager (NSM)

    By default, the Node State Manager is not installed. Set the INSTALL_NSM parameter to YES to install the NSM.

    Installing the SA Forum Cluster Membership API (SA Forum/CLM)

    By default, the SA Forum/CLM API is not installed. Set the INSTALL_SAFCLM parameter to YES to install the SA Forum/CLM API.
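
    For example, assuming these three parameters are set in the cluster_definition.conf file like the other installation options described in this chapter, a configuration that skips the NMA but installs the NSM and the SA Forum/CLM API contains:


      INSTALL_NMA=NO
      INSTALL_NSM=YES
      INSTALL_SAFCLM=YES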