C H A P T E R  4

Installing the Software on the Dataless Nodes

After you have installed and configured the master-eligible nodes, you can add diskless nodes and dataless nodes to the cluster.

To add a dataless node to the cluster, see the following sections:

• Preparing to Install a Dataless Node

• Installing the Solaris Operating System on a Dataless Node

• Installing the Netra HA Suite on a Dataless Node

• Configuring the Netra HA Suite on a Dataless Node

• Integrating a Dataless Node Into the Cluster

• Starting the Cluster


Preparing to Install a Dataless Node

Perform the following procedures before installing and configuring a dataless node:

procedure icon  To Connect a Dataless Node to the Cluster

procedure icon  To Define Disk Partitions on a Dataless Node


Installing the Solaris Operating System on a Dataless Node

To install the Solaris Operating System on a dataless node, use the Solaris JumpStart tool. The Solaris JumpStart tool requires the Solaris distribution to be on the installation server. For information about creating a Solaris distribution, see Netra High Availability Suite 3.0 1/08 Foundation Services Installation Guide.

procedure icon  To Install the Solaris Operating System on a Dataless Node

  1. Log in to the installation server as superuser.

  2. If not already created, create the Solaris JumpStart environment on the installation server by using the appropriate document for the Solaris release:

    • Solaris 9 or Solaris 10 Installation Guide

    At the end of this process, you have a Jumpstart-dir directory that contains the Solaris JumpStart files that are needed to install the Solaris Operating System on the node.

  3. In the /etc/hosts file, add the name and IP addresses of the dataless node.

  4. In the /etc/ethers file, add the Ethernet address of the dataless node's network interface that is connected to the same switch as the installation server, for example, NIC0.
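
    For illustration, assuming the node name netraDATALESS1-nic0 and using a placeholder IP address and Ethernet address, the entries on the installation server might look like this:


    /etc/hosts:
    10.250.1.30        netraDATALESS1-nic0
     
    /etc/ethers:
    8:0:20:ab:cd:ef    netraDATALESS1-nic0
    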

  5. Share the Solaris-distribution-dir and Jumpstart-dir directories by adding these lines to the /etc/dfs/dfstab file:


    share -F nfs -o rw Solaris-distribution-dir
    share -F nfs -o rw Jumpstart-dir
    

    • Solaris-distribution-dir is the directory that contains the Solaris distribution.

    • Jumpstart-dir is the directory that contains the Solaris JumpStart files.
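
    For example, if the distribution were under the hypothetical path /export/solaris10 and the JumpStart files under /export/jumpstart, the dfstab entries would read:


    share -F nfs -o rw /export/solaris10
    share -F nfs -o rw /export/jumpstart
    

    After editing the file, you can export the directories immediately with the shareall command instead of waiting for the next reboot.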

  6. Change to the directory where the add_install_client command is located:


    # cd Solaris-dir/Solaris_x/Tools
    

    • Solaris-dir is the directory that contains the Solaris installation software. This directory could be on a CD-ROM or in an NFS-shared directory.

    • x is 9 or 10 depending on the version of the Solaris Operating System you install.

  7. Run the add_install_client command for each dataless node:


    # ./add_install_client -i IP-address -e Ethernet-address \
    -s iserver:Solaris-distribution-dir -c iserver:Jumpstart-dir \
    -p iserver:sysidcfg-dir -n name-service host-name platform-group
    

    • IP-address is the IP address of the dataless node.

    • Ethernet-address is the Ethernet address of the dataless node.

    • iserver is the IP address of the installation server for the cluster.

    • Solaris-distribution-dir is the directory that contains the Solaris distribution.

    • Jumpstart-dir is the directory that contains the Solaris JumpStart files.

    • sysidcfg-dir is the directory that contains the sysidcfg file. This directory is a subdirectory of the Jumpstart-dir directory.

    • name-service is the naming service you would like to use, for example, NIS or NIS+.

    • host-name is the name of the dataless node.

    • platform-group is the hardware platform of the dataless node, for example, sun4u.

    For more details, see the add_install_client(1M) man page. A filled-in example follows.
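
    The following invocation is only a sketch: the installation server address (10.250.4.10), the Ethernet address, the directory paths, and the naming service are placeholder values, not values taken from your site.


    # ./add_install_client -i 10.250.1.30 -e 8:0:20:ab:cd:ef \
    -s 10.250.4.10:/export/solaris10 -c 10.250.4.10:/export/jumpstart \
    -p 10.250.4.10:/export/jumpstart/sysidcfg \
    -n nis netraDATALESS1 sun4u
    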

  8. Connect to the console of the dataless node.

  9. At the ok prompt, boot the dataless node by using the net device alias:


    ok> boot net - install
    

    If the installation server is connected to the second Ethernet interface, type:


    ok> boot net2 - install
    

    This command installs the Solaris Operating System on the dataless node.



    Note - For x64 dataless nodes, refer to the hardware manual for information about booting.



procedure icon  To Install Solaris Patches

After you have completed the Solaris installation, install the necessary Solaris patches. The Netra High Availability Suite 3.0 1/08 Foundation Services README contains the list of Solaris patches that you must install, depending on the version of Solaris you installed.



Note - Some of these patches are required for CGTP. If you do not plan to install CGTP, do not install the CGTP patches. For more information about the impact of not installing CGTP, see Choosing a Cluster Network.



  1. Log in to the dataless node as superuser.

  2. Mount the directory from the installation server that contains the Solaris patches.

    See To Mount an Installation Server Directory on the Master-Eligible Nodes.

  3. Install the patches on the dataless node:


    # patchadd -M /NetraHASuite/Patches/ patch-name
    


Installing the Netra HA Suite on a Dataless Node

After the Solaris Operating System has been installed on the dataless node, install the Netra HA Suite on the dataless node.

The set of services to be installed on the dataless node is a subset of the Netra HA Suite installed on the master-eligible nodes. Install the packages that are listed as needed for dataless nodes in TABLE 4-1.


TABLE 4-1   Netra HA Suite Packages for Dataless Nodes 
Package Name            Package Description
SUNWnhas-admintools     Netra HA Suite administration tool
SUNWnhas-cgtp           Netra HA Suite Sun CGTP
SUNWnhas-cgtp-cluster   CGTP user-space components, configuration scripts, and files
SUNWnhas-cmm            Netra HA Suite Cluster Membership Monitor
SUNWnhas-cmm-libs       CMM developer package (.h and .so files)
SUNWnhas-common         Netra HA Suite common components
SUNWnhas-common-libs    Trace library
SUNWjsnmp               Java SNMP API
SUNWnhas-nma-local      Netra HA Suite Management Agent (init scripts and configuration files)
SUNWnhas-rnfs-client    Netra HA Suite Reliable Network File Server (client binaries)
SUNWnhas-safclm-libs    Netra HA Suite Service Availability Forum's Cluster Membership Service API (libraries)
SUNWnhas-pmd            Netra HA Suite process monitor daemon
SUNWnhas-pmd-solaris    Daemon monitoring of the root file system (Solaris 9 OS only)

procedure icon  To Install the Netra HA Suite

  1. Mount the installation server directory on the dataless node as described in To Mount an Installation Server Directory on the Master-Eligible Nodes.

  2. Install the packages by using the pkgadd command:


    # pkgadd -d /NetraHASuite/Packages/ package-name
    

    where /NetraHASuite/Packages is the installation server directory that is mounted on the dataless node.
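
    Because pkgadd accepts several package names in one invocation, you can install the packages from TABLE 4-1 in a single command. The following is only a sketch; adjust the list to the packages your configuration needs, and add the CGTP packages as well if you use CGTP:


    # pkgadd -d /NetraHASuite/Packages/ SUNWnhas-common SUNWnhas-common-libs \
    SUNWnhas-cmm SUNWnhas-cmm-libs SUNWnhas-pmd SUNWnhas-admintools \
    SUNWnhas-rnfs-client SUNWjsnmp SUNWnhas-nma-local
    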

    CGTP enables a redundant network for your cluster.



    Note - If you do not require CGTP, do not install the CGTP packages. For more information about the impact of not installing CGTP, see Choosing a Cluster Network.




Configuring the Netra HA Suite on a Dataless Node

The following procedures explain how to configure the Netra HA Suite on a dataless node.

procedure icon  To Configure a Dataless Node

  1. Create a /etc/notrouter file:


    # touch /etc/notrouter
    

    Because the cluster network is not routable, the dataless nodes must be disabled as routers.

  2. Modify the /etc/default/login file so you can connect to the node from a remote system as superuser:


    # mv /etc/default/login /etc/default/login.orig
    # chmod 644 /etc/default/login.orig
    # sed '1,$s/^CONSOLE/#CONSOLE/' /etc/default/login.orig > /etc/default/login
    # chmod 444 /etc/default/login
    

  3. Disable power management:


    # touch /noautoshutdown
    

  4. Modify the .rhosts file according to the security policy for your cluster:


    # cp /.rhosts /.rhosts.orig
    # echo "+ root" > /.rhosts
    # chmod 444 /.rhosts
    

  5. Set the boot parameters:


    # /usr/sbin/eeprom local-mac-address?=true
    # /usr/sbin/eeprom auto-boot?=true
    # /usr/sbin/eeprom diag-switch?=false
    



    Note - On x64, refer to the hardware documentation for information about performing this task.



  6. If you are using the Network Time Protocol (NTP) with an external clock, configure the dataless node as an NTP server.

    This procedure is described in the Solaris documentation.
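
    The exact configuration is site specific, but one common starting point on the Solaris 9 and 10 OS, sketched here only as an illustration, is to copy the supplied server template:


    # cp /etc/inet/ntp.server /etc/inet/ntp.conf
    

    Edit the server entry in /etc/inet/ntp.conf to reference your external clock, then start the NTP daemon as described in the Solaris documentation.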

procedure icon  To Configure an External IP Address

To configure external IP addresses for a dataless node, the node must have an extra physical network interface or logical network interface. A physical network interface is an unused interface on an existing Ethernet card or a supplemental HME or QFE Ethernet card, for example, hme2. A logical network interface is an interface configured on an existing Ethernet card, for example, hme1:101.
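
As an illustration only, the interface name hme1:101, the host name netraDATALESS1-ext, and the address below are assumptions; a logical interface could be configured, and made persistent with a matching /etc/hostname file, as follows, provided the chosen name is also added to /etc/hosts:


    # ifconfig hme1:101 plumb
    # ifconfig hme1:101 192.168.12.5 netmask 255.255.255.0 up
    # echo netraDATALESS1-ext > /etc/hostname.hme1:101
    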

procedure icon  To Update the Network Files on a Dataless Node

  1. Log in to the dataless node as superuser.

    As for the master-eligible nodes, three IP addresses are configured for each dataless node:

    • The IP address for the first network interface, NIC0

    • The IP address for the second network interface, NIC1

    • The IP address for the virtual network interface, cgtp0

    The IP addresses can be IPv4 addresses of any class. However, the nodeid that you later define in the cluster_nodes_table file and the nhfs.conf file must be a decimal representation of the host part of the node's IP address. For example, a dataless node with the addresses 10.250.1.30, 10.250.2.30, and 10.250.3.30 has a host part of 30, so its nodeid must be 30. For information about the files, see To Create the nhfs.conf File for a Dataless Node and To Update the Cluster Node Table.

  2. Create or update the file /etc/hostname.NIC0 for the NIC0 interface.

    This file must contain the cluster network name of the dataless node on the first interface, for example, netraDATALESS1-nic0.

  3. Create or update the file /etc/hostname.NIC1 for the NIC1 interface.

    This file must contain the cluster network name of the dataless node on the second interface, for example, netraDATALESS1-nic1.

  4. Create or update the file /etc/hostname.cgtp0 for the cgtp0 interface.

    This file must contain the cluster network name of the dataless node on the cgtp0 interface, for example, netraDATALESS1-cgtp.
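
    Assuming the physical interfaces are eri0 and eri1, as in the nhfs.conf example later in this chapter, the three files could be created as follows:


    # echo netraDATALESS1-nic0 > /etc/hostname.eri0
    # echo netraDATALESS1-nic1 > /etc/hostname.eri1
    # echo netraDATALESS1-cgtp > /etc/hostname.cgtp0
    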

  5. In the /etc/hosts file, add the IP address and node name for the NIC0, NIC1, and cgtp0 network interfaces of all the nodes in the cluster:


    127.0.0.1		  localhost
    10.250.1.10 netraMEN1
    10.250.2.10 netraMEN1-nic1
    10.250.3.10 netraMEN1-cgtp
     
    10.250.1.20 netraMEN2
    10.250.2.20 netraMEN2-nic1
    10.250.3.20 netraMEN2-cgtp
     
    10.250.1.30 netraDATALESS1-nic0 netraDATALESS1.localdomain loghost
    10.250.2.30 netraDATALESS1-nic1 netraDATALESS1-nic1.localdomain
    10.250.3.30 netraDATALESS1-cgtp netraDATALESS1-cgtp.localdomain
     
    10.250.1.1 		master
    10.250.2.1 		master-nic1
    10.250.3.1 		master-cgtp
    

  6. Update the /etc/nodename file with the name corresponding to the address of one of the network interfaces, for example, netraDATALESS1-cgtp.

  7. Create the /etc/netmasks file by adding one line for each subnet on the cluster:


    10.250.1.0    255.255.255.0
    10.250.2.0    255.255.255.0
    10.250.3.0    255.255.255.0
    

procedure icon  To Create the nhfs.conf File for a Dataless Node

  1. Log in to the dataless node as superuser.

  2. Create the nhfs.conf file for the dataless node:


    # cp /etc/opt/SUNWcgha/nhfs.conf.template /etc/opt/SUNWcgha/nhfs.conf
    

  3. Edit the nhfs.conf file to suit your cluster configuration.

    An example file for a dataless node on a cluster with the domain ID 250, with network interfaces eri0, eri1, and cgtp0 would be as follows:


    Node.NodeId=40
    Node.NIC0=eri0
    Node.NIC1=eri1
    Node.NICCGTP=cgtp0
    Node.UseCGTP=True
    Node.Type=Dataless
    Node.DomainId=250
    CMM.IsEligible=False
    CMM.LocalConfig.Dir=/etc/opt/SUNWcgha
    

    Choose a unique nodeid and unique node name for the dataless node. To view the nodeid of each node already in the cluster, see the /etc/opt/SUNWcgha/cluster_nodes_table file on the master node. For more information, see the nhfs.conf(4) man page.

    If you have not installed the CGTP patches and packages, do the following (a sketch of the resulting file appears after this list):

    • Disable the Node.NIC1 and Node.NICCGTP parameters.

      To disable these parameters, add a comment mark (#) at the beginning of the line containing the parameter if this mark is not already present.

    • Configure the Node.UseCGTP and the Node.NIC0 parameters:

      • Node.UseCGTP=False

      • Node.NIC0=interface-name

        where interface-name is the name of the NIC0 interface, for example, hme0, qfe0, or eri0.
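
    The following is a minimal sketch of what nhfs.conf might then contain for the example node, assuming the NIC0 interface is eri0:


    Node.NodeId=40
    Node.NIC0=eri0
    #Node.NIC1=eri1
    #Node.NICCGTP=cgtp0
    Node.UseCGTP=False
    Node.Type=Dataless
    Node.DomainId=250
    CMM.IsEligible=False
    CMM.LocalConfig.Dir=/etc/opt/SUNWcgha
    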

procedure icon  To Set Up File Systems for a Dataless Node

Update the /etc/vfstab file in the dataless node's root directory to add the NFS mount points for master node directories that contain middleware data and services.

  1. Log in to a dataless node as superuser.

  2. Edit the entries in the /etc/vfstab file.

    • If you have configured the CGTP, replace the host name of the master node with the host name associated with the floating IP address for the cgtp0 interface that is assigned to the master role, for example, master-cgtp.

    • If you have not configured the CGTP, replace the host name of the master node with the host name associated with the floating IP address for the NIC0 interface that is assigned to the master role, for example, master-nic0.

    For more information about floating addresses of the master nodes, see To Create the Floating Address Triplet Assigned to the Master Role.

  3. Define the mount points /SUNWcgha/remote, /SUNWcgha/services, and /SUNWcgha/swdb:

    • If you have configured the CGTP, use the floating IP address for the cgtp0 interface that is assigned to the master role to define the mount points:


      master-cgtp:/SUNWcgha/local/export/data -       \
      /SUNWcgha/remote        nfs     -       yes     rw,hard,fg,intr
       
      master-cgtp:/SUNWcgha/local/export/services/ha_3.0/opt -     \
      /SUNWcgha/services      nfs     -       yes    rw,hard,fg,intr
       
      master-cgtp:/SUNWcgha/local/export/services/ha_3.0 -     \
      /SUNWcgha/swdb  nfs    -       yes     rw,hard,fg,intr
      

    • If you have not configured the CGTP, use the floating IP address for the NIC0 interface that is assigned to the master role:


      master-nic0:/SUNWcgha/local/export/data -       \
      /SUNWcgha/remote        nfs     -       yes     rw,hard,fg,intr
       
      master-nic0:/SUNWcgha/local/export/services/ha_3.0/opt - \
      /SUNWcgha/services      nfs     -       yes    rw,hard,fg,intr
       
      master-nic0:/SUNWcgha/local/export/services/ha_3.0 -     \
      /SUNWcgha/swdb  nfs    -       yes     rw,hard,fg,intr
      

    All file systems that you mount by using NFS must be mounted with the options fg, hard, and intr. You can also set the noac mount option, which suppresses data and attribute caching. Use the noac option only if the impact on performance is acceptable.



    Note - Do not use IP addresses in the /etc/vfstab file for the dataless node. Instead, use logical host names. Otherwise, the pkgadd -R command fails and returns the following message: WARNING: cannot install to or verify on <master_ip>



  4. Create the mount points /SUNWcgha/remote, /SUNWcgha/services, and /SUNWcgha/swdb:


    # mkdir -p /SUNWcgha/remote
    # mkdir -p /SUNWcgha/services
    # mkdir -p /SUNWcgha/swdb
    

  5. Repeat Step 1 through Step 4 for all dataless nodes in the cluster.


Integrating a Dataless Node Into the Cluster

The following procedures explain how to integrate a dataless node into the cluster:

procedure icon  To Update the /etc/hosts Files on Each Peer Node

  1. Log in to the master node as superuser.

  2. Edit the /etc/hosts file to add the following lines:


    IP-address-NIC0    nic0-dataless-node-name
    IP-address-NIC1    nic1-dataless-node-name
    IP-address-cgtp0   cgtp0-dataless-node-name
    

    This modification enables the master node to “see” the network interfaces of the dataless node.

  3. Log in to the vice-master node as superuser.

  4. Repeat Step 2.

    This modification enables the vice-master node to “see” the three network interfaces of the dataless node.

  5. Log in to a dataless node that is part of the cluster, if a dataless node already exists.

  6. Repeat Step 2.

    This modification enables the existing dataless node to “see” the three network interfaces of the new dataless node.

  7. Repeat Step 5 and Step 6 on all other diskless and dataless nodes that are already part of the cluster.

procedure icon  To Update the Cluster Node Table

  1. Log in to the master node as superuser.

  2. Edit the cluster_nodes_table file on the master node with the node information for a dataless node:


    #NodeId Domain_id Name Attributes
    nodeid domainid dataless-node-name -
    

    The nodeid that you define in the cluster_nodes_table file must be the decimal representation of the host part of the node's IP address. For more information about the cluster_nodes_table file, see the cluster_nodes_table(4) man page.
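
    For example, with the addresses used in the /etc/hosts example earlier in this chapter, the line added for the dataless node would read as follows (the domain ID 250 is an assumption carried over from that example):


    30 250 netraDATALESS1 -
    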

  3. Reload the updated cluster_nodes_table file on the master node:


    # /opt/SUNWcgha/sbin/nhcmmstat -c reload
    

  4. Repeat Step 2 for each dataless node you are adding to the cluster.


Starting the Cluster

To integrate the dataless node into the cluster, delete the not_configured file and reboot all the nodes. After the nodes have completed booting, verify the configuration of the cluster.

procedure icon  To Delete the not_configured File

During the installation of the CMM packages, the /etc/opt/SUNWcgha/not_configured file is automatically created. This file enables you to reboot a cluster node during the installation and configuration process without starting the Netra HA Suite.
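
The procedure itself amounts to removing this file on the node that you are adding; a minimal sketch, run as superuser on the dataless node:


    # rm /etc/opt/SUNWcgha/not_configured
    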

procedure icon  To Start the Cluster

  1. As superuser, reboot the master node.

  2. After the master node has completed rebooting, reboot the vice-master node as superuser.

  3. After the vice-master node has completed rebooting, boot the master-ineligible nodes as superuser.



    Note - For detailed information about rebooting a node on the operating system version in use at your site, refer to the Netra High Availability Suite 3.0 1/08 Foundation Services Cluster Administration Guide.



procedure icon  To Verify the Cluster Configuration

Use the nhadm tool to verify that the dataless nodes have been configured correctly and are integrated into the cluster.

  1. Log in to the dataless node as superuser.

  2. Run the nhadm tool to validate the configuration:


    # nhadm check starting
    

    If all checks pass the validation, the installation of the Netra HA Suite was successful. For more information, see the nhadm(1M) man page.