C H A P T E R  3

Installing the Software for Diskless Nodes

When you have installed and configured the master-eligible nodes of the cluster, you can add diskless nodes and dataless nodes to the cluster.

This chapter pertains to diskless nodes. To add dataless nodes to your cluster, see the following chapter.

Information about installing software for diskless nodes is provided in the following sections:


Preparing to Install a Diskless Node

Before installing and configuring the software for a diskless node, check that the node is connected to the cluster and that there is enough disk space on the master-eligible nodes.

procedure icon  To Connect a Diskless Node to the Cluster

procedure icon  To Check Disk Space on the Master Node


Installing the Solaris Operating System for Diskless Nodes on the Master Node

Install the Solaris Operating System for diskless nodes by using the smosservice command on the master node. You run this command only the first time you add a diskless node to a cluster, to install the common Solaris services for all diskless nodes. The common Solaris services for the diskless nodes are installed in the /export/exec directory on the master node. You must also install some OS-specific packages: some on the root file system of each diskless node, and some in the /usr directory common to all diskless nodes.

For every additional diskless node, you only need to create the root file system for the new node by using the smdiskless command. The root file system is installed in the /export/root/diskless-node-name directory for each diskless node.

To install the Solaris Operating System for the diskless nodes, see the following procedures.

procedure icon  To Install the Common Solaris Services for Diskless Nodes on the Master Node

  1. Ensure that the mount points to the software distributions have been configured.

    For more information, see To Mount an Installation Server Directory on the Master-Eligible Nodes.

  2. Log in to the master node as superuser.

  3. Start the Solaris Management Console.


    # smc
    # ps -ef | grep smc
    root   474   473  0   Jul 29 ?        0:00 
    /usr/sadm/lib/smc/bin/smcboot
    root   473     1  0   Jul 29 ?        0:00 
    /usr/sadm/lib/smc/bin/smcboot
    

    For more information, see the smc(1M) man page.

  4. Run the smosservice command, as follows:

    For SPARC diskless nodes:


    # /usr/sadm/bin/smosservice add -p root-password -- \
    -x mediapath=Solaris-distribution-dir \
    -x platform=Solaris-platform \
    -x cluster=Solaris-cluster \
    -x locale=locale
    

    • root-password is the superuser password. By default, this password is sunrules.

    • Solaris-distribution-dir is the mounted directory on the master node that contains the Solaris distribution.

    • Solaris-platform is the Solaris platform, for example, sparc.sun4u.Solaris_9.

    • Solaris-cluster is the Solaris cluster to install, for example, SUNWCuser.

    • locale is the locale to install. For U.S. English, the value is en_US.


    For x64 diskless nodes:


      # /usr/sadm/bin/smosservice add -p root-password -- \
      -x mediapath=Solaris-distribution-dir \
      -x platform=Solaris-platform \
      -x cluster=Solaris-cluster


    For example, to install the Solaris services for Solaris 9 SPARC diskless nodes, type:


      # /usr/sadm/bin/smosservice add -p sunrules -- \
      -x mediapath=/Solaris9-Distribution \
      -x platform=sparc.sun4u.Solaris_9 \
      -x cluster=SUNWCuser \
      -x locale=en_US


    The common Solaris services for all diskless nodes are installed in the /export/exec directory on the master node.

    For more information, see the smosservice(1M) man page.

procedure icon  To Install OS-Specific Packages



Note - Ignore error messages related to packages that have not been installed. Always answer Y to continue the installation.



  1. Ensure that the mount points to the software distributions have been configured.

    For more information, see To Mount an Installation Server Directory on the Master-Eligible Nodes.

  2. Log in to the master node as superuser.

  3. Install OS-specific packages as follows:

    1. For Solaris 9 SPARC diskless nodes, install the SPARC-specific packages:


      # pkgadd -R /export/Solaris_x/usr_sparc.all SUNWkvm.u SMEvplu.u
      

    2. For Solaris 10 SPARC diskless nodes, install the SPARC-specific packages:


      # pkgadd -R /export/Solaris_x/usr_sparc.all SUNWkvm.u SMEvplu.u SUNWcsl
      

    3. For x64 diskless nodes, install the x64-specific packages:


      # pkgadd -R /export/Solaris_x/usr_i386.all SUNWkvm.i SUNWcsl
      

procedure icon  To Create a Root File System for a Diskless Node on the Master Node

After the common Solaris services for the diskless nodes are installed, use the smdiskless command on the master node to create a root file system for each diskless node in the cluster. You must create the root file system for each diskless node in the cluster.

  1. Log in to the master node as superuser.

  2. Create an entry in /etc/hosts for diskless-node-name on the first node.

  3. Create the root file system for each diskless node, as follows:

    For SPARC diskless nodes:


    # /usr/sadm/bin/smdiskless add -p root-password -- \
    -i IP-address-NIC0 \
    -e Ethernet-address \
    -n diskless-node-name \
    -x os=Solaris-platform \
    -x locale=locale
    

    • root-password is the root password; by default this password is sunrules.

    • IP-address-NIC0 is the IP address of the diskless node on the NIC0 interface, for example, 10.250.1.30.

    • Ethernet-address is the Ethernet address of the diskless node, for example, 08:00:20:01:02:03.

    • diskless-node-name is the name of the diskless node, for example, netraDISKLESS1.

    • Solaris-platform is the Solaris platform, for example, sparc.sun4u.Solaris_9 for the Solaris 9 OS.

    • locale is the language. For U.S. English, the value is en_US.


    For x64 diskless nodes:


      # /usr/sadm/bin/smdiskless add -p root-password -- \
      -i IP-address-NIC0 \
      -e Ethernet-address \
      -n diskless-node-name \
      -x os=Solaris-platform


    For example, to add a new diskless node that is named netraDISKLESS1 and that runs Solaris 9 on a Sun4U machine, type:


      # /usr/sadm/bin/smdiskless add -p sunrules -- \
      -i 10.250.1.20 \
      -e 08:00:20:01:02:03 \
      -n netraDISKLESS1 \
      -x os=sparc.sun4u.Solaris_9 \
      -x locale=en_US

  4. For each x64 diskless node, process the boot entry as follows:

    1. Unmount the boot directory if it is mounted:


      # umount /tftpboot/<diskless-nodename>
      

    2. Update the /etc/vfstab file on the master node by setting the mount-at-boot option to no for each /tftpboot/<diskless-nodename> entry.

    3. Update the /etc/vfstab file on the vice-master node by adding all /tftpboot/<diskless-nodename> entries that are present in the master file to the vice-master file.

      The root file system for the diskless node is created in the following directory: /export/root/netraDISKLESS1.

      For more information, see the smdiskless(1M) man page.
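The vfstab edit described above can be scripted. The following is a hedged sketch, not part of the product tooling: it assumes the standard vfstab field layout, in which the mount point is field 3 and mount-at-boot is field 6. Verify the result before replacing the live file.

```shell
# Set the mount-at-boot field to "no" for every /tftpboot entry
# in a vfstab-style file, leaving all other entries unchanged.
awk '$3 ~ /^\/tftpboot\// { $6 = "no" } { print }' /etc/vfstab > /etc/vfstab.new
# Review /etc/vfstab.new, then move it into place.
```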

procedure icon  To Install the Solaris-Specific Packages for Diskless Nodes (SPARC only)

  1. Ensure that the mount points to the software distributions have been configured.

    For more information, see To Mount an Installation Server Directory on the Master-Eligible Nodes.

  2. Log in to the master node as superuser.

  3. Install the SPARC-specific packages for each diskless node:


    # pkgadd -R /export/root/<diskless-nodename> -d /mnt SMEvplr.u SUNWsiox.u
    

procedure icon  To Configure the Trivial File Transfer Protocol on the Master-Eligible Nodes

The smdiskless command creates the directory /tftpboot on the master node. This directory contains the boot image for each diskless node. Create the same directory on the vice-master node. Then, after a switchover, the new master node can boot the diskless nodes.

  1. Log in to the master node as superuser.

  2. Modify the /etc/inetd.conf file to configure the Trivial File Transfer Protocol (TFTP).

    Uncomment the tftp line by deleting the comment mark at the beginning of the line, so that it reads as follows:


    tftp    dgram   udp6    wait    root    /usr/sbin/in.tftpd    in.tftpd -s /tftpboot
    

    For more information, see the inetd.conf(4) man page.

  3. Copy the /tftpboot directory to the vice-master node:


    # find /tftpboot | cpio -omB | rsh vice-master-cgtp0-address cpio -idumvB
    

  4. Log in to the vice-master node.

  5. Repeat Step 2 on the vice-master node.

procedure icon  To Install Solaris Patches

In the root directory for each diskless node on the master node, install the necessary Solaris patches. The Netra High Availability Suite 3.0 1/08 Foundation Services README contains the list of Solaris patches that you must install. The contents of this list depends on the version of the Solaris Operating System you installed.



Note - Some of these patches are required for CGTP. If you do not plan to install CGTP, do not install the CGTP patches. For more information about the impact of not installing CGTP, see Choosing a Cluster Network.



  1. Log in to the master node as superuser.

  2. Check that the directory containing the Netra HA Suite software distribution on the installation server is mounted on the master node:


    # mount
    ...
    /NetraHASuite on 10.250.1.100:/software-distribution-dir remote/read/write/setuid/dev=3ec0004 on Tue Sep 24 17:06:09 2002
    #
    

    • 10.250.1.100 is the IP address of the installation server network interface that is connected to the cluster.

    • software-distribution-dir is the directory that contains the Netra HA Suite product for the hardware architecture.

    If the directory is not mounted, mount the directory as described in To Mount an Installation Server Directory on the Master-Eligible Nodes.

  3. Install the Solaris services patches for the diskless nodes on the master node:


    # patchadd -S Solaris_x /NetraHASuite/Patches/patch-number
    

    where x is 9 or 10 depending on the Solaris version installed.

  4. Apply the patches for each diskless node:


    # patchadd -R /export/root/diskless-node-name \
    /NetraHASuite/Patches/patch-number
    


Installing the DHCP and the Reliable Boot Service

The Reliable Boot Service ensures continuous availability of the DHCP server in a cluster. In the event of a failover of the master node, the vice-master node takes over from the master node. In the event of the failure of a diskless node, the Reliable Boot Service enables the diskless node to reboot automatically. This service also reassigns IP addresses to diskless nodes. For more information, see the Netra High Availability Suite 3.0 1/08 Foundation Services Overview.

The Reliable Boot Service is delivered in the Netra HA Suite package SUNWnhas-rbs. This package contains a DHCP public module, as well as template files for the DHCP service configuration file, the network containers, and the dhcptab containers.

procedure icon  To Install the DHCP and the Reliable Boot Service

  1. Log in to each master-eligible node as superuser.

  2. Check that the Solaris DHCP packages are installed on the master-eligible nodes.

    The DHCP is delivered in the SUNWdhcm, SUNWdhcsr, and SUNWdhcsu packages.


    # pkginfo SUNWdhcm SUNWdhcsr SUNWdhcsu
    

    If not already installed, install the Solaris DHCP packages on each master-eligible node:


    # pkgadd -d Solaris-distribution-dir SUNWdhcm SUNWdhcsr SUNWdhcsu
    

  3. Install the SUNWnhas-rbs Reliable Boot Service packages on each master-eligible node as follows:

    For SPARC diskless nodes:


    # pkgadd -d /NetraHASuite/Packages/ SUNWnhas-rbs
    

    For x64 diskless nodes:


    # pkgadd -d /NetraHASuite/Packages/ SUNWnhas-rbs SUNWnhas-rbs-nsmscripts
    



    Note - To support x64 diskless nodes, the Node State Manager must be installed on both master-eligible nodes.




Configuring the DHCP for a Diskless Node

To configure the DHCP for a diskless node, create the DHCP configuration table and the network table for the node by using the dhcpconfig, dhtadm, and pntadm commands. For more information about these commands and files, see the dhcpconfig(1M), dhtadm(1M), and pntadm(1M) man pages.

procedure icon  To Configure the DHCP for a Diskless Node

  1. Log in to the master node as superuser.

  2. Configure the DHCP server:


    # dhcpconfig -D -r SUNWnhrbs -p /SUNWcgha/remote/var/dhcp -n
    

  3. Modify the /etc/inet/dhcpsvc.conf file:


    DAEMON_ENABLED=TRUE
    RUN_MODE=server
    RESOURCE=SUNWnhrbs
    PATH=/SUNWcgha/remote/var/dhcp
    CONVER=1
    INTERFACES=hme0,hme1
    OFFER_CACHE_TIMEOUT=30
    

    • DAEMON_ENABLED enables the DHCP daemon when set to TRUE.

    • RUN_MODE selects the daemon run mode.

    • RESOURCE enables you to add the Reliable Boot Service module, SUNWnhrbs, to the DHCP.

    • PATH enables you to specify the path to the DHCP configuration file. This path must be in a shared file system.

    • CONVER is the integer that specifies the DHCP container version. Do not modify this parameter.

    • INTERFACES enables you to specify the network interfaces on the node, for example, hme0 and hme1.

      If you are configuring a single network link for your cluster (that is, you do not plan to install the CGTP), specify only the first network interface, for example, hme0.

    • OFFER_CACHE_TIMEOUT enables you to specify the number of seconds before OFFER cache timeouts occur, for example, 30.

    For more information, see the dhcpsvc.conf(4) man page.
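As a quick sanity check after editing the file, you can grep for the two settings that the Reliable Boot Service depends on. This is a hedged sketch, not an official verification tool; the file path is the one used in this procedure.

```shell
# Hypothetical check: confirm the Reliable Boot Service module and the
# shared DHCP path are configured in dhcpsvc.conf.
conf=/etc/inet/dhcpsvc.conf
if grep -q '^RESOURCE=SUNWnhrbs$' "$conf" &&
   grep -q '^PATH=/SUNWcgha/remote/var/dhcp$' "$conf"; then
  echo "dhcpsvc.conf: Reliable Boot Service settings present"
else
  echo "dhcpsvc.conf: settings missing" >&2
fi
```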

  4. Create the DHCP configuration table:


    # dhtadm -C
    

  5. Modify the DHCP configuration table:


    # dhtadm -A -s SbootFIL -d 'Vendor=vendor-string,7,ASCII,1,0'
    # dhtadm -A -s SswapPTH -d 'Vendor=vendor-string,6,ASCII,1,0'
    # dhtadm -A -s SswapIP4 -d 'Vendor=vendor-string,5,IP,1,0'
    # dhtadm -A -s SrootPTH -d 'Vendor=vendor-string,4,ASCII,1,0'
    # dhtadm -A -s SrootNM -d 'Vendor=vendor-string,3,ASCII,1,0'
    # dhtadm -A -s SrootIP4 -d 'Vendor=vendor-string,2,IP,1,0'
    # dhtadm -A -s SrootOpt -d 'Vendor=vendor-string,1,ASCII,1,0'
    # dhtadm -A -s NhCgtpAddr -d 'Site,128,IP,1,1'
    # dhtadm -A -s NhNic0Addr -d 'Site,129,IP,1,1'
    # dhtadm -A -s NhNic1Addr -d 'Site,130,IP,1,1'
    # dhtadm -A -m subnet1 -d \
    ':Broadcst=broadcast1:MTU=1500:Router=router1:Subnet=255.255.255.0:'
    # dhtadm -A -m subnet2 -d \
    ':Broadcst=broadcast2:MTU=1500:Router=router2:Subnet=255.255.255.0:'
    # dhtadm -A -m Common -d \
    ':BootSrvA=floating-master-address:\
    SrootIP4=floating-master-address:\
    SswapIP4=floating-master-address:\
    BootSrvN=floating-master-address:SrootNM=floating-master-address:'
    

    If you are using x64 diskless nodes, execute the following command:


    # dhtadm -A -m "PXEClient:Arch:00000:UNDI:002001" -d \
      ':BootSrvA=floating-master-address:'
    



    Note - If you are not planning to use CGTP (that is, you plan to configure a single network link for your cluster), do not configure the NhCgtpAddr macro.



    • vendor-string is an ASCII string that identifies the client class names that are supported by the DHCP. Specify multiple client class names separated by spaces, for example:


      'SUNW.UltraSPARC-IIi-cEngine SUNW.UltraSPARC-IIi-Netract \
      SUNW.UltraSPARCengine_CP-60,7,ASCII,1,0'
      

    • subnet1 is the NIC0 subnet, for example, 10.250.1.0.

    • subnet2 is the NIC1 subnet, for example, 10.250.2.0.

    • broadcast1 is the broadcast address of the NIC0 subnet, for example, 10.250.1.255.

    • broadcast2 is the broadcast address of the NIC1 subnet, for example, 10.250.2.255.

    • router1 is the router address of the NIC0 subnet, for example, 10.250.1.1.

    • router2 is the router address of the NIC1 subnet, for example, 10.250.2.1.

    • floating-master-address is the floating IP address assigned to the CGTP interface of the current master node. For example, 10.250.3.1. For more information, see Configuring the Master-Eligible Node Addresses.

      If you are not planning to use the CGTP (that is, you plan to configure a single network link for your cluster), use the IP address assigned to one of the NICs on the current master node, for example, 10.250.1.1.

    For more information about the DHCP options, see the dhtadm(1M) man page.

  6. Create the DHCP network table:


    # pntadm -C subnet1
    # pntadm -C subnet2
    


Configuring the DHCP Boot Policy for Diskless Nodes

Configure a DHCP boot policy for the diskless nodes in the cluster by updating the DHCP configuration table and the DHCP network table. The boot policy is a way to assign IP addresses to a diskless node when the node is booted.

Diskless nodes can have a static or client ID boot policy. For more information about the DHCP boot policies, see the Netra High Availability Suite 3.0 1/08 Foundation Services Overview.


TABLE 3-1   Boot Policies for Diskless Nodes  
Boot Policy Description
DHCP static boot policy IP address is statically assigned based on the Ethernet address of the diskless node. See To Configure the DHCP Static Boot Policy.
DHCP client ID boot policy IP address is generated from the node's client ID. See To Configure the DHCP Client ID Boot Policy.



Note - If you are not planning to use the CGTP (that is, you plan to configure a single network link for your cluster), configure the DHCP only for the NIC0 interface. In addition, do not configure the NhCgtpAddr macro for the cgtp0 interface.



procedure icon  To Configure the DHCP Static Boot Policy

  1. Log in to the master node as superuser.

  2. Update the DHCP configuration table for the NIC0 interface of the diskless node:


    # dhtadm -A -m macro-name -d \
    ':NhCgtpAddr=local-cgtp-addr:NhNic0Addr=local-nic0-addr:\
    NhNic1Addr=local-nic1-addr:Include=Common:BootFile=inetboot.sun4u.os:\
    SrootPTH=/export/root/diskless-node-name:\
    SswapPTH=/export/swap/diskless-node-name:Include=subnet:'


    • macro-name is the NIC0 IP address of the node.

    • local-cgtp-addr, local-nic0-addr, and local-nic1-addr are respectively the IP addresses of the cgtp0, nic0, and nic1 interfaces of the node.

    • os is the operating system. Specify Solaris_9 or Solaris_10 depending on the Solaris version you installed.

    • diskless-node-name is the name of the node.

    • subnet is the NIC0 subnet.


    For a diskless node, netraDISKLESS1, with the NIC0 IP address 10.250.1.30 and Solaris 9, type:


      # dhtadm -A -m 10.250.1.30 -d \
      ':NhCgtpAddr=10.250.3.30:NhNic0Addr=10.250.1.30:\
      NhNic1Addr=10.250.2.30:Include=Common:BootFile=inetboot.sun4u.Solaris_9:\
      SrootPTH=/export/root/netraDISKLESS1:\
      SswapPTH=/export/swap/netraDISKLESS1:Include=10.250.1.0:'

  3. Update the DHCP container for the NIC0 interface of the diskless node.


    # pntadm -A IP-address -i Ethernet-address -f PERMANENT+MANUAL -m macro-name subnet
    

    • IP-address is the NIC0 IP address of the node.

    • Ethernet-address is the Ethernet address of the node's board. The letters of the address must be in uppercase.

    • macro-name is the NIC0 IP address of the node.

    • subnet is the NIC0 subnet.


    For the diskless node with the NIC0 IP address 10.250.1.30 and Ethernet address 01080020F9B360, type:


      # pntadm -A 10.250.1.30 -i 01080020F9B360 -f PERMANENT+MANUAL \
      -m 10.250.1.30 10.250.1.0
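Because pntadm expects the Ethernet-based client ID with an 01 type prefix and uppercase hexadecimal digits, a small helper can build it from the colon-separated address. This is a hedged sketch; the address below is only an example.

```shell
# Build a pntadm client ID from an Ethernet address: strip the colons,
# force uppercase hex digits, and prepend the "01" Ethernet type prefix.
mac="08:00:20:f9:b3:60"                       # example address
clientid="01$(printf '%s' "$mac" | tr -d ':' | tr 'abcdef' 'ABCDEF')"
echo "$clientid"                              # prints 01080020F9B360
```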

  4. Update the DHCP configuration table for the NIC1 interface of the diskless node:


    # dhtadm -A -m macro-name -d \
    ':NhCgtpAddr=local-cgtp-addr:NhNic0Addr=local-nic0-addr:\
    NhNic1Addr=local-nic1-addr:Include=Common:BootFile=inetboot.sun4u.os:\
    SrootPTH=/export/root/diskless-node-name:\
    SswapPTH=/export/swap/diskless-node-name:Include=subnet:'


    • macro-name is the NIC1 IP address of the node.

    • local-cgtp-addr, local-nic0-addr, and local-nic1-addr are respectively the IP addresses of the cgtp0, nic0, and nic1 interfaces of the node.

    • os is the operating system. Specify Solaris_9 or Solaris_10 depending on the Solaris version you installed.

    • diskless-node-name is the name of the node.

    • subnet is the NIC1 subnet.


    For the diskless node, netraDISKLESS1, with the NIC1 IP address 10.250.2.30 and Solaris 9, type:


      # dhtadm -A -m 10.250.2.30 -d \
      ':NhCgtpAddr=10.250.3.30:NhNic0Addr=10.250.1.30:\
      NhNic1Addr=10.250.2.30:Include=Common:BootFile=inetboot.sun4u.Solaris_9:\
      SrootPTH=/export/root/netraDISKLESS1:\
      SswapPTH=/export/swap/netraDISKLESS1:Include=10.250.2.0:'

  5. Update the DHCP container for the NIC1 interface of the diskless node:


    # pntadm -A IP-address -i Ethernet-address -f PERMANENT+MANUAL -m macro-name subnet
    

    • IP-address is the NIC1 IP address of the node.

    • Ethernet-address is the Ethernet address of the node's board.

    • macro-name is the NIC1 IP address of the node.

    • subnet is the NIC1 subnet.


    For the diskless node with the NIC1 IP address 10.250.2.30 and Ethernet address 01080020F9B361, type:


      # pntadm -A 10.250.2.30 -i 01080020F9B361 -f PERMANENT+MANUAL \
      -m 10.250.2.30 10.250.2.0

procedure icon  To Configure the DHCP Client ID Boot Policy

This procedure can only be performed on nodes with CompactPCI technology. For information specific to the hardware you are using, see the corresponding hardware documentation.

  1. Create or retrieve the client ID for the diskless node.

    1. Log in to the diskless node as superuser.

    2. Get the ok prompt.

    3. Check for the client ID of the diskless node:


      ok printenv dhcp-clientid
      

      If a client ID is not configured, configure it:


      ok setenv dhcp-clientid client-id-name
      

      where client-id-name is an ASCII string. In this procedure, test is used as an example client ID.

    4. Convert the ASCII string to hexadecimal.

      For example, if test is your client ID, the hexadecimal equivalent is 74 65 73 74.
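The ASCII-to-hexadecimal conversion can be done with od. This sketch uses the example client ID test; substitute your own string.

```shell
# Convert the ASCII client ID to the hexadecimal form used later in this
# procedure (the dhcpagent CLIENT_ID value and the pntadm -i argument).
clientid="test"
hex=$(printf '%s' "$clientid" | od -An -tx1 | tr -d ' \n')
echo "$hex"        # prints 74657374
```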

  2. Log in to the master node.

  3. Declare the diskless node's client ID in the /export/root/diskless-node-name/etc/default/dhcpagent file.

    For example, if the hexadecimal equivalent of your client ID is 74 65 73 74 on a Netra CT 810 machine, add the following line to the dhcpagent file:


    CLIENT_ID=0x74657374
    

    For information about the format of the CLIENT_ID on the hardware you are using, see the corresponding hardware documentation.

  4. Update the DHCP configuration table for the NIC0 interface of the diskless node:


    # dhtadm -A -m macro-name -d \
    ':NhCgtpAddr=local-cgtp-addr:NhNic0Addr=local-nic0-addr:\
    NhNic1Addr=local-nic1-addr:Include=Common:BootFile=inetboot.sun4u.os:\
    SrootPTH=/export/root/diskless-node-name:\
    SswapPTH=/export/swap/diskless-node-name:Include=subnet:'


    • macro-name is the NIC0 IP address of the node.

    • local-cgtp-addr, local-nic0-addr, and local-nic1-addr are respectively the IP addresses of the cgtp0, nic0, and nic1 interfaces of the node.

    • os is the operating system. Specify Solaris_9 or Solaris_10 depending on the Solaris version you installed.

    • diskless-node-name is the name of the node.

    • subnet is the NIC0 subnet.


    For a diskless node, netraDISKLESS1, with the NIC0 IP address 10.250.1.30 and Solaris 9, type:


      # dhtadm -A -m 10.250.1.30 -d \
      ':NhCgtpAddr=10.250.3.30:NhNic0Addr=10.250.1.30:\
      NhNic1Addr=10.250.2.30:Include=Common:BootFile=inetboot.sun4u.Solaris_9:\
      SrootPTH=/export/root/netraDISKLESS1:\
      SswapPTH=/export/swap/netraDISKLESS1:Include=10.250.1.0:'

  5. Update the DHCP network table for the NIC0 interface of the diskless node:


    # pntadm -A IP-address -i diskless-node-clientID -f PERMANENT+MANUAL -m macro-name subnet
    

    • IP-address is the NIC0 IP address of the node.

    • diskless-node-clientID is the hexadecimal equivalent of the client ID.

    • macro-name is the NIC0 IP address of the node.

    • subnet is the subnet of the NIC0 interface.


    For a Netra CT 810 diskless node with the NIC0 IP address 10.250.1.30 and client ID 74657374, type:


      # pntadm -A 10.250.1.30 -i 74657374 -f PERMANENT+MANUAL \
      -m 10.250.1.30 10.250.1.0

    For information about the format of the CLIENT_ID on the hardware you are using, see the corresponding hardware documentation.

  6. Update the DHCP configuration table for the NIC1 interface of the diskless node:


    # dhtadm -A -m macro-name -d \
    ':NhCgtpAddr=local-cgtp-addr:NhNic0Addr=local-nic0-addr:\
    NhNic1Addr=local-nic1-addr:Include=Common:BootFile=inetboot.sun4u.os:\
    SrootPTH=/export/root/diskless-node-name:\
    SswapPTH=/export/swap/diskless-node-name:Include=subnet:'


    • macro-name is the NIC1 IP address of the node.

    • local-cgtp-addr, local-nic0-addr, and local-nic1-addr are respectively the IP addresses of the cgtp0, nic0, and nic1 interfaces of the node.

    • os is the operating system. Specify Solaris_9 or Solaris_10 depending on the Solaris version you installed.

    • diskless-node-name is the name of the node.

    • subnet is the NIC1 subnet.


    For the diskless node, netraDISKLESS1, with the NIC1 IP address 10.250.2.30 and Solaris 9, type:


      # dhtadm -A -m 10.250.2.30 -d \
      ':NhCgtpAddr=10.250.3.30:NhNic0Addr=10.250.1.30:\
      NhNic1Addr=10.250.2.30:Include=Common:BootFile=inetboot.sun4u.Solaris_9:\
      SrootPTH=/export/root/netraDISKLESS1:\
      SswapPTH=/export/swap/netraDISKLESS1:Include=10.250.2.0:'

  7. Update the DHCP container for the NIC1 interface of the diskless node.


    # pntadm -A IP-address -i diskless-node-clientID -f PERMANENT+MANUAL -m macro-name subnet
    

    • IP-address is the NIC1 IP address of the node.

    • diskless-node-clientID is the hexadecimal equivalent of the client ID.

    • macro-name is the NIC1 IP address of the node.

    • subnet is the NIC1 subnet.


    For the diskless node with the NIC1 IP address 10.250.2.30 and client ID 74657374, type:


      # pntadm -A 10.250.2.30 -i 74657374 -f PERMANENT+MANUAL \
      -m 10.250.2.30 10.250.2.0

    For information about the format of the CLIENT_ID on the hardware you are using, see the corresponding hardware documentation.


Installing the Netra HA Suite Software on a Diskless Node

The packages that are installed in the partitions for diskless nodes are a subset of the Netra HA Suite packages already installed on the master-eligible nodes. The following Netra HA Suite packages must be installed for each diskless node.


TABLE 3-2   Netra HA Suite Packages for Diskless Nodes 
Package Name Package Description
SUNWnhas-admintools Netra HA Suite administration tool
SUNWnhas-cgtp Netra HA Suite Sun CGTP
SUNWnhas-cgtp-cluster CGTP user-space components, configuration scripts, and files
SUNWnhas-cmm Netra HA Suite Cluster Membership Monitor
SUNWnhas-cmm-libs CMM developer package (.h and .so files)
SUNWnhas-common Netra HA Suite common components
SUNWnhas-common-libs Trace library
SUNWjsnmp Java SNMP API
SUNWnhas-nma-local Netra HA Suite Management Agent (initscripts and configuration files)
SUNWnhas-rnfs-client Netra HA Suite Reliable Network File Server (client binaries)
SUNWnhas-safclm-libs Netra HA Suite Service Availability Forum's Cluster Membership Service API (libraries)
SUNWnhas-pmd Netra HA Suite process monitor daemon
SUNWnhas-pmd-solaris Daemon monitor root file system (Solaris 9 OS only)

procedure icon  To Install the Netra HA Suite Packages

  1. Log in to the master node as superuser.

  2. Install the Netra HA Suite packages.

    For example, to install the Netra HA Suite packages and the Java DMK package on the Solaris 9 OS, run the following command:


    # pkgadd -R /export/root/diskless-node-name -d /NetraHASuite/Packages \
    SUNWnhas-admintools SUNWnhas-cgtp SUNWnhas-cgtp-cluster \
    SUNWnhas-common-libs SUNWnhas-common SUNWnhas-cmm-libs \
    SUNWnhas-cmm SUNWnhas-rnfs-client SUNWnhas-pmd \
    SUNWnhas-pmd-solaris SUNWnhas-nma-local SUNWjdrt
    



    Note - Install SUNWnhas-pmd-solaris only on the Solaris 9 OS.



    In the preceding command, you also install the Java DMK 5.0 runtime classes in the root directory of each diskless node.

    CGTP enables a redundant network for your cluster.



    Note - If you do not require CGTP, do not install the CGTP packages. For more information about the impact of not installing CGTP, see Choosing a Cluster Network.



  3. Install the Java DMK SNMP manager API classes package in the shared /usr directory for the diskless nodes as follows:

    On SPARC diskless nodes:


    # pkgadd -R /export/Solaris_x/usr_sparc.all/ -d /NetraHASuite/Packages SUNWjsnmp
    

    On x64 diskless nodes:


    # pkgadd -R /export/Solaris_x/usr_i386.all/ -d /NetraHASuite/Packages SUNWjsnmp
    

    where x is 9 or 10 depending on the Solaris version installed.


Configuring the Netra HA Suite for a Diskless Node

To configure the Netra HA Suite for a diskless node, see the following procedures:

procedure icon  To Update the Network Files for the Diskless Node

  1. Log in to the master node as superuser.

  2. Create the /export/root/diskless-node-name/etc/hostname.NIC0 and /export/root/diskless-node-name/etc/hostname.NIC1 files.

    where diskless-node-name is the hostname of the diskless node.


    # touch /export/root/diskless-node-name/etc/hostname.NIC0
    # touch /export/root/diskless-node-name/etc/hostname.NIC1
    

    For example, if you are using a CP2160 board, create the files:


    /export/root/diskless-node-name/etc/hostname.eri0

    /export/root/diskless-node-name/etc/hostname.eri1




    Note - These files must remain empty.



  3. Create the /export/root/diskless-node-name/etc/hosts file.

  4. Edit the /export/root/diskless-node-name/etc/hosts file to include the IP addresses and node names for all the network interfaces of all the nodes.

    The interfaces are the NIC0, NIC1, and cgtp0 interfaces.


    127.0.0.1       localhost
    10.250.1.10     netraMEN1-nic0
    10.250.2.10     netraMEN1-nic1
    10.250.3.10     netraMEN1-cgtp
    10.250.1.20     netraMEN2-nic0
    10.250.2.20     netraMEN2-nic1
    10.250.3.20     netraMEN2-cgtp
    10.250.1.30     netraDISKLESS1-nic0
    10.250.2.30     netraDISKLESS1-nic1
    10.250.3.30     netraDISKLESS1-cgtp
    10.250.1.1      master-nic0
    10.250.2.1      master-nic1
    10.250.3.1      master-cgtp
    

  5. Create the /export/root/diskless-node-name/etc/nodename file.

  6. Edit the /export/root/diskless-node-name/etc/nodename file to include the node name that is associated with the IP address of one of the network interfaces.

    For example, add the node name associated with the IP address of the cgtp0 interface, that is, netraDISKLESS1-cgtp.

  7. Create the /export/root/diskless-node-name/etc/netmasks file.

  8. Edit the /export/root/diskless-node-name/etc/netmasks file to include a line for each subnet on the cluster:


    10.250.1.0    255.255.255.0
    10.250.2.0    255.255.255.0
    10.250.3.0    255.255.255.0
    

procedure icon  To Configure External IP Addresses

To configure external IP addresses for a diskless node, the node must have an extra physical network interface or logical network interface. A physical network interface is an unused interface on an existing Ethernet card or a supplemental HME Ethernet card or QFE Ethernet card, for example, hme2. A logical network interface is an interface that is configured on an existing Ethernet card, for example, hme1:101.

procedure icon  To Disable the Router Feature

Because the cluster network is not routable, you must disable the diskless node as a router.

  1. Log in to the master node as superuser.

  2. Create the notrouter file:


    # touch /export/root/diskless-node-name/etc/notrouter
    

    For a description of the advantages of using a private cluster network, see the “Cluster Addressing and Networking” in Netra High Availability Suite 3.0 1/08 Foundation Services Overview.

procedure icon  To Set Up File Systems for a Diskless Node

To set up file systems for a diskless node, create the mount points /SUNWcgha/remote, /SUNWcgha/services, and /SUNWcgha/swdb. Add the NFS mount points for the directories that contain middleware data and services on the master node, and update the /etc/vfstab file in the root directory for the diskless node. These file systems are exported from the master node through NFS and are mounted automatically on the diskless node at boot time.

The following table explains the file systems that are exported on the master node and the corresponding mount points for the diskless nodes. For information about how to export these file systems on the master node, see To Set Up File Systems on the Master-Eligible Nodes.


Description                       Exported Mount Point on the Master Node       Mount Point for Diskless Nodes
Root file systems                 /export/root/diskless-node-name               /
Netra HA Suite data used locally  /SUNWcgha/local                               Not exported
Netra HA Suite exported data      /SUNWcgha/local/export/data                   /SUNWcgha/remote
Netra HA Suite exported data      /SUNWcgha/local/export/services/ha_3.0/opt    /SUNWcgha/services
Netra HA Suite exported data      /SUNWcgha/local/export/services/ha_3.0        /SUNWcgha/swdb

All file systems that you mount using NFS must be mounted with the options fg, hard, and intr. You can also set the noac mount option, which suppresses data and attribute caching. Use the noac option only if the impact on performance is acceptable.

  1. Log in to the master node as superuser.

  2. Edit the entries in the /export/root/diskless-node-name/etc/vfstab file.

    • If you have configured the CGTP, replace the host name of the master node with the host name associated with the floating IP address for the cgtp0 interface that is assigned to the master role, for example, master-cgtp.

      For more information, see To Create the Floating Address Triplet Assigned to the Master Role.

    • If you have not configured the CGTP, replace the host name of the master node with the host name associated with the floating IP address for the NIC0 interface that is assigned to the master role, for example, master-nic0.

  3. Define the mount points /SUNWcgha/remote, /SUNWcgha/services, and /SUNWcgha/swdb.

    • If you have configured the CGTP, use the floating IP address for the cgtp0 interface that is assigned to the master role to define the mount points:


      master-cgtp:/SUNWcgha/local/export/data -       \
      /SUNWcgha/remote        nfs     -       yes     \
      rw,hard,fg,intr
      master-cgtp:/SUNWcgha/local/export/services/ha_3.0/opt   \
      -       /SUNWcgha/services      nfs     -       yes    \
      rw,hard,fg,intr
      master-cgtp:/SUNWcgha/local/export/services/ha_3.0 -     \
      /SUNWcgha/swdb  nfs    -       yes     rw,hard,fg,intr
      

    • If you have not configured the CGTP, use the floating IP address for the NIC0 interface that is assigned to the master role.


      master-nic0:/SUNWcgha/local/export/data -       \
      /SUNWcgha/remote        nfs     -       yes     \
      rw,hard,fg,intr
      master-nic0:/SUNWcgha/local/export/services/ha_3.0/opt   \
      -       /SUNWcgha/services      nfs     -       yes    \
      rw,hard,fg,intr
      master-nic0:/SUNWcgha/local/export/services/ha_3.0 -     \
      /SUNWcgha/swdb  nfs    -       yes     rw,hard,fg,intr
      



    Note - Do not use IP addresses in the /etc/vfstab file for the diskless nodes. Instead, use logical host names. Otherwise, the pkgadd -R command fails and returns the following message: “WARNING: cannot install to or verify on <master_ip>”



  4. In the diskless node directory /export/root/diskless-node-name, create the mount points:


    # mkdir -p SUNWcgha/remote
    # mkdir -p SUNWcgha/services
    # mkdir -p SUNWcgha/swdb
    

  5. Repeat Step 2 through Step 4 for all diskless nodes.

procedure icon  To Create the nhfs.conf File for a Diskless Node

Each node in the cluster has a cluster configuration file, nhfs.conf. Create this file for the new diskless node by performing the following procedure.

  1. Log in to the master node as superuser.

  2. Create the nhfs.conf file for the diskless node:


    # cp /etc/opt/SUNWcgha/nhfs.conf.template \
    /export/root/diskless-node-name/etc/opt/SUNWcgha/nhfs.conf
    

  3. Configure the /export/root/diskless-node-name/etc/opt/SUNWcgha/nhfs.conf file.

    An example file for a diskless node on a cluster with the domain ID 250, with network interfaces eri0, eri1, and cgtp0 would be as follows:


    Node.NodeId=30
    Node.NIC0=eri0
    Node.NIC1=eri1
    Node.NICCGTP=cgtp0
    Node.UseCGTP=True
    Node.Type=Diskless
    Node.DomainId=250
    CMM.IsEligible=False
    CMM.LocalConfig.Dir=/etc/opt/SUNWcgha
    

    For more information, see the nhfs.conf(4) man page.

    If you have not installed the CGTP patches and packages, do the following:

    • Disable the Node.NIC1 and Node.NICCGTP parameters.

      To disable these parameters, add a comment mark (#) at the beginning of the line containing the parameter if this mark is not already present.

    • Configure the Node.UseCGTP and the Node.NIC0 parameters:

      • Node.UseCGTP=False

      • Node.NIC0=interface-name

        where interface-name is the name of the NIC0 interface, for example, hme0, qfe0, or eri0.
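Applied to the example file above, a configuration without the CGTP would read as follows (a sketch; the eri0 interface name is retained from the example):

    Node.NodeId=30
    Node.NIC0=eri0
    #Node.NIC1=eri1
    #Node.NICCGTP=cgtp0
    Node.UseCGTP=False
    Node.Type=Diskless
    Node.DomainId=250
    CMM.IsEligible=False
    CMM.LocalConfig.Dir=/etc/opt/SUNWcgha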

  4. Repeat Step 2 and Step 3 for all diskless nodes.


Integrating a Diskless Node Into the Cluster

You must update the /etc/hosts file on each peer node in the cluster to include the IP addresses of the diskless node. You must also update the nhfs.conf file and the cluster_nodes_table file on the master-eligible nodes to include the diskless node. See the following procedures.

procedure icon  To Update the /etc/hosts File on Each Peer Node

To declare the diskless node to all peer nodes in the cluster, perform the following procedure:

  1. Log in to the master node as superuser.

  2. Edit the /etc/hosts file to add the following lines:


    IP-address-NIC0     nic0-diskless-node-name
    IP-address-NIC1     nic1-diskless-node-name
    IP-address-cgtp0    cgtp0-diskless-node-name
    

    Now, the master node can “see” the three network interfaces of the new diskless node.
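With the example addresses used earlier in this chapter, the added lines would be:

    10.250.1.30     netraDISKLESS1-nic0
    10.250.2.30     netraDISKLESS1-nic1
    10.250.3.30     netraDISKLESS1-cgtp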

  3. Log in to the vice-master node as superuser.

  4. Repeat Step 2.

    Now, the vice-master node can “see” the three network interfaces of the new diskless node.

  5. Log in to a diskless or dataless node that is part of the cluster, if one already exists.

  6. Repeat Step 2.

    Now, the diskless node can “see” the three network interfaces of the new diskless node.

  7. Repeat Step 5 and Step 6 on all other diskless or dataless nodes that are already part of the cluster.

procedure icon  To Add the Diskless Node to the cluster_nodes_table File

Update the cluster node table file, cluster_nodes_table, and the cluster configuration file, nhfs.conf, with the addressing information for the new diskless node.

  1. Log in to the master node as superuser.

  2. Using the following format, edit the /etc/opt/SUNWcgha/cluster_nodes_table file to add an entry for the diskless node:


    #NodeId     Domain_id  Name                  Attributes
    nodeid      domainid   diskless-node-name    -
    

    The nodeid that you define in the cluster_nodes_table file must be the decimal representation of the host part of the node's IP address. For more information, see the cluster_nodes_table(4) man page.
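For example, the diskless node with the address 10.250.3.30 (host part 30) in the domain 250 would be entered as follows. The name column here assumes that the cgtp host name is used for the node:

    #NodeId     Domain_id  Name                  Attributes
    30          250        netraDISKLESS1-cgtp   -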

  3. Reload the cluster_nodes_table file on the master node:


    # /opt/SUNWcgha/sbin/nhcmmstat -c reload
    

  4. Repeat Step 2 for each diskless node you are adding to the cluster.

procedure icon  To Update the Shared Directory Configuration

Specify the shared directory configuration in the nhfs.conf file on the master node and the vice-master node. Ensure that there is no existing shared directory configuration already specified in the /etc/dfs/dfstab file.
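A quick way to confirm that no uncommented share lines remain is to count the non-comment, non-blank lines of the dfstab file. The sketch below runs against a scratch copy rather than the real /etc/dfs/dfstab:

```shell
# Scratch file standing in for /etc/dfs/dfstab: a comment and a blank line.
DFSTAB=$(mktemp)
printf '# share -F nfs -o rw /export/old\n\n' > "$DFSTAB"

# Count lines that are neither comments nor blank; 0 means no
# shared directory configuration remains in the file.
COUNT=$(grep -Ecv '^[[:space:]]*(#|$)' "$DFSTAB" || true)
echo "$COUNT"
```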

  1. Log in to the master node as superuser.

  2. Edit the /etc/opt/SUNWcgha/nhfs.conf file to add the following:


    Rnfs.Share.0=share -F nfs -o rw=nic0-diskless-node-name: \
    nic1-diskless-node-name:cgtp0-diskless-node-name, \
    root=nic0-diskless-node-name:nic1-diskless-node-name: \
    cgtp0-diskless-node-name /export/swap/diskless-node-name
    Rnfs.Share.1=share -F nfs -o rw=nic0-diskless-node-name: \
    nic1-diskless-node-name:cgtp0-diskless-node-name, \
    root=nic0-diskless-node-name:nic1-diskless-node-name: \
    cgtp0-diskless-node-name /export/root/diskless-node-name
    

  3. Update the Rnfs.Share parameter that is used to share the /SUNWcgha/local/export directory to include the cgtp0-diskless-node-name of the diskless node.

  4. Log in to the vice-master node.

  5. Repeat Step 2 and Step 3 on the vice-master node.

  6. On the master node, edit the /etc/dfs/dfstab file to remove all uncommented lines.


Starting the Cluster

To integrate the new diskless node into the cluster, delete the not_configured file and reboot the master-eligible nodes. When the Solaris Operating System and the Netra HA Suite have been booted on the diskless nodes, verify the new configuration before the cluster is restarted.

procedure icon  To Delete the not_configured File

The /export/root/diskless-node-name/etc/opt/SUNWcgha/not_configured file is automatically created during the installation of the CMM packages for the diskless node. This file enables you to reboot a cluster node during the installation and configuration process without starting the Netra HA Suite.
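The removal itself is a single rm command on the master node, sketched here in a scratch directory; $ROOT stands in for /export/root/diskless-node-name:

```shell
# Scratch directory standing in for /export/root/diskless-node-name.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/opt/SUNWcgha"
touch "$ROOT/etc/opt/SUNWcgha/not_configured"

# Delete the file so that the Netra HA Suite starts at the next boot.
rm "$ROOT/etc/opt/SUNWcgha/not_configured"
test ! -f "$ROOT/etc/opt/SUNWcgha/not_configured" && echo removed
```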

procedure icon  To Boot a Diskless Node

  1. Log in to the master node as superuser.

  2. Reboot the master node.



    Note - For detailed information about rebooting a node on the operating system version in use at your site, refer to the Netra High Availability Suite 3.0 1/08 Foundation Services Cluster Administration Guide.



  3. After the master node has completed booting, log in to the vice-master node as superuser.

  4. Reboot the vice-master node.



    Note - For detailed information about rebooting a node on the operating system version in use at your site, refer to the Netra High Availability Suite 3.0 1/08 Foundation Services Cluster Administration Guide.



  5. After the vice-master node has completed booting, get the ok prompt on the diskless node:


    # halt
    # Control-C
    telnet> send brk
    Type 'go' to resume
    ok>
    

  6. For SPARC diskless nodes, set the OpenBoot PROM parameters that exist on your system to the values below:


    ok> setenv local-mac-address? true
    ok> setenv auto-boot-retry? true
    ok> setenv diag-switch? false
    ok> setenv boot-device net:dhcp,,,,,5 net2:dhcp,,,,,5
    

    For x64 diskless nodes, refer to the hardware manual.



    Note - If you are going to use client_id on a diskless node, configure it on the diskless node. For more information, refer to the configuration information provided with the hardware.



  7. For SPARC diskless nodes, reboot the diskless node as follows:


    ok> boot
    

    For x64 diskless nodes, refer to the hardware manual.

procedure icon  To Verify the Cluster Configuration

Use the nhadm tool to verify that the diskless nodes have been configured correctly and are integrated into the cluster.

  1. Log in to the diskless node as superuser.

  2. Run the nhadm tool to validate the configuration:


    # nhadm check
    

    If all checks pass the validation, the installation of the Netra HA Suite software was successful. For more information, see the nhadm(1M) man page.