C H A P T E R  2

Installing the Software on the Master-Eligible Nodes

After you have set up the installation environment, you are ready to manually install the Solaris Operating System and the Foundation Services on the master-eligible nodes of the cluster. The master-eligible nodes take on the roles of master node and vice-master node in the cluster. For more information about the types of nodes, see the Netra High Availability Suite 3.0 1/08 Foundation Services Overview.

To manually install and configure Netra HA Suite software on the master-eligible nodes of your cluster, see the following sections:

  • Defining Disk Partitions on the Master-Eligible Nodes

  • Installing the Solaris Operating System on the Master-Eligible Nodes

  • Setting Up the Master-Eligible Nodes

  • Installing the Man Pages on the Master-Eligible Nodes

  • Installing the Netra HA Suite on the Master-Eligible Nodes

  • Configuring the Master-Eligible Node Addresses

  • Configuring the Netra HA Suite on the Master-Eligible Nodes

  • Configuring Solaris Volume Manager With Reliable NFS and Shared Disk

  • Setting Up File Systems on the Master-Eligible Nodes

  • Starting the Master-Eligible Nodes

Note - Do not use the nhcmmstat or scmadm tools to monitor the cluster during the installation procedure. Use these tools only after the installation and configuration procedures have been completed on all nodes.




Defining Disk Partitions on the Master-Eligible Nodes

The master-eligible nodes store current data for all nodes in the cluster, whether the cluster has diskless nodes or dataless nodes. One master-eligible node becomes the master node, while the other becomes the vice-master node. The vice-master node takes over the role of master if the master node fails or is taken offline for maintenance. Therefore, the disks of both of these nodes must have exactly the same partitions. Create the disk partitions of the master-eligible nodes according to the needs of your cluster. For example, the disks of the master-eligible nodes must be configured differently if diskless nodes are part of the cluster.
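You typically create the partitions with the Solaris format utility or through the JumpStart profile. As a quick check of the resulting layout, you can print the volume table of contents of the disk; a minimal sketch, assuming the system disk is c0t0d0:


    # prtvtoc /dev/rdsk/c0t0d0s2
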

The following table lists the space requirements for example disk partitions of master-eligible nodes in a cluster with diskless nodes.


TABLE 2-1   Example Disk Partitions of Master-Eligible Nodes for IP Replication 
Disk Partition | File System Name | Description | Example Size
0 | / | The root file system, boot partition, and volume management software. This partition must be mounted with the logging option. | 2 Gbytes minimum
1 | /swap | Minimum size when physical memory is less than 1 Gbyte. | 1 Gbyte
2 | overlap | Entire disk. | Size of entire disk
3 | /export | Exported file system reserved for diskless nodes. This partition must be mounted with the logging option. This partition is further partitioned if diskless nodes are added to the cluster. | 1 Gbyte + 100 Mbytes per diskless node
4 | /SUNWcgha/local | This partition is reserved for NFS status files, services, and configuration files. This partition must be mounted with the logging option. | 2 Gbytes
5 | Reserved for Reliable NFS internal use | Bitmap partition reserved for the nhcrfsd daemon. This partition is associated with the /export file system. | See TABLE 2-3
6 | Reserved for Reliable NFS internal use | Bitmap partition reserved for the nhcrfsd daemon. This partition is associated with the /SUNWcgha/local file system. | See TABLE 2-3
7 | /mypartition | For any additional applications. | The remaining space


TABLE 2-2   Example Disk Partitions of Master-Eligible Nodes for Shared Disk 
Disk Partition | File System Name | Description | Example Size
0 | / | Data partition for diskless Solaris images | 2 Gbytes minimum
1 | /swap | Data partition for middleware data and binaries | 1 Gbyte
2 | overlap | Entire disk. | Size of entire disk
7 | - | SVM replica | 20 MBytes

For replication, create a bitmap partition for each partition containing an exported, replicated file system on the master-eligible nodes. The bitmap partition must be at least the following size.

1 Kbyte + 4 Kbytes per Gbyte of data in the associated data partition

In this example, the bitmaps are created on partitions 5 and 6. The bitmap partition sizes can be as shown in the following table.


TABLE 2-3   Example Bitmap Partitions 
File System Name | Bitmap Partition | File System Size (Mbytes) | Bitmap File Size (Bytes) | Bitmap Size (Blocks)
/export | /dev/rdsk/c0t0d0s5 | 2000 | 9216 | 18
/SUNWcgha/local | /dev/rdsk/c0t0d0s6 | 1512 | 7072 | 14
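As a worked check of the sizing rule against the /export entry above (1 Kbyte plus 4 Kbytes per Gbyte of data, with 2000 Mbytes rounded to 2 Gbytes), the arithmetic can be done directly in the shell:


    # expr 1024 + 4096 \* 2
    9216


The result, 9216 bytes, corresponds to 18 blocks of 512 bytes, which matches the table.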

For information, see the Sun StorageTek Availability Suite 3.1 Remote Mirror Software Installation Guide in the Sun StorageTek Availability Suite 3.1 documentation set.



Note - In a cluster without diskless nodes, the /export file system and the associated bitmap partition are not required.




Installing the Solaris Operating System on the Master-Eligible Nodes

To install the Solaris Operating System on each master-eligible node, use the Solaris JumpStart tool on the installation server. The Solaris JumpStart tool requires the Solaris distribution to be on the installation server. For information about creating a Solaris distribution, see Netra High Availability Suite 3.0 1/08 Foundation Services Installation Guide.

procedure icon  To Install the Solaris Operating System on the Master-Eligible Nodes

  1. Log in to the installation server as superuser.

  2. Create the Solaris JumpStart environment on the installation server by using the appropriate document for the Solaris release:

    • Solaris 9 or Solaris 10 Installation Guide

    You can access these documents on http://www.oracle.com/technetwork/indexes/documentation/index.html.

  3. In the /etc/hosts file, add the names and IP addresses of the master-eligible nodes.

  4. Share the Solaris-distribution-dir and Jumpstart-dir directories by adding these lines to the /etc/dfs/dfstab file:


    share -F nfs -o ro,anon=0 Solaris-distribution-dir
    share -F nfs -o ro,anon=0 Jumpstart-dir
    

    • Solaris-distribution-dir is the directory that contains the Solaris distribution.

    • Jumpstart-dir is the directory that contains the Solaris JumpStart files.

  5. Share the directories that are defined in the /etc/dfs/dfstab file:


    # shareall
    

  6. Change to the directory where the add_install_client command is located:


    # cd Solaris-dir/Solaris_x/Tools
    

    • Solaris-dir is the directory that contains the Solaris installation software. This directory could be on a CD-ROM or in an NFS-shared directory.

    • x is 9 or 10 depending on the Solaris version installed.

  7. Run the add_install_client command for each master-eligible node.

    For information, see the add_install_client1M man page.
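    For example, you might register the first master-eligible node as an install client from the Tools directory as follows. The MAC address, server name, JumpStart directory, client name, and platform group (sun4u) shown here are placeholders for your environment:


    # ./add_install_client -e 8:0:20:aa:bb:cc \
      -c installation-server:/Jumpstart-dir \
      netraMEN1 sun4u
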

  8. Connect to the console of each master-eligible node.

  9. Boot each master-eligible node with the appropriate command using a network boot.

    If you are unsure of the appropriate command, refer to the hardware documentation for your platform. The common command for SPARC systems is shown in the following example:


    ok> boot net - install
    

    If the installation server is connected to the second Ethernet interface, type:


    ok> boot net2 - install
    

    This command installs the Solaris Operating System on the master-eligible nodes.

    For information about performing this task on the AMD64 platform, refer to the hardware documentation.


Setting Up the Master-Eligible Nodes

To prepare for the installation of the Netra HA Suite, you must configure the master-eligible nodes. You must also mount the installation server directory that contains the Netra HA Suite distribution.

procedure icon  To Configure the Master-Eligible Nodes

  1. Log in to a master-eligible node as superuser.

  2. Create the /etc/notrouter file to prevent the node from being configured as a router:


    # touch /etc/notrouter
    

  3. Modify the /etc/default/login file so that you can connect to a node from a remote system as superuser:


    # mv /etc/default/login /etc/default/login.orig
    # chmod 644 /etc/default/login.orig
    # sed '1,$s/^CONSOLE/#CONSOLE/' /etc/default/login.orig > /etc/default/login
    # chmod 444 /etc/default/login
    

  4. Disable power management:


    # touch /noautoshutdown
    

  5. Modify the .rhosts file according to the security policy for your cluster:


    # touch /.rhosts
    # cp /.rhosts /.rhosts.orig
    # echo "+ root" > /.rhosts
    # chmod 444 /.rhosts
    

  6. Set the boot parameters:


    # /usr/sbin/eeprom local-mac-address?=true
    # /usr/sbin/eeprom auto-boot?=true
    # /usr/sbin/eeprom diag-switch?=false
    

    The preceding example is for SPARC-based hardware. For the commands required on the x64 platform, refer to the hardware documentation.

  7. If you are using the Network Time Protocol (NTP) to run an external clock, configure the master-eligible node as an NTP server.

    This procedure is described in the Solaris documentation.
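    A minimal sketch of one common approach, using the standard Solaris NTP template file and service names; edit the resulting /etc/inet/ntp.conf to reference your external clock or time server before starting the service:


    # cp /etc/inet/ntp.server /etc/inet/ntp.conf
    # /etc/init.d/xntpd start


    On the Solaris 10 OS, enable the service with svcadm enable svc:/network/ntp instead of using the init script.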

  8. If your master-eligible node has an IDE disk, edit the /usr/kernel/drv/sdbc.conf file.

    Change the value of the sdbc_max_fbas parameter from 1024 to 256.
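    You can make the change in a text editor; the following sketch shows an equivalent scripted edit, assuming the parameter appears in the file as sdbc_max_fbas=1024:


    # cp /usr/kernel/drv/sdbc.conf /usr/kernel/drv/sdbc.conf.orig
    # sed 's/sdbc_max_fbas=1024/sdbc_max_fbas=256/' \
      /usr/kernel/drv/sdbc.conf.orig > /usr/kernel/drv/sdbc.conf
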

  9. Create the data/etc and data/var/dhcp directories in the /SUNWcgha/local/export/ file system on the master-eligible node:


    # mkdir -p /SUNWcgha/local/export/data/etc
    # mkdir -p /SUNWcgha/local/export/data/var/dhcp
    

    • The /SUNWcgha/local/export/data/etc directory is required for the Cluster Membership Manager (CMM).

    • The /SUNWcgha/local/export/data/var/dhcp directory is required for the Reliable Boot Service.

  10. Repeat Step 1 through Step 9 on the second master-eligible node.

procedure icon  To Mount an Installation Server Directory on the Master-Eligible Nodes

  1. Log in to the installation server as superuser.

  2. Check that the mountd and nfsd daemons are running on the installation server.

    For example, use the ps command:


    # ps -ef | grep mountd
    root   184     1  0   Aug 03 ?        0:01 /usr/lib/autofs/automountd
    root   290     1  0   Aug 03 ?        0:00 /usr/lib/nfs/mountd
    root  2978  2974  0 17:40:34 pts/2    0:00 grep mountd
    # ps -ef | grep nfsd
    root   292     1  0   Aug 03 ?        0:00 /usr/lib/nfs/nfsd -a 16
    root  2980  2974  0 17:40:50 pts/2    0:00 grep nfsd
    # 
    

    If a process ID is not returned for the mountd and nfsd daemons, start the NFS daemons as follows:

    On the Solaris 9 OS, use the following command:


    # /etc/init.d/nfs.server start
    

    On the Solaris 10 OS, use the following command:


    # svcadm enable svc:/network/nfs/server
    

  3. Share the directory containing the distributions for the Netra HA Suite and the Solaris Operating System by adding the following line to the /etc/dfs/dfstab file:


    share -F nfs -o ro,anon=0 software-distribution-dir
    

    where software-distribution-dir is the directory that contains the Netra HA Suite packages and Solaris patches.

  4. Share the directories that are defined in the /etc/dfs/dfstab file:


    # shareall
    

  5. Log in to a master-eligible node as superuser.

  6. Create the mount point directories Solaris and NetraHASuite on the master-eligible node:


    # mkdir /NetraHASuite
    # mkdir /Solaris
    

  7. Mount the Netra HA Suite and Solaris distribution directories from the installation server:


    # mount -F nfs installation-server-IP-address:/software-distribution-dir/Product/NetraHASuite_3.0/
    FoundationServices/Solaris_x/sparc /NetraHASuite
    # mount -F nfs installation-server-IP-address:/Solaris-distribution-dir /Solaris
    

    • installation-server-IP-address is the IP address of the installation server interface that is connected to the cluster network.

    • software-distribution-dir is the directory that contains the Netra HA Suite packages. (Note that in the preceding example, this line wraps, due to space constraints; however, when typing this path, it should contain no spaces or line breaks.)

    • x is the Solaris OS version.

    • Solaris-distribution-dir is the directory that contains the Solaris distribution.

  8. Repeat Step 5 through Step 7 on the other master-eligible node.

procedure icon  To Install Solaris Patches

After you have completed the Solaris installation, you must install the Solaris patches delivered in the Netra HA Suite distribution. See the Netra High Availability Suite 3.0 1/08 Foundation Services README for the list of patches.



Note - Some of these patches are required for CGTP. If you do not plan to install CGTP, do not install the CGTP patches. For more information about the impact of not installing CGTP, see Choosing a Cluster Network.



  1. Log in to each master-eligible node as superuser.

  2. Install the necessary Solaris patches on each master-eligible node:


    # patchadd -M /NetraHASuite/Patches/ patch-number
    


Installing the Man Pages on the Master-Eligible Nodes

procedure icon  To Install the Man Pages on the Master-Eligible Nodes

  1. Log in to a master-eligible node as superuser.

  2. Add the man page package:


    # pkgadd -d /NetraHASuite/Packages/ SUNWnhas-manpages
    

    The man pages are installed in the /opt/SUNWcgha/man directory. To access the man pages, see the Netra High Availability Suite 3.0 1/08 Foundation Services Reference Manual.

  3. Repeat Step 1 and Step 2 on the other master-eligible node.


Installing the Netra HA Suite on the Master-Eligible Nodes

The following procedures explain how to install the Netra HA Suite on the master-eligible nodes:

procedure icon  To Install the Cluster Membership Manager

procedure icon  To Install the nhadm Tool

The nhadm tool is a cluster administration tool that can verify that the installation was completed correctly. You can run this tool when your cluster is up and running.

procedure icon  To Install the Carrier Grade Transport Protocol

CGTP enables a redundant network for your cluster.



Note - If you do not require CGTP, do not install the CGTP packages. For more information about the impact of not installing CGTP, see Choosing a Cluster Network.



  1. Before you install the CGTP packages, make sure that you have installed the Solaris patches for CGTP.

    See To Install Solaris Patches.

  2. As superuser, install the following CGTP packages on each master-eligible node:


    # pkgadd -d /NetraHASuite/Packages/ SUNWnhas-cgtp SUNWnhas-cgtp-cluster
    

procedure icon  To Install the Node State Manager

procedure icon  To Install the External Address Manager

  1. Become superuser.

  2. Type the following command:


    # pkgadd -d /NetraHASuite/Packages/ SUNWnhas-eam

    For information on configuring the EAM, see the Netra High Availability Suite 3.0 1/08 Foundation Services Overview.

procedure icon  To Install the Reliable NFS When Using IP-Based Replication

Install the Reliable NFS packages to enable the Reliable NFS service and data-replication features of Netra HA Suite. For a description of the Reliable NFS service, see “File Sharing and Data Replication” in Netra High Availability Suite 3.0 1/08 Foundation Services Overview. The Reliable NFS feature is enabled by the StorEdge Network Data Replicator (SNDR), which is provided with the Reliable NFS packages.



Note - SNDR is supplied for use only with the Netra HA Suite. Any use of this product other than on a Netra HA Suite cluster is not supported.



  1. As superuser, install the following Reliable NFS and SNDR packages on a master-eligible node in the following order:


    # pkgadd -d /NetraHASuite/Packages/ SUNWscmr SUNWscmu SUNWspsvr \
      SUNWspsvu SUNWrdcr SUNWrdcu SUNWnhas-rnfs-client \
      SUNWnhas-rnfs-server
    



    Note - During the installation of the SNDR package SUNWscmu, you might be asked to specify a database configuration location. You can choose to use the SNDR directory that is automatically created. This directory is of the format /sndrxy where x.y is the version of the SNDR release.



  2. Repeat Step 1 on the second master-eligible node.

  3. Install the SNDR patches on each master-eligible node.

    See the Netra High Availability Suite 3.0 1/08 Foundation Services README for a list of SNDR patches.

  4. Edit the /usr/kernel/drv/rdc.conf file on each master-eligible node to change the value of the rdc_bitmap_mode parameter.

    To have changes to the bitmaps written on the disk at each update, change the value of the rdc_bitmap_mode parameter to 1.

    To have changes to the bitmaps stored in memory at each update, change the value of the rdc_bitmap_mode parameter to 2. In this case, changes are written on the disk when the node is shut down. However, if both master-eligible nodes fail, both disks must be synchronized.

    For example: rdc_bitmap_mode=2.
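    As with sdbc.conf, you can make the change in a text editor. A quick way to confirm the setting afterwards is shown below; the exact formatting of the line in your rdc.conf file may differ:


    # grep rdc_bitmap_mode /usr/kernel/drv/rdc.conf
    rdc_bitmap_mode=2;
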

procedure icon  To Install the Reliable NFS When Using Shared Disk

Install the Reliable NFS packages to enable the Reliable NFS service and disk mirroring features of Netra HA Suite. For a description of the Reliable NFS service, see “File Sharing and Data Replication” in Netra High Availability Suite 3.0 1/08 Foundation Services Overview.

  1. As superuser, install the following Reliable NFS packages on a master-eligible node in the following order:


    # pkgadd -d /NetraHASuite/Packages/ SUNWnhas-rnfs-client \
      SUNWnhas-rnfs-server
    

  2. Repeat Step 1 on the second master-eligible node.

procedure icon  To Install the Node Management Agent

Install the Node Management Agent (NMA) packages to gather statistics on Reliable NFS, CGTP, and CMM. For a description of the NMA, see the Netra High Availability Suite 3.0 1/08 Foundation Services Overview.

The NMA and its supporting Java DMK software consist of four packages. Two of these packages are installed locally on both master-eligible nodes. The other two packages are installed only on the first master-eligible node, with a shared root path, and are NFS-mounted as shared middleware software. The first master-eligible node is the node that is booted first after you complete installing and configuring all the services on the master-eligible nodes.

The NMA requires the Java DMK packages, SUNWjsnmp and SUNWjdrt, to run. For information about installing the entire Java DMK software, see the Java Dynamic Management Kit 5.0 Installation Guide.

The following table describes the packages that are required on each type of node.


Package | Description | Installed On
SUNWjsnmp | Java DMK 5.0 Simple Network Management Protocol (SNMP) manager API classes | Both master-eligible nodes
SUNWjdrt | Java DMK 5.0 dynamic management runtime classes | First master-eligible node
SUNWnhas-nma-local | NMA configuration and startup script | Both master-eligible nodes
SUNWnhas-nma-shared | NMA shared component | First master-eligible node

Follow this procedure to install and configure the NMA.

  1. As superuser, install the following NMA package and Java DMK package on both master-eligible nodes:


    # pkgadd -d /NetraHASuite/Packages/ SUNWnhas-nma-local SUNWjsnmp
    



    Note - If you plan to use shared disks, do not advance to Step 2 until the metadevice used for shared disks has been created. See Step 2 in To Set Up File Systems on the Master-Eligible Nodes.



  2. On the first master-eligible node, install the following shared Java DMK package and NMA packages:


    # pkgadd -d /NetraHASuite/Packages/ \
      -M -R /SUNWcgha/local/export/services/ha_3.0 \
      SUNWjdrt SUNWnhas-nma-shared
    

    The packages are installed with a predefined root path in the /SUNWcgha/local/export/services/ha_3.0 directory.



    Note - Ignore error messages related to packages that have not been installed. Always answer Y to continue the installation.



  3. To configure the NMA, see the Netra High Availability Suite 3.0 1/08 Foundation Services NMA Programming Guide.

procedure icon  To Install the Process Monitor Daemon


Configuring the Master-Eligible Node Addresses

Before assigning IP addresses to the network interfaces of the master-eligible nodes, see “Cluster Addressing and Networking” in the Netra High Availability Suite 3.0 1/08 Foundation Services Overview.

In the Netra HA Suite, three IP addresses must be configured for each master-eligible node:

  • An address for the first physical network interface, NIC0

  • An address for the second physical network interface, NIC1

  • An address for the virtual CGTP interface, cgtp0

The IP addresses can be IPv4 addresses of any class with the following structure:


network_id.host_id

When you configure the IP addresses, make sure that the node ID, nodeid, is the decimal equivalent of host_id. For example, a master-eligible node with the addresses 10.250.1.10, 10.250.2.10, and 10.250.3.10 has a host_id of 10, so its nodeid must be 10. You define the nodeid in the cluster_nodes_table file and the nhfs.conf file. For more information, see Configuring the Netra HA Suite on the Master-Eligible Nodes.

The following procedures explain how to create and configure IP addresses for master-eligible nodes.

Examples in these procedures use IPv4 Class C addresses.

procedure icon  To Create the IP Addresses for the Network Interfaces

  1. Log in to each master-eligible node as superuser.

  2. In the /etc/hosts file on each master-eligible node, add the three IP addresses, followed by the name of each interface:


    10.250.1.10     netraMEN1-nic0
    10.250.2.10     netraMEN1-nic1
    10.250.3.10     netraMEN1-cgtp
    10.250.1.20     netraMEN2-nic0
    10.250.2.20     netraMEN2-nic1
    10.250.3.20     netraMEN2-cgtp
    10.250.1.1      master-nic0
    10.250.2.1      master-nic1
    10.250.3.1      master-cgtp
    

    In the rest of this book, the node netraMEN1 is the first master-eligible node. The first master-eligible node is the node that is booted first after you complete installing the Netra HA Suite. The node netraMEN2 is the second master-eligible node that is booted after the first master-eligible node has completed booting.

procedure icon  To Update the Network Files

In the /etc directory on each master-eligible node, you must create a hostname file for each of the three interfaces. In addition, update the nodename and netmasks files.

  1. Create or update the file /etc/hostname.NIC0 for the NIC0 interface.

    This file must contain the name of the master-eligible node on the first interface, for example, netraMEN1-nic0.

  2. Create or update the file /etc/hostname.NIC1 for the NIC1 interface.

    This file must contain the name of the master-eligible node on the second interface, for example, netraMEN1-nic1.

  3. Create or update the file /etc/hostname.cgtp0 for the cgtp0 interface.

    This file must contain the name of the master-eligible node on the cgtp0 interface, for example, netraMEN1-cgtp.

  4. Update the /etc/nodename file with the name of the master-eligible node.

    • If you have installed CGTP, add the name set on the CGTP interface, for example, netraMEN1-cgtp.

    • If you have not installed CGTP, add the name set on the NIC0 interface, for example, netraMEN1-nic0.

  5. Create a /etc/netmasks file with a netmask of 255.255.255.0 for all subnetworks in the cluster.
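    For the example Class C subnets used in this chapter, the /etc/netmasks entries would be:


    10.250.1.0      255.255.255.0
    10.250.2.0      255.255.255.0
    10.250.3.0      255.255.255.0
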

procedure icon  To Configure External IP Addresses

To configure external IP addresses for a master-eligible node, the node must have an extra physical network interface or logical network interface. An extra physical network interface is an unused interface on an existing Ethernet card or a supplemental Ethernet card, for example, hme2. A logical network interface is an interface that is configured on an existing Ethernet card, for example, hme1:101.

procedure icon  To Configure an External Floating Address Using a Single Link



Note - The procedure described in this section requires the installation of the External Address Manager.



  1. If required, add the host name associated with the external floating address to /etc/hosts on each master-eligible node.


    129.253.1.13     ext-float

  2. If required, add the associated netmask for the subnetwork to /etc/netmasks on each master-eligible node.


    129.253.1.0     255.255.255.0

  3. Create or update the file /etc/hostname.interface for the interface supporting the external floating address on each master-eligible node.

    If the file does not exist, create it with the following lines (the file must contain at least two lines for the arguments to be taken into account):


    ext-float netmask + broadcast +

    down


    If the file already exists, add the following line:


    addif ext-float netmask + broadcast + down

  4. Configure the external floating address parameter in the nhfs.conf file on each master-eligible node.

    For more information, see the nhfs.conf4 man page.

procedure icon  To Configure an External Floating Address Using Redundant Links Managed by IPMP



Note - The procedure described in this section requires the installation of the External Address Manager.



To configure the external floating address, the node must have two network interfaces that are not already used for a CGTP network. If no spare network interfaces are available, consider using a different VLAN. Each interface must be configured with a special IP address used for monitoring (a test address). The external floating address must be configured on one of these interfaces, and all of these IP addresses must be part of the same subnetwork.

  1. If required, add the host names associated with the test IP addresses and the external floating address to /etc/hosts on each master-eligible node.

    IP addresses for testing must be different on each node.


    129.253.1.11     test-ipmp-1

    129.253.1.12     test-ipmp-2

    129.253.1.30     ipmp-float


  2. If required, add the associated netmask for the subnetwork to /etc/netmasks on each master-eligible node.


    129.253.1.0     255.255.255.0

  3. Create or update the file /etc/hostname.interface for the first interface on each master-eligible node.

    The file must contain the definition of the test IP address for this interface and the external floating address in this format:

    test-IP-address netmask + broadcast + -failover deprecated group group-name up
    addif floating-address netmask + broadcast + failover down

    For example:


    test-ipmp-1 netmask + broadcast + -failover deprecated group ipmp-group up

    addif ipmp-float netmask + broadcast + failover down


  4. Create or update the file /etc/hostname.interface for the second interface on each master-eligible node.

    The file must contain the definition of the test IP address for this interface in this format:

    test-IP-address netmask + broadcast + -failover deprecated group group-name up

    For example:


    test-ipmp-2 netmask + broadcast + -failover deprecated group ipmp-group up

  5. Configure the external floating address parameters (floating address and IPMP group to be monitored) in the nhfs.conf file on each master-eligible node.

    For more information, see the nhfs.conf4 man page.


Configuring the Netra HA Suite on the Master-Eligible Nodes

Configure the services that are installed on the master-eligible nodes by modifying the nhfs.conf and the cluster_nodes_table files on each master-eligible node in the cluster. Master-eligible nodes have read-write access to these files. Diskless nodes or dataless nodes in the cluster have read-only access to these files.

Configuring the nhfs.conf File

The following procedures describe how to configure the nhfs.conf file.

procedure icon  To Configure the nhfs.conf File Properties

The nhfs.conf file enables you to configure the node after you have installed the Netra HA Suite on the node. This file provides parameters for configuring the node, CMM, Reliable NFS, the direct link between the master-eligible nodes, the Node State Manager, and daemon scheduling.

  1. As superuser, copy the template /etc/opt/SUNWcgha/nhfs.conf.template file:


    # cp /etc/opt/SUNWcgha/nhfs.conf.template \
      /etc/opt/SUNWcgha/nhfs.conf
    

  2. For each property that you want to change, uncomment the associated parameter (delete the comment mark at the beginning of the line).

  3. Modify the value of each parameter that you want to change.

    For descriptions of each parameter, see the nhfs.conf4 man page.

    If you have not installed the CGTP patches and packages, do the following:

    • Disable the Node.NIC1 and Node.NICCGTP parameters.

    • Configure the Node.UseCGTP and the Node.NIC0 parameters:

      • Node.UseCGTP=False

      • Node.NIC0=interface-name

        where interface-name is the name of the NIC0 interface, for example, hme0, qfe0, or eri0.
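    For example, the relevant part of nhfs.conf on a cluster that does not use CGTP might look like the following. The interface names hme0 and hme1 and the cgtp0 value are only illustrations; the commented-out lines show the disabled NIC1 and CGTP parameters:


    Node.NIC0=hme0
    Node.UseCGTP=False
    #Node.NIC1=hme1
    #Node.NICCGTP=cgtp0
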

procedure icon  To Create the Floating Address Triplet Assigned to the Master Role

The floating address triplet is a set of three logical addresses that are active on the node holding the master role. When the cluster is started, the floating address triplet is activated on the master node. In the event of a switchover or a failover, these addresses are activated on the new master node. Simultaneously, the floating address triplet is deactivated automatically on the old master node, that is, the new vice-master node.

  •   To create the floating address triplet, you must define the master ID in the nhfs.conf file.

    The floating address triplet is calculated from the master ID, the netmask, and the network interface addresses.

    For more information about the floating address triplet of the master node, see “Cluster Addressing and Networking” in Netra High Availability Suite 3.0 1/08 Foundation Services Overview.
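    For example, with the Class C addresses used in this chapter and a master ID of 1 (the value 1 is only an illustration), the floating address triplet would be 10.250.1.1, 10.250.2.1, and 10.250.3.1, which correspond to the master-nic0, master-nic1, and master-cgtp entries in the /etc/hosts example earlier in this chapter.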

procedure icon  To Configure a Direct Link Between the Master-Eligible Nodes

You can configure a direct link between the master-eligible nodes to prevent a split brain cluster. A split brain cluster is a cluster that has two master nodes because the network between the master node and the vice-master node has failed.

  1. Connect the serial ports of the master-eligible nodes.

    For an illustration of the connection between the master-eligible nodes, see the Netra High Availability Suite 3.0 1/08 Foundation Services Getting Started Guide.

  2. Configure the direct link parameters.

    For more information, see the nhfs.conf4 man page.

Creating the cluster_nodes_table File

The cluster_nodes_table file contains the configuration data for each node in the cluster. Create this file on each master-eligible node. Once the cluster is running, this file is accessed by all nodes in the cluster. Therefore, the cluster_nodes_table on both master-eligible nodes must be exactly the same.

procedure icon  To Create the cluster_nodes_table File

  1. Log in to a master-eligible node as superuser.

  2. Copy the template file from /etc/opt/SUNWcgha/cluster_nodes_table.template to /etc/opt/SUNWcgha/cluster_nodes_table.

    You can save the cluster_nodes_table file in a directory other than the /etc/opt/SUNWcgha directory. By default, the cluster_nodes_table file is located in the /etc/opt/SUNWcgha directory.

  3. Edit the cluster_nodes_table file to add a line for each node in the cluster.

    For more information, see the cluster_nodes_table4 man page.

  4. Edit the nhfs.conf file to specify the directory that contains the cluster_nodes_table file:


    CMM.LocalConfig.Dir=/etc/opt/SUNWcgha
    

    For more information, see the nhfs.conf4 man page.

  5. Log in to the other master-eligible node as superuser.

  6. Copy the /etc/opt/SUNWcgha/cluster_nodes_table file from the first master-eligible node to the same directory on the second master-eligible node.

    If you saved the cluster_nodes_table file in a directory other than /etc/opt/SUNWcgha, copy the file to that other directory on the second master-eligible node. The cluster_nodes_table file must be available in the same directory on both master-eligible nodes.

  7. Repeat Step 4 on the second master-eligible node.



    Note - When there is a change in the attribute of a node, the cluster_nodes_table file is updated by the nhcmmd daemon on each master-eligible node. If a switchover or failover occurs, the diskless nodes or dataless nodes in the cluster access the cluster_nodes_table file on the new master node. Only master-eligible nodes can write information to the cluster_nodes_table file.




Configuring Solaris Volume Manager With Reliable NFS and Shared Disk

procedure icon  To Configure Solaris Volume Manager for Use with Reliable NFS and a Shared Disk

This procedure uses the following values for its code examples:

  • c0t0d0 is the system disk

  • c1t8d0 is the primary shared disk

  • c1t9d0 is the secondary shared disk used to mirror the primary one

Detailed information about Solaris VM and how to set up a shared disk can be found in the Solaris Volume Manager Administration Guide.

  1. On the first master-eligible node, set the node name to the name of the host associated with the CGTP interface:


    # uname -S netraMEN1-cgtp

  2. Repeat for the second master-eligible node:


    # uname -S netraMEN2-cgtp

  3. On the first master-eligible node, restart the rpcbind daemon to make it use the new node name:


    # pkill -x -u 0 rpcbind

    # /usr/sbin/rpcbind -w


  4. Repeat Step 3 on the second master-eligible node.

  5. Create the database replicas for the dedicated root disk slice on each master-eligible node:


    # metadb -a -c 3 -f /dev/rdsk/c0t0d0s7

  6. Verify that the /etc/nodename file on the second master-eligible node matches the name of the CGTP interface (or the name of the private network interface, if CGTP is not used), as shown for the first master-eligible node in Step 9:


    # cat /etc/nodename

    netraMEN2-cgtp


  7. (Optional) If you plan to use CGTP, configure a temporary network interface on the first private network and make it match the name and IP address of the CGTP interface on the first master-eligible node:


    # ifconfig hme0:111 plumb

    # ifconfig hme0:111 10.250.3.10 netmask + broadcast + up

    Setting netmask of hme0:111 to 255.255.255.0


  8. (Optional) If you plan to use CGTP, repeat Step 7 for the second master-eligible node:


    # ifconfig hme0:111 plumb

    # ifconfig hme0:111 10.250.3.20 netmask + broadcast + up

    Setting netmask of hme0:111 to 255.255.255.0


  9. On the first master-eligible node, verify that the /etc/nodename file matches the name of the CGTP interface (or the name of the private network interface, if CGTP is not used):


    # cat /etc/nodename

    netraMEN1-cgtp




    Note - The rest of the procedure only applies to the first master-eligible node.



  10. Create the Solaris VM diskset that manages the shared disks:


    # metaset -s nhas_diskset -a -h netraMEN1-cgtp netraMEN2-cgtp

  11. Remove any possible existing SCSI3-PGR keys from the shared disks.

    In the following example, there are no keys on the disks:


    # /opt/SUNWcgha/sbin/nhscsitool /dev/rdsk/c1t8d0s2

    Performing a SCSI bus reset ... done.

    There are no keys on disk '/dev/rdsk/c1t8d0s2'.

    # /opt/SUNWcgha/sbin/nhscsitool /dev/rdsk/c1t9d0s2

    Performing a SCSI bus reset ... done.

    There are no keys on disk '/dev/rdsk/c1t9d0s2'.


  12. On x64 master-eligible nodes, repartition the shared disk as described in To Partition a Shared Disk for Diskless Support.

  13. Add the names of the shared disks to the previously created diskset:


    # metaset -s nhas_diskset -a /dev/rdsk/c1t8d0 /dev/rdsk/c1t9d0



    Note - This step will reformat the shared disks, and all existing data on the shared disks will be lost.



  14. Verify that the SVM configuration is set up correctly:


    # metaset

    Set name = nhas_diskset, Set number = 1

    Host                   Owner
      netraMEN1-cgtp        Yes
      netraMEN2-cgtp

    Drive     Dbase
    c1t8d0    Yes
    c1t9d0    Yes




    Note - If you do not plan to install diskless nodes, skip to Step 17



  15. On SPARC master-eligible nodes, repartition the shared disk as described in To Partition a Shared Disk for Diskless Support.

  16. Create the metadevices for partition mapping and mirroring.

    Create the metadevices on the primary disk:


    # metainit -s nhas_diskset d11 1 1 /dev/rdsk/c1t8d0s0

    # metainit -s nhas_diskset d12 1 1 /dev/rdsk/c1t8d0s1


    Create the metadevices on the secondary disk:


    # metainit -s nhas_diskset d21 1 1 /dev/rdsk/c1t9d0s0

    # metainit -s nhas_diskset d22 1 1 /dev/rdsk/c1t9d0s1


    Create the mirror sets:


    # metainit -s nhas_diskset d1 -m d11

    # metattach -s nhas_diskset d1 d21

    # metainit -s nhas_diskset d2 -m d12

    # metattach -s nhas_diskset d2 d22




    Note - This ends the section specific to the configuration for diskless installation. To complete diskless installation, jump to Step 19.



  17. Create your specific Solaris VM RAID configuration (refer to the Solaris Volume Manager Administration Guide for information on specific configurations).

    In the following example, the two disks form a mirror called d0:


    # metainit -s nhas_diskset d18 1 1 /dev/rdsk/c1t8d0s0

    # metainit -s nhas_diskset d19 1 1 /dev/rdsk/c1t9d0s0

    # metainit -s nhas_diskset d0 -m d18

    # metattach -s nhas_diskset d0 d19


  18. Create soft partitions to host the shared data.

    These soft partitions are the file systems managed by Reliable NFS. In the following example, d1 and d2 are managed by Reliable NFS.


    # metainit -s nhas_diskset d1 -p d0 2g

    # metainit -s nhas_diskset d2 -p d0 2g


    The devices managed by Reliable NFS are now accessible through /dev/md/nhas_diskset/dsk/d1 and /dev/md/nhas_diskset/dsk/d2.

  19. Create the file systems on the soft partitions:


    # newfs /dev/md/nhas_diskset/rdsk/d1

    # newfs /dev/md/nhas_diskset/rdsk/d2


  20. Create the following directories on both master-eligible nodes:


    # mkdir /SUNWcgha

    # mkdir /SUNWcgha/local


  21. Mount the file systems on the metadevice on the first node:


    # mount /dev/md/nhas_diskset/dsk/d1 /export

    # mount /dev/md/nhas_diskset/dsk/d2 /SUNWcgha/local


procedure icon  To Partition a Shared Disk for Diskless Support

  1. Retrieve disk geometry information using the prtvtoc command.

    A known problem in the diskless management tool, smosservice, prevents the creation of the diskless environment on a metadevice. To avoid this problem, mount the /export directory on a physical partition during the diskless environment creation.

    To support access to /export through a metadevice while still allowing access through a physical partition, the disk must be repartitioned in a particular way after it has been inserted into a diskset. This repartitioning preserves the data already stored by Solaris VM, because the created partitions are not formatted.

    The following table gives an example of the prtvtoc command output after inserting a disk into a diskset.


    # prtvtoc /dev/rdsk/c1t8d0s0
    * /dev/rdsk/c1t8d0s0 partition map
    *
    * Dimensions:
    * 512 bytes/sector
    * 107 sectors/track
    * 27 tracks/cylinder
    * 2889 sectors/cylinder
    * 24622 cylinders
    * 24620 accessible cylinders
    *
    * Flags:
    * 1: unmountable
    * 10: read-only
    *
    *                           First      Sector       Last
    * Partition  Tag  Flags    Sector       Count      Sector   Mount Directory
             0     4    00       8667    71118513    71127179
             7     4    01          0        8667        8666

  2. Create the data file using the fmthard command.

    The fmthard command (see its man page for more information) is used to create physical partitions. It requires you to input a data file describing the partitions to be created. There is one entry per partition, using the following format:


    slice #  tag  flag  starting sector  size in sectors

    The starting sector and size in sectors values must be rounded to a cylinder boundary and computed as follows:

    • starting sector = starting sector of the previous slice + size in sectors of the previous slice

    • size in sectors = the required partition size in bytes divided by bytes per sector, with the result rounded up to a multiple of sectors per cylinder

      Three particular slices must be created:

      • Slice 7 containing the meta-database (also called metadb). This slice must be created with the same size as the slice created by Solaris VM, so that it overlaps the existing slice exactly (and preserves its data).

      • A slice to support /export (diskless environment)

      • A slice to support /SUNWcgha/local (shared Netra HA Suite packages and files)

        Other slices can be added depending on your application requirements. The following table gives an example for partitioning:


        Slice Number Usage Size in MBytes
        0 /export 4096
        1 /SUNWcgha/local 2048
        7 metadb Not Applicable

        The following slice constraints must be respected:

        • Slice 7 (metadb) is the first slice of the disk, starting at sector 0 and keeping the same size as the existing slice 7, with tag 4 (user partition) and flag 0x01 (unmountable).

        • Slice 2: On SPARC, slice 2 maps the whole disk: size in bytes = accessible cylinders * sectors per cylinder * bytes per sector, with tag 5 (backup) and flag 0x01 (unmountable). On x64, slice 2 maps the whole disk except slice 7, with tag 5 (backup) and flag 0x01 (unmountable). This particular partitioning might cause a warning message to be displayed when you format the disk.

        • Other slices use tag 0 (unassigned) and flag 0x00 (mountable read/write).

          An example of computing for slice 0 (located after slice 7):

          • Starting sector = (0 + 8667) = 8667

          • Size in bytes = 4096 * 1024 * 1024 = 4294967296

          • Size in sectors = 4294967296 / 512 = 8388608

          • Size in sectors rounded up to a cylinder boundary (2889 sectors per cylinder) = 8389656


    These values result in the following content in the data file (datafile.txt). On SPARC, the content of the file appears as follows:


            7 4 01 0 8667

            0 0 00 8667 8389656

            1 0 00 8398323 4194828

            2 5 01 0 71127180


    On x64, the content of the file appears as follows:


            7 4 01 0 8667

            0 0 00 8667 8389656

            1 0 00 8398323 4194828

            2 5 01 8667 71118513


    Note that this example leaves some unallocated space on the disk that can be used for user-specific partitions.

  3. Re-partition the disk.

    Execute the following commands for the primary and for the secondary disk:


    # fmthard -s datafile.txt /dev/rdsk/c1t8d0s2

    # fmthard -s datafile.txt /dev/rdsk/c1t9d0s2



Setting Up File Systems on the Master-Eligible Nodes

procedure icon  To Set Up File Systems on the Master-Eligible Nodes

  1. Ensure that the following directories exist on the first master-eligible node:


    # mkdir /SUNWcgha/local/export
    # mkdir /SUNWcgha/local/export/data
    # mkdir /SUNWcgha/local/export/services
    # mkdir /SUNWcgha/local/export/services/NetraHASuite_version/opt
    

    where NetraHASuite_version is the version of the Netra HA Suite you install, for example, ha_3.0.

    These directories contain packages and data shared between the master-eligible nodes.

  2. If you are using shared disks, install the shared Java DMK package and NMA packages onto the first master-eligible node as explained in Step 2 of To Install the Node Management Agent.

  3. Create the following mount points on each master-eligible node:


    # mkdir /SUNWcgha/services
    # mkdir /SUNWcgha/remote
    # mkdir /SUNWcgha/swdb
    

    These directories are used as mount points for the directories that contain shared data.

  4. Add the following lines to the /etc/vfstab file on each master-eligible node:

    • If you have configured the CGTP, use the floating IP address for the cgtp0 interface that is assigned to the master role to define the mount points.


      master-cgtp:/SUNWcgha/local/export/data -       \
      /SUNWcgha/remote  nfs     -       no    rw,hard,fg,intr,noac
      master-cgtp:/SUNWcgha/local/export/services/ha_3.0/opt  \
      -    /SUNWcgha/services   nfs     -   no   rw,hard,fg,intr,noac
      master-cgtp:/SUNWcgha/local/export/services/ha_3.0 -     \
      /SUNWcgha/swdb  nfs    -       no     rw,hard,fg,intr,noac
      

      where master-cgtp is the host name associated with the floating address of the cgtp0 interface of the master node. For more information, see To Create the Floating Address Triplet Assigned to the Master Role.

    • If you have not configured the CGTP, use the floating IP address for the NIC0 interface that is assigned to the master role.


      master-nic0:/SUNWcgha/local/export/data -       \
      /SUNWcgha/remote  nfs     -       no    rw,hard,fg,intr,noac
      master-nic0:/SUNWcgha/local/export/services/ha_3.0/opt  \
      -    /SUNWcgha/services   nfs     -   no   rw,hard,fg,intr,noac
      master-nic0:/SUNWcgha/local/export/services/ha_3.0 -     \
      /SUNWcgha/swdb  nfs    -       no     rw,hard,fg,intr,noac
      

      where master-nic0 is the host name associated with the floating address of the NIC0 interface of the master node. For more information, see To Create the Floating Address Triplet Assigned to the Master Role.



    Note - The noac mount option suppresses data and attribute caching. Use the noac option only if the impact on performance is acceptable.



  5. Check the following in the /etc/vfstab file:

    • The mount at boot field is set to no for all RNFS-managed partitions.

      Example for SNDR:


      /dev/dsk/c0t0d0s1 /dev/rdsk/c0t0d0s1 /SUNWcgha/local ufs 2 no logging
      

      Example for Solaris VM:


      /dev/md/nhas_diskset/dsk/d1 /dev/md/nhas_diskset/rdsk/d1 /SUNWcgha/local ufs 2 no logging
      

    • The root file system (/) has the logging option.


      /dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0  /  ufs  1  no  logging
      



    Note - Only partitions identified in the nhfs.conf file can be managed by RNFS. For more information about the nhfs.conf file, see Configuring the nhfs.conf File.



  6. (Only applicable to SNDR) Create the file systems on the replicated partitions:


    # newfs /dev/rdsk/c0t0d0s3

    # newfs /dev/rdsk/c0t0d0s4


procedure icon  To Verify File Systems Managed by Reliable NFS

The Reliable NFS daemon, nhcrfsd, is installed on each master-eligible node. To determine which partitions are managed by this daemon, do the following:
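For a quick overview, you can review the slices declared in the nhfs.conf file and the currently mounted file systems; a minimal sketch, assuming the default nhfs.conf location and the example file systems used in this chapter:


    # more /etc/opt/SUNWcgha/nhfs.conf
    # df -k /export /SUNWcgha/local
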


Starting the Master-Eligible Nodes

procedure icon  To Delete the not_configured File

The /etc/opt/SUNWcgha/not_configured file was installed automatically when you installed the CMM packages. This file enables you to reboot a cluster node during the installation process without starting the Netra HA Suite.
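Before the reboots described in the next procedure, remove the file on each master-eligible node so that the Netra HA Suite services start at boot; a minimal sketch:


    # rm /etc/opt/SUNWcgha/not_configured
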

procedure icon  To Boot the Master-Eligible Nodes

  1. Unmount the shared file system, /NetraHASuite, on each master-eligible node by using the umount command.

    See the umount1M man page and To Mount an Installation Server Directory on the Master-Eligible Nodes.

  2. Reboot the first master-eligible node, which becomes the master node:



    Note - For detailed information about rebooting a node on the operating system version in use at your site, refer to the Netra High Availability Suite 3.0 1/08 Foundation Services Cluster Administration Guide.



  3. After the first master-eligible node has completed rebooting, reboot the second master-eligible node:



    Note - For detailed information about rebooting a node on the operating system version in use at your site, refer to the Netra High Availability Suite 3.0 1/08 Foundation Services Cluster Administration Guide.



    This node becomes the vice-master node. To check the role of each node in the cluster, see the nhcmmrole1M man page.

  4. Create the INST_RELEASE file to allow patching of shared packages:


    # /opt/SUNWcgha/sbin/nhadm confshare
    

procedure icon  To Verify the Cluster Configuration

Use the nhadm tool to verify that the master-eligible nodes have been configured correctly.

  1. Log in to the master-eligible node as superuser.

  2. Run the nhadm tool to validate the configuration:


    # nhadm check starting
    

    If all checks pass the validation, the installation of the Netra HA Suite was successful. See the nhadm1M man page.