CHAPTER 4
After you have installed and configured the master-eligible nodes, you can add diskless nodes and dataless nodes to the cluster.
To add a dataless node to the cluster, see the following sections:
Perform the following procedures before installing and configuring a dataless node:
To connect a dataless node to a cluster, connect the two Ethernet interfaces of the dataless node to the two switches of the cluster. Connect NIC0 to switch 1 and NIC1 to switch 2.
For more information, see the Netra High Availability Suite 3.0 1/08 Foundation Services Getting Started Guide.
Create the disk partitions of the dataless node according to the requirements of your cluster.
The following table provides the space requirements for example disk partitions of a dataless node in a cluster.
Disk Partition | File System Name | Description | Example Size
---|---|---|---
0 | / | The root file system, boot partition, and volume management software. This partition must be mounted with the logging option. | 2 Gbytes
1 | swap | Swap space. The example size is the minimum when physical memory is less than 1 Gbyte. | 1 Gbyte
2 | overlap | Entire disk. | Size of entire disk
3 | /mypartition | For any additional applications. | The remaining space
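If you want to check the layout of an existing disk before installation, you can print its volume table of contents; the device name c0t0d0s2 used here is an assumption that depends on your hardware:

# prtvtoc /dev/rdsk/c0t0d0s2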
To install the Solaris Operating System on a dataless node, use the Solaris JumpStart tool. The Solaris JumpStart tool requires the Solaris distribution to be on the installation server. For information about creating a Solaris distribution, see Netra High Availability Suite 3.0 1/08 Foundation Services Installation Guide.
If not already created, create the Solaris JumpStart environment on the installation server by using the appropriate document for the Solaris release:
At the end of this process, you have a Jumpstart-dir directory that contains the Solaris JumpStart files that are needed to install the Solaris Operating System on the node.
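As a rough illustration only, assuming Solaris 9 media mounted at /cdrom/cdrom0/s0 and installation directories under /export/install, the environment is typically created by copying the distribution and the sample JumpStart files to the installation server:

# cd /cdrom/cdrom0/s0/Solaris_9/Tools
# ./setup_install_server /export/install/Solaris-distribution-dir
# mkdir -p /export/install/Jumpstart-dir
# cp -r /cdrom/cdrom0/s0/Solaris_9/Misc/jumpstart_sample/* /export/install/Jumpstart-dir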
In the /etc/hosts file, add the name and IP addresses of the dataless node.
In the /etc/ethers file, add the Ethernet address of the dataless node's network interface that is connected to the same switch as the installation server, for example, NIC0.
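For illustration, using the example dataless node addresses shown later in this chapter and a placeholder Ethernet address, the entries could look like the following.

In /etc/hosts:

10.250.1.30 netraDATALESS1-nic0

In /etc/ethers:

8:0:20:f9:c5:54 netraDATALESS1-nic0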
Share the Solaris-distribution-dir and Jumpstart-dir directories by adding these lines to the /etc/dfs/dfstab file:
share -F nfs -o rw Solaris-distribution-dir
share -F nfs -o rw Jumpstart-dir
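After editing the dfstab file, you can export the directories immediately with the shareall command, assuming the NFS server is already running on the installation server:

# shareall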
Change to the directory where the add_install_client command is located:
# cd Solaris-distribution-dir/Solaris_x/Tools
Run the add_install_client command for each dataless node:
# ./add_install_client -i IP-address \
-e Ethernet-address \
-s iserver:Solaris-distribution-dir \
-c iserver:Jumpstart-dir \
-p iserver:sysidcfg-dir \
-n name-service host-name platform-group
IP-address is the IP address of the dataless node.
Ethernet-address is the Ethernet address of the dataless node.
iserver is the IP address of the installation server for the cluster.
Solaris-distribution-dir is the directory that contains the Solaris distribution.
Jumpstart-dir is the directory that contains the Solaris JumpStart files.
sysidcfg-dir is the directory that contains the sysidcfg file. This directory is a subdirectory of the Jumpstart-dir directory.
name-service is the naming service used by the cluster, for example, NIS or NIS+.
host-name is the host name of the dataless node.
platform-group is the hardware platform of the dataless node, for example, sun4u.
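For illustration only, an invocation for the example node netraDATALESS1 might look like the following; the addresses, directory paths, and the placeholder Ethernet address are assumptions:

# ./add_install_client -i 10.250.1.30 \
-e 8:0:20:f9:c5:54 \
-s 10.250.1.100:/export/install/solaris9 \
-c 10.250.1.100:/export/install/jumpstart \
-p 10.250.1.100:/export/install/jumpstart/sysidcfg \
-n NIS netraDATALESS1-nic0 sun4u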
At the ok prompt, boot the dataless node by using the net device alias:
ok boot net - install
If the installation server is connected to the second Ethernet interface, type:
ok boot net2 - install
This command installs the Solaris Operating System on the dataless node.
Note - For x64 dataless nodes, refer to the hardware manual for information about booting.
After you have completed the Solaris installation, install the necessary Solaris patches. The Netra High Availability Suite 3.0 1/08 Foundation Services README contains the list of Solaris patches that you must install, depending on the version of Solaris you installed.
Note - Some of these patches are required for CGTP. If you do not plan to install CGTP, do not install the CGTP patches. For more information about the impact of not installing CGTP, see Choosing a Cluster Network.
Mount the directory from the installation server that contains the Solaris patches.
See To Mount an Installation Server Directory on the Master-Eligible Nodes.
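If the directory is not already mounted, a minimal sketch of the mount, assuming the installation server exports /export/NetraHASuite and is reachable as iserver:

# mkdir -p /NetraHASuite
# mount -F nfs iserver:/export/NetraHASuite /NetraHASuite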
Install the patches on the dataless node:
# patchadd -M /NetraHASuite/Patches/ patch-name
After the Solaris Operating System has been installed on the dataless node, install the Netra HA Suite on the dataless node.
The set of services to be installed on the dataless node is a subset of the Netra HA Suite installed on the master-eligible nodes. Install the packages that are listed as needed for dataless nodes in TABLE 4-1.
Mount the installation server directory on the dataless node as described in To Mount an Installation Server Directory on the Master-Eligible Nodes.
Install the packages by using the pkgadd command:
# pkgadd -d /NetraHASuite/Packages/ package-name
where /NetraHASuite/Packages is the installation server directory that is mounted on the dataless node.
CGTP enables a redundant network for your cluster.
Note - If you do not require CGTP, do not install the CGTP packages. For more information about the impact of not installing CGTP, see Choosing a Cluster Network.
The following procedures explain how to configure the Netra HA Suite on a dataless node.
# touch /etc/notrouter
Because the cluster network is not routable, the dataless nodes must be disabled as routers.
Modify the /etc/default/login file so you can connect to the node from a remote system as superuser:
# mv /etc/default/login /etc/default/login.orig
# chmod 644 /etc/default/login.orig
# sed '1,$s/^CONSOLE/#CONSOLE/' /etc/default/login.orig > /etc/default/login
# chmod 444 /etc/default/login
# touch /noautoshutdown
Modify the .rhosts file according to the security policy for your cluster:
# cp /.rhosts /.rhosts.orig
# echo "+ root" > /.rhosts
# chmod 444 /.rhosts
# /usr/sbin/eeprom local-mac-address?=true
# /usr/sbin/eeprom auto-boot?=true
# /usr/sbin/eeprom diag-switch?=false
Note - On x64, refer to the hardware documentation for information about performing this task.
If you are using the Network Time Protocol (NTP) with an external clock, configure the dataless node as an NTP server.
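A minimal sketch of this configuration, assuming the external clock is reachable at the placeholder address 10.250.4.1 and a Solaris release that uses the xntpd startup script (on Solaris 10, restart the svc:/network/ntp service with svcadm instead):

# echo "server 10.250.4.1" >> /etc/inet/ntp.conf
# /etc/init.d/xntpd stop
# /etc/init.d/xntpd start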
To configure external IP addresses for a dataless node, the node must have an extra physical network interface or logical network interface. A physical network interface is an unused interface on an existing Ethernet card or a supplemental HME or QFE Ethernet card, for example, hme2. A logical network interface is an interface configured on an existing Ethernet card, for example, hme1:101.
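For illustration, a logical interface for an external address could be brought up as follows; the interface name hme1:101, the address, and the netmask are assumptions for your site, and a matching /etc/hostname.hme1:101 file would be needed to make the address persistent across reboots:

# ifconfig hme1:101 plumb
# ifconfig hme1:101 192.168.12.30 netmask 255.255.255.0 up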
Log in to the dataless node as superuser.
As for the master-eligible nodes, three IP addresses are configured for each dataless node: one for the NIC0 interface, one for the NIC1 interface, and one for the cgtp0 interface.
The IP addresses can be IPv4 addresses of any class. However, the nodeid that you later define in the cluster_nodes_table file and the nhfs.conf file must be a decimal representation of the host part of the node's IP address. For information about the files, see To Create the nhfs.conf File for a Dataless Node and To Update the Cluster Node Table.
Create or update the file /etc/hostname.NIC0 for the NIC0 interface.
This file must contain the cluster network name of the dataless node on the first interface, for example, netraDATALESS1-nic0.
Create or update the file /etc/hostname.NIC1 for the NIC1 interface.
This file must contain the cluster network name of the dataless node on the second interface, for example, netraDATALESS1-nic1.
Create or update the file /etc/hostname.cgtp0 for the cgtp0 interface.
This file must contain the cluster network name of the dataless node on the cgtp0 interface, for example, netraDATALESS1-cgtp.
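Assuming the physical interfaces are eri0 and eri1, as in the nhfs.conf example later in this procedure, the three files could be created as follows (adapt the interface names to your hardware):

# echo netraDATALESS1-nic0 > /etc/hostname.eri0
# echo netraDATALESS1-nic1 > /etc/hostname.eri1
# echo netraDATALESS1-cgtp > /etc/hostname.cgtp0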
In the /etc/hosts file, add the IP address and node name for the NIC0, NIC1, and cgtp0 network interfaces of all the nodes in the cluster:
127.0.0.1 localhost
10.250.1.10 netraMEN1
10.250.2.10 netraMEN1-nic1
10.250.3.10 netraMEN1-cgtp
10.250.1.20 netraMEN2
10.250.2.20 netraMEN2-nic1
10.250.3.20 netraMEN2-cgtp
10.250.1.30 netraDATALESS1-nic0 loghost netraDATALESS1.localdomain
10.250.2.30 netraDATALESS1-nic1 netraDATALESS1-nic1.localdomain
10.250.3.30 netraDATALESS1-cgtp netraDATALESS1-cgtp.localdomain
10.250.1.1 master
10.250.2.1 master-nic1
10.250.3.1 master-cgtp
Update the /etc/nodename file with the name corresponding to the address of one of the network interfaces, for example, netraDATALESS1-cgtp.
Create the /etc/netmasks file by adding one line for each subnet on the cluster:
10.250.1.0 255.255.255.0
10.250.2.0 255.255.255.0
10.250.3.0 255.255.255.0
Create the nhfs.conf file for the dataless node:
# cp /etc/opt/SUNWcgha/nhfs.conf.template /etc/opt/SUNWcgha/nhfs.conf
Edit the nhfs.conf file to suit your cluster configuration.
An example file for a dataless node on a cluster with the domain ID 250, with network interfaces eri0, eri1, and cgtp0 would be as follows:
Node.NodeId=40
Node.NIC0=eri0
Node.NIC1=eri1
Node.NICCGTP=cgtp0
Node.UseCGTP=True
Node.Type=Dataless
Node.DomainId=250
CMM.IsEligible=False
CMM.LocalConfig.Dir=/etc/opt/SUNWcgha
Choose a unique nodeid and unique node name for the dataless node. The nodeid must be the decimal representation of the host part of the node's IP addresses; for example, a node whose cluster addresses end in .40 on a 255.255.255.0 subnet has a nodeid of 40. To view the nodeid of each node already in the cluster, see the /etc/opt/SUNWcgha/cluster_nodes_table file on the master node. For more information, see the nhfs.conf(4) man page.
If you have not installed the CGTP patches and packages, do the following:
Update the /etc/vfstab file in the dataless node's root directory to add the NFS mount points for master node directories that contain middleware data and services.
Edit the entries in the /etc/vfstab file.
If you have configured the CGTP, replace the host name of the master node with the host name associated with the floating IP address for the cgtp0 interface that is assigned to the master role, for example, master-cgtp.
If you have not configured the CGTP, replace the host name of the master node with the host name associated with the floating IP address for the NIC0 interface that is assigned to the master role, for example, master-nic0.
For more information about floating addresses of the master nodes, see To Create the Floating Address Triplet Assigned to the Master Role.
Define the mount points /SUNWcgha/remote, /SUNWcgha/services, and /SUNWcgha/swdb:
If you have configured the CGTP, use the floating IP address for the cgtp0 interface that is assigned to the master role to define the mount points:
master-cgtp:/SUNWcgha/local/export/data - \
/SUNWcgha/remote nfs - yes rw,hard,fg,intr
master-cgtp:/SUNWcgha/local/export/services/ha_3.0/opt - \
/SUNWcgha/services nfs - yes rw,hard,fg,intr
master-cgtp:/SUNWcgha/local/export/services/ha_3.0 - \
/SUNWcgha/swdb nfs - yes rw,hard,fg,intr
If you have not configured the CGTP, use the floating IP address for the NIC0 interface that is assigned to the master role:
master-nic0:/SUNWcgha/local/export/data - \
/SUNWcgha/remote nfs - yes rw,hard,fg,intr
master-nic0:/SUNWcgha/local/export/services/ha_3.0/opt - \
/SUNWcgha/services nfs - yes rw,hard,fg,intr
master-nic0:/SUNWcgha/local/export/services/ha_3.0 - \
/SUNWcgha/swdb nfs - yes rw,hard,fg,intr
All file systems that you mount by using NFS must be mounted with the options fg, hard, and intr. You can also set the noac mount option, which suppresses data and attribute caching. Use the noac option only if the impact on performance is acceptable.
Create the mount points /SUNWcgha/remote, /SUNWcgha/services, and /SUNWcgha/swdb:
# mkdir -p /SUNWcgha/remote
# mkdir -p /SUNWcgha/services
# mkdir -p /SUNWcgha/swdb
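Optionally, you can check the new entries before the final reboot by mounting them manually from the vfstab file; this assumes the master node is up and exporting the directories:

# mount /SUNWcgha/remote
# mount /SUNWcgha/services
# mount /SUNWcgha/swdb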
Repeat Step 1 through Step 8 for all dataless nodes in the cluster.
The following procedures explain how to integrate a dataless node into the cluster:
Edit the /etc/hosts file to add the following lines:
IP-address-NIC0 nic0-dataless-node-name
IP-address-NIC1 nic1-dataless-node-name
IP-address-cgtp0 cgtp0-dataless-node-name
This modification enables the master node to “see” the network interfaces of the dataless node.
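Using the illustrative addresses from the /etc/hosts example earlier in this chapter, the added lines might be:

10.250.1.30 netraDATALESS1-nic0
10.250.2.30 netraDATALESS1-nic1
10.250.3.30 netraDATALESS1-cgtp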
Repeat Step 2.
This modification enables the vice-master node to “see” the three network interfaces of the dataless node.
Log in to a dataless node that is part of the cluster, if a dataless node already exists.
Repeat Step 2.
This modification enables the existing dataless node to “see” the three network interfaces of the new dataless node.
Repeat Step 5 and Step 6 on all other diskless and dataless nodes that are already part of the cluster.
Edit the cluster_nodes_table file on the master node with the node information for a dataless node:
#NodeId Domain_id Name Attributes
nodeid domainid dataless-node-name -
The nodeid that you define in the cluster_nodes_table file must be the decimal representation of the host part of the node's IP address. For more information about the cluster_nodes_table file, see the cluster_nodes_table(4) man page.
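For example, a dataless node with nodeid 40 in domain 250, matching the nhfs.conf example earlier in this chapter, would be declared with a line such as the following (the node name is illustrative):

40 250 netraDATALESS1 -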
Reload the cluster_nodes_table file so that the master node registers the new dataless node:
# /opt/SUNWcgha/sbin/nhcmmstat -c reload
Repeat Step 2 for each dataless node you are adding to the cluster.
To integrate the dataless node into the cluster, delete the not_configured file and reboot all the nodes. After the nodes have completed booting, verify the cluster configuration.
During the installation of the CMM packages, the /etc/opt/SUNWcgha/not_configured file is automatically created. This file enables you to reboot a cluster node during the installation and configuration process without starting the Netra HA Suite.
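A sketch of this step: on each node, remove the file and then reboot, starting with the master node, followed by the vice-master node and then the master-ineligible nodes:

# rm /etc/opt/SUNWcgha/not_configured
# init 6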
After the master node has completed rebooting, reboot the vice-master node as superuser.
After the vice-master node has completed rebooting, boot the master-ineligible nodes as superuser.