Installing the Software in an NFS Configuration
This chapter contains the following sections:
- Overview
- About the Software CLI Utilities
- Adjusting Server Requests for NFS Installations
- Installing the Sun HPC ClusterTools Software Packages on an NFS Server
- Exporting the Shared File System
- Special Note for Installing Sun HPC ClusterTools Software in an NFS Configuration
- Installing the Client Software in an NFS Configuration
- Activating Sun HPC ClusterTools Software
- Deactivating Sun HPC ClusterTools Software
- Removing Sun HPC ClusterTools Software
Overview
The following is a summary of the basic operations involved in installing Sun HPC ClusterTools software in an NFS client/server configuration:
- Before you use an NFS server to install Sun HPC ClusterTools 8 software on the nodes in your cluster, you must increase the number of concurrent server requests allowed.
- Use the ctnfssvr command to install the Sun HPC ClusterTools software packages on the NFS server and to enable the Sun HPC ClusterTools software.
- Export the file system on which you installed the Sun HPC ClusterTools software so that the client nodes can access the shared file system.
- Once the NFS server has been set up and the file system has been made accessible, you can install the Sun HPC ClusterTools software on the client nodes. For more information about the client setup, see Installing the Client Software in an NFS Configuration.
About the Software CLI Utilities
You install and configure the Sun HPC ClusterTools software using the following CLI utilities:
- ctinstall - Install the software on the cluster nodes.
- ctact - Activate the software on the cluster nodes.
- ctdeact - Deactivate the software on the cluster nodes.
- ctremove - Remove the software from the cluster nodes.
The procedures in this chapter show how to install Sun HPC ClusterTools software in an NFS configuration. For information about how to install the software in a non-NFS configuration, see Chapter 3.
Adjusting Server Requests for NFS Installations
To Set Up Installation From a Server Running Solaris 9 Software
For installations from an NFS server, increase the number of concurrent server requests allowed from 16 to 256.
1. In the NFS server startup script (/etc/init.d/nfs.server), change the nfsd invocation so that nfsd starts with the -a 256 argument:
if [ $startnfsd -ne 0 ]; then
        /usr/lib/nfs/mountd
        /usr/lib/nfs/nfsd -a 256
fi
2. Restart the NFS server daemon. For example:
# /etc/init.d/nfs.server stop
# /etc/init.d/nfs.server start
To Set Up Installation From a Server Running Solaris 10 Software
Note - This procedure applies to SPARC-based servers as well as to AMD Opteron and Intel x64-based servers.
1. Open the /etc/default/nfs file in a text editor.
2. Change the value of NFSD_SERVERS in the file from 16 to 256. The portion of the file that you edit looks like the following:
# Maximum number of concurrent NFS requests.
# Equivalent to last numeric argument on nfsd command line.
NFSD_SERVERS=16
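After the edit, the relevant portion of /etc/default/nfs should read as follows (a sketch; only the NFSD_SERVERS value changes):

```shell
# Maximum number of concurrent NFS requests.
# Equivalent to last numeric argument on nfsd command line.
NFSD_SERVERS=256
```

On Solaris 10 the NFS server is managed by the Service Management Facility, so restart the standard nfs/server service instance for the new value to take effect, for example with svcadm restart svc:/network/nfs/server.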
Installing the Sun HPC ClusterTools Software Packages on an NFS Server
Use the ctnfssvr command to install the Sun HPC ClusterTools software packages on the NFS server and to enable the Sun HPC ClusterTools software.
Note - You can set up the server and clients in any order, but the Sun HPC ClusterTools software cannot be activated until the NFS server is installed and the shared file system has been exported.
See TABLE 4-3 for a summary of the ctnfssvr options.
TABLE 4-3 ctnfssvr Options

Operations
- -i - Install packages. The following options can be used only with -i:
  - -d - Specify a non-default install-from location. The default is distribution/Product, relative to the directory where ctnfssvr is invoked.
  - -t - Specify a non-default install-to location. The default is /export.
- -r - Remove the packages.

Other
- -h - Command help.
- -x - Turn on command debug mode.
Note - You must be logged in as superuser to run ctnfssvr.
To Install the Software on the NFS Server
CODE EXAMPLE 4-1
# ./ctnfssvr -i
CODE EXAMPLE 4-1 sets up the NFS server and installs the packages from the Sun HPC ClusterTools software suite. However, the NFS server has not been enabled to service Sun HPC ClusterTools software requests from the NFS clients (that is, the cluster nodes).
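ctnfssvr can also install from and to non-default locations using the -d and -t options summarized in TABLE 4-3. The distribution path below is hypothetical; substitute the actual location of your media and the file system you intend to export:

```shell
# Install from a mounted distribution directory (hypothetical path)
# into /export, the default install-to location:
# ./ctnfssvr -i -d /cdrom/ct8_0/Product -t /export
```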
Exporting the Shared File System
After you have installed the Sun HPC ClusterTools software on the NFS server, you need to make the file system on which you installed the software available to the client nodes.
To Export the Shared File System
Type the following command:
# share /export/SUNWhpc/HPC8.0
If you installed the software in a location other than the default (/export/SUNWhpc/HPC8.0), then specify the path to the file system where you installed the Sun HPC ClusterTools software in place of the default path.
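The share command exports the file system only until the next reboot. To make the export persistent (a sketch using the standard Solaris dfstab mechanism; adjust the path if you installed elsewhere), add the equivalent entry to /etc/dfs/dfstab:

```shell
# Entry in /etc/dfs/dfstab so the export survives reboots:
share -F nfs /export/SUNWhpc/HPC8.0
```

After editing /etc/dfs/dfstab, run shareall (or restart the NFS service) to put the entry into effect.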
Special Note for Installing Sun HPC ClusterTools Software in an NFS Configuration
When you install or activate Sun HPC ClusterTools 8 software in an NFS configuration, you must ensure that ctinstall and the other CLI commands are available to all nodes on the shared mount point. The ctinstall -c command locates the shared mount point on the NFS server (see CODE EXAMPLE 4-4) and, by mounting the correct directory at /opt/SUNWhpc/HPC8.0, ensures that you have access to the activation tools. This means that your mount point can be any location except /opt/SUNWhpc/HPC8.0.
You can do this in either of the following ways:
- Use the activation tool under the NFS mount point on an NFS client:
CODE EXAMPLE 4-2
mount_point/*/SUNWhpc/HPC8.0/bin/Install_Utilities/bin
- Or, explicitly mount the directory in which SUNWhpc is installed. For example, if SUNWhpc is installed in /export, enter the following on an NFS client:
CODE EXAMPLE 4-3
# mount server:/export mount_point
Installing the Client Software in an NFS Configuration
Use the ctinstall command to install Sun HPC ClusterTools software on cluster nodes. See TABLE 4-5 for a summary of the ctinstall options.
TABLE 4-5 ctinstall Options

General
- -h - Command help.
- -l - Execute the command on the local node only.
- -R - Specify the full path to be used as the root path.
- -x - Turn on command debug at the specified nodes.

Command Specific
- -a - Activate automatically after installation completes.
- -c - Specify the server and mount path for the software if you are performing an NFS installation.
- -d - Specify a non-default install-from location. The default is distribution/Product, relative to the directory where ctinstall is invoked.
- -p - List of packages to be installed. Separate names with commas.
- -t - Specify a non-default install-to location. The default is /opt.

Centralized Operations Only
- -g - Generate node lists of successful and unsuccessful installations.
- -k - Specify a central location for storing log files of all specified nodes.
- -n - List of nodes targeted for installation. Separate names with commas.
- -N - File containing the list of nodes targeted for installation. One node per line.
- -r - Remote connection method: rsh, ssh, or telnet.
- -S - Specify the full path to an alternate ssh executable.
Installing the Client Software From a Central Host
This section shows examples of software installations in which the ctinstall command is initiated from a central host in an NFS configuration.
Note - Sun HPC ClusterTools 8 software is ready to run after installation and does not need to be activated. The activation step sets up symbolic links to the software. If you plan to run Sun HPC ClusterTools software from its installed location (/opt/SUNWhpc/HPC8.0/bin by default), you do not need to activate the software.
To Install the Client Software Without Activating
CODE EXAMPLE 4-4
# ./ctinstall -c myserver:/export/SUNWhpc/HPC8.0 -n node1,node2 -r rsh
CODE EXAMPLE 4-4 is the same as CODE EXAMPLE 3-1, except that node1 and node2 are NFS client nodes. The -c option specifies the server and mount path for the installation. If the NFS server is also to be used as a cluster node, run this command on it as well. (Use ctnfssvr, not ctinstall, to set up the NFS server itself and install the packages on it.)
To Install and Activate Automatically
CODE EXAMPLE 4-5
# ./ctinstall -c myserver:/export/SUNWhpc/HPC8.0 -n node1,node2 -r rsh -a
CODE EXAMPLE 4-5 is the same as CODE EXAMPLE 4-4, except it includes the option -a, which causes the software to be activated automatically.
Installing Software Locally in NFS Configurations
This section shows examples of software installations in which the ctinstall command is initiated on the local node in NFS configurations.
To Install Locally Without Activating
CODE EXAMPLE 4-6
# ./ctinstall -c myserver:/export/SUNWhpc/HPC8.0 -l
CODE EXAMPLE 4-6 installs the Sun HPC ClusterTools software packages on the local node.
To Install Locally and Activate Automatically
CODE EXAMPLE 4-7
# ./ctinstall -c myserver:/export/SUNWhpc/HPC8.0 -l -a
CODE EXAMPLE 4-7 is the same as CODE EXAMPLE 4-6, except the software is activated as soon as the installation completes.
For more information about activating Sun HPC ClusterTools software, see the following section.
Activating Sun HPC ClusterTools Software
In Sun HPC ClusterTools 8 software, the activation step sets up symbolic links to the program binaries. These symbolic links are convenient, but not required. You may skip the activation step and run Sun HPC ClusterTools 8 software from the directory in which it is installed (by default /opt/SUNWhpc/HPC8.0/bin).
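For example, assuming the default installation location, an unactivated installation can be used by invoking the tools through their full path. The program name and process count below are placeholders:

```shell
# Launch a hypothetical MPI program using the installed, unactivated tools:
/opt/SUNWhpc/HPC8.0/bin/mpirun -np 4 ./a.out
```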
Use the ctact command to activate Sun HPC ClusterTools software on cluster nodes. See TABLE 4-6 for a summary of the ctact options.
Note - The general options and options specific to centralized operations serve essentially the same role for ctact as for ctinstall. Consequently, fewer examples are used to illustrate ctact than were used for ctinstall.
TABLE 4-6 ctact Options

General
- -h - Command help.
- -l - Execute the command on the local node only.
- -R - Specify the full path to be used as the root path.
- -x - Turn on command debug at the specified nodes.

Command Specific
- -c - Specify that you are activating on an NFS client node.

Centralized Operations Only
- -g - Generate node lists of successful and unsuccessful activations.
- -k - Specify a central location for storing copies of local log files.
- -n - List of nodes targeted for activation. Separate names with commas.
- -N - File containing the list of nodes targeted for activation. One node per line.
- -r - Remote connection method: rsh, ssh, or telnet.
- -S - Specify the full path to an alternate ssh executable.
Activating Nodes From a Central Host
This section shows examples of software activation in which the ctact command is initiated from a central host.
To Activate the Client Software From a Central Host
CODE EXAMPLE 4-8
# ./ctact -c myserver:/export/SUNWhpc/HPC8.0 -n node1,node2 -r rsh
Activating Nodes Locally
To Activate the Client Software Locally
CODE EXAMPLE 4-9
# ./ctact -c myserver:/export/SUNWhpc/HPC8.0 -l
CODE EXAMPLE 4-9 activates the Sun HPC ClusterTools software packages on the local node.
Deactivating Sun HPC ClusterTools Software
Use the ctdeact command to deactivate Sun HPC ClusterTools software on cluster nodes. See TABLE 4-7 for a summary of the ctdeact options.
TABLE 4-7 ctdeact Options

General
- -h - Command help.
- -l - Execute the command on the local node only.
- -R - Specify the full path to be used as the root path.
- -x - Turn on command debug at the specified nodes.

Centralized Operations Only
- -g - Generate node lists of successful and unsuccessful deactivations.
- -k - Specify a central location for storing copies of local log files.
- -n - List of nodes targeted for deactivation. Separate names with commas.
- -N - File containing the list of nodes to be deactivated. One node per line.
- -r - Remote connection method: rsh, ssh, or telnet.
- -S - Specify the full path to an alternate ssh executable.
Deactivating Software From a Central Host
This section shows examples of software deactivation in which the ctdeact command is initiated from a central host.
To Deactivate Specified Cluster Nodes in an NFS Configuration
CODE EXAMPLE 4-10
# ./ctdeact -n node1,node2 -r rsh
CODE EXAMPLE 4-10 deactivates the software on the nodes node1 and node2 from the central server.
Deactivating Software Locally
To Deactivate Software on the Local Node
CODE EXAMPLE 4-11
# ./ctdeact -l

CODE EXAMPLE 4-11 deactivates the software on the local node.
Removing Sun HPC ClusterTools Software
Use the ctremove command to remove Sun HPC ClusterTools software from cluster nodes. See TABLE 4-10 for a summary of the ctremove options.
Note - If the nodes are active at the time ctremove is initiated, they will be deactivated automatically before the removal process begins.
TABLE 4-10 ctremove Options

General
- -h - Command help.
- -l - Execute the command on the local node only.
- -R - Specify the full path to be used as the root path.
- -x - Turn on command debug at the specified nodes.

Command Specific
- -c - Specify that you are removing Sun HPC ClusterTools 8 software from an NFS client node.
- -p - List of packages to be selectively removed. Separate names with commas.

Centralized Operations Only
- -g - Generate node lists of successful and unsuccessful removals.
- -k - Specify a central location for storing copies of local log files.
- -n - List of nodes targeted for removal. Separate names with commas.
- -N - File containing the list of nodes targeted for removal. One node per line.
- -r - Remote connection method: rsh, ssh, or telnet.
- -S - Specify the full path to an alternate ssh executable.
Removing Client Nodes From a Central Host
This section shows examples of software removal in which the ctremove command is initiated from a central host.
To Remove Software From Specified Cluster Nodes in an NFS Configuration
CODE EXAMPLE 4-12 removes the software from the NFS client nodes node1 and node2. In this example, telnet is the connection method.
CODE EXAMPLE 4-12
# ./ctremove -c -n node1,node2 -r telnet
Note - The ctremove command unmounts the nodes from the mount point, but it does not unshare the shared file system.
To Unshare the Shared File System
Type the following command:
# unshare /export/SUNWhpc/HPC8.0
If you installed the Sun HPC ClusterTools 8 software in a location other than the default, then substitute the path to the file system where you installed the Sun HPC ClusterTools software for /export/SUNWhpc/HPC8.0.
Removing the Server Software
To Remove Software From the NFS Server
CODE EXAMPLE 4-13
# ./ctnfssvr -r
CODE EXAMPLE 4-13 removes the Sun HPC ClusterTools software packages from the NFS server.
Sun HPC ClusterTools 8 Software Installation Guide
820-3175-10
Copyright © 2008 Sun Microsystems, Inc. All Rights Reserved.