The following table lists the tasks to perform to configure your cluster. Before you start these tasks, ensure that you have completed the following:

- Cluster framework installation, as described in "Installing the Software"
- Volume manager installation and configuration, as described in "Installing and Configuring Solstice DiskSuite Software" or "Installing and Configuring VxVM Software"
| Task | For Instructions, Go To ... |
|---|---|
| Create and mount cluster file systems. | |
| (Optional) Configure additional public network adapters. | "How to Configure Additional Public Network Adapters" |
| Configure Public Network Management (PNM) and set up NAFO groups. | "How to Configure Public Network Management (PNM)" |
| (Optional) Change a node's private hostname. | "How to Change Private Hostnames" |
| Edit the /etc/inet/ntp.conf file to update node name entries. | "How to Update Network Time Protocol (NTP)" |
| (Optional) Install the Sun Cluster module to Sun Management Center software. | "Installing the Sun Cluster Module for Sun Management Center"; Sun Management Center documentation |
| Install third-party applications and configure the applications, data services, and resource groups. | Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide; "Data Service Configuration Worksheets and Examples" in the Sun Cluster 3.0 Release Notes; third-party application documentation |
Perform this procedure for each cluster file system you add.
Any data on the disks is destroyed when you create a file system. Be sure that you specify the correct disk device name. If you specify the wrong device name, you will erase data that you intended to keep.
If you used SunPlex Manager to install data services, one or more cluster file systems already exist if there were sufficient shared disks on which to create the cluster file systems.
Ensure that volume manager software is installed and configured.
For volume manager installation procedures, see "Installing and Configuring Solstice DiskSuite Software" or "Installing and Configuring VxVM Software".
Do you intend to install VERITAS File System (VxFS) software?
If yes, follow the procedures in your VxFS installation documentation to install VxFS software on each node of the cluster.
If no, go to Step 3.
Become superuser on any node in the cluster.
For faster file system creation, become superuser on the current primary of the global device for which you are creating the file system.
Create a file system by using the newfs(1M) command.
```
# newfs raw-disk-device
```
The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.
Table 2-11 Sample Raw Disk Device Names

| Volume Manager | Sample Disk Device Name | Description |
|---|---|---|
| Solstice DiskSuite | /dev/md/oracle/rdsk/d1 | Raw disk device d1 within the oracle diskset |
| VERITAS Volume Manager | /dev/vx/rdsk/oradg/vol01 | Raw disk device vol01 within the oradg disk group |
| None | /dev/global/rdsk/d1s3 | Raw disk device d1s3 |
On each node in the cluster, create a mount-point directory for the cluster file system.
A mount point is required on each node, even if the cluster file system will not be accessed on that node.
For ease of administration, create the mount point in the /global/device-group directory. This location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.
```
# mkdir -p /global/device-group/mountpoint
```

- device-group - Name of the directory that corresponds to the name of the device group that contains the device
- mountpoint - Name of the directory on which to mount the cluster file system
On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.
Use the following required mount options. Logging is required for all cluster file systems.

- Solaris UFS logging - Use the global,logging mount options. See the mount_ufs(1M) man page for more information about UFS mount options.

  The syncdir mount option is not required for UFS cluster file systems. If you specify syncdir, you are guaranteed POSIX-compliant file system behavior. If you do not, you will see the same behavior as with UFS file systems. When you do not specify syncdir, performance of writes that allocate disk blocks, such as when appending data to a file, can significantly improve. However, in some rare cases, without syncdir you would not discover an out-of-space condition until you close a file. With syncdir (and POSIX behavior), the out-of-space condition would be discovered before the close.

- Solstice DiskSuite trans metadevice - Use the global mount option (do not use the logging mount option). See your Solstice DiskSuite documentation for information about setting up trans metadevices.

- VxFS logging - Use the global,log mount options. See the mount_vxfs(1M) man page for more information about VxFS mount options.
To automatically mount the cluster file system, set the mount at boot field to yes.
Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.
Ensure that the entries in each node's /etc/vfstab file list devices in the same order.
Check the boot order dependencies of the file systems.
For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle, and phys-schost-2 mounts disk device d1 on /global/oracle/logs. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs only after phys-schost-1 boots and mounts /global/oracle.
See the vfstab(4) man page for details.
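The consistency requirements above can be spot-checked by copying each node's /etc/vfstab locally and comparing the cluster file system entries. The following is a minimal sketch, not part of the Sun Cluster procedure; the file names and the single vfstab entry are illustrative, and on a real cluster you would first copy each node's file locally (for example, with rcp):

```shell
# Illustrative per-node copies of /etc/vfstab (on a real cluster, copy
# each node's file locally first, e.g. with rcp).
cat > /tmp/vfstab.node1 <<'EOF'
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
EOF
cat > /tmp/vfstab.node2 <<'EOF'
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
EOF

# Compare without sorting, because the entries must also appear in the
# same order on every node.
if diff /tmp/vfstab.node1 /tmp/vfstab.node2 >/dev/null; then
    echo "vfstab entries match"
else
    echo "vfstab entries differ"
fi
```

This catches both differing entries and entries listed in a different order, which is exactly what the manual check asks you to verify.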
On any node in the cluster, verify that mount points exist and /etc/vfstab file entries are correct on all nodes of the cluster.
```
# sccheck
```
If no errors occur, nothing is returned.
From any node in the cluster, mount the cluster file system.
```
# mount /global/device-group/mountpoint
```
On each node of the cluster, verify that the cluster file system is mounted.
You can use either the df(1M) or mount(1M) command to list mounted file systems.
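The check can also be scripted by grepping the mount table for the mount point. This sketch is not from the Sun Cluster procedure; it uses the root file system so that it runs anywhere, and on a cluster node you would substitute the cluster file system mount point, such as /global/oracle/d1:

```shell
# Look for a mount point in the mount(1M) output. MOUNTPOINT is set to /
# here only so the sketch runs on any system; on a cluster node, use the
# cluster file system mount point instead, e.g. /global/oracle/d1.
MOUNTPOINT=/
if mount | grep -E "(^|[[:space:]])${MOUNTPOINT}([[:space:]]|$)" >/dev/null; then
    echo "${MOUNTPOINT} is mounted"
else
    echo "${MOUNTPOINT} is not mounted"
fi
```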
To manage a VxFS cluster file system in a Sun Cluster environment, run administrative commands only from the primary node on which the VxFS cluster file system is mounted.
Are your cluster nodes connected to more than one public subnet?
If yes, go to "How to Configure Additional Public Network Adapters" to configure additional public network adapters.
If no, go to "How to Configure Public Network Management (PNM)" to configure PNM and set up NAFO groups.
The following example creates a UFS cluster file system on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.
```
# newfs /dev/md/oracle/rdsk/d1
...

(on each node)
# mkdir -p /global/oracle/d1
# vi /etc/vfstab
#device                device                 mount             FS   fsck  mount    mount
#to mount              to fsck                point             type pass  at boot  options
#
/dev/md/oracle/dsk/d1  /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs  2     yes      global,logging
(save and exit)

(on one node)
# sccheck
# mount /global/oracle/d1
# mount
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles
on Sun Oct 3 08:56:16 2000
```
If the nodes in the cluster are connected to more than one public subnet, you can configure additional public network adapters for the secondary subnets. This task is optional.
Configure only public network adapters, not private network adapters.
Have available your completed "Public Networks Worksheet" from the Sun Cluster 3.0 Release Notes.
Become superuser on the node to configure for additional public network adapters.
Create a file named /etc/hostname.adapter, where adapter is the adapter name.
In each NAFO group, an /etc/hostname.adapter file should exist for only one adapter in the group.
In the /etc/hostname.adapter file, type the hostname that corresponds to the public network adapter's IP address.
The following example shows the file /etc/hostname.hme3, created for the adapter hme3, which contains the hostname phys-schost-1.
```
# vi /etc/hostname.hme3
phys-schost-1
```
On each cluster node, ensure that the /etc/inet/hosts file contains the IP address and corresponding hostname assigned to the public network adapter.
The following example shows the entry for phys-schost-1.
```
# vi /etc/inet/hosts
...
192.29.75.101 phys-schost-1
...
```
If you use a naming service, this information should also exist in the naming service database.
On each cluster node, turn on the adapter.
```
# ifconfig adapter plumb
# ifconfig adapter hostname netmask + broadcast + -trailers up
```
Verify that the adapter is configured correctly.
```
# ifconfig adapter
```
The output should contain the correct IP address for the adapter.
Configure PNM and set up NAFO groups.
Go to "How to Configure Public Network Management (PNM)".
Each public network adapter to be managed by the Resource Group Manager (RGM) must belong to a NAFO group.
Perform this task on each node of the cluster.
All public network adapters must belong to a Network Adapter Failover (NAFO) group. Also, each node can have only one NAFO group per subnet.
Have available your completed "Public Networks Worksheet" from the Sun Cluster 3.0 Release Notes.
Become superuser on the node to configure for a NAFO group.
Create the NAFO group.
```
# pnmset -c nafo-group -o create adapter [adapter ...]
```

- -c nafo-group - Configures the NAFO group nafo-group
- -o create adapter - Creates a new NAFO group that contains one or more public network adapters
See the pnmset(1M) man page for more information.
Verify the status of the NAFO group.
```
# pnmstat -l
```
See the pnmstat(1M) man page for more information.
Do you intend to change any private hostnames?
If yes, go to "How to Change Private Hostnames".
If no, go to "How to Update Network Time Protocol (NTP)" to update the /etc/inet/ntp.conf file.
The following example creates NAFO group nafo0, which uses public network adapters qfe1 and qfe5.
```
# pnmset -c nafo0 -o create qfe1 qfe5
# pnmstat -l
group  adapters   status  fo_time  act_adp
nafo0  qfe1:qfe5  OK      NEVER    qfe5
nafo1  qfe6       OK      NEVER    qfe6
```
Perform this task if you do not want to use the default private hostnames (clusternode*nodeid*-priv, where *nodeid* is the node's ID number) assigned during Sun Cluster software installation.
Do not perform this procedure after applications and data services have been configured and started. Otherwise, an application or data service might continue to use the old private hostname after the hostname is renamed, which would cause hostname conflicts. If any applications or data services are running, stop them before you perform this procedure.
Become superuser on a node in the cluster.
Start the scsetup(1M) utility.
```
# scsetup
```
To work with private hostnames, type 5 (Private hostnames).
To change a private hostname, type 1 (Change a private hostname).
Follow the prompts to change the private hostname. Repeat for each private hostname to change.
Verify the new private hostnames.
```
# scconf -pv | grep 'private hostname'
(phys-schost-1) Node private hostname:      phys-schost-1-priv
(phys-schost-3) Node private hostname:      phys-schost-3-priv
(phys-schost-2) Node private hostname:      phys-schost-2-priv
```
Update the /etc/inet/ntp.conf file.
Perform this task on each node.
Become superuser on the cluster node.
Edit the /etc/inet/ntp.conf file.
The scinstall(1M) command copies a template file, ntp.cluster, to /etc/inet/ntp.conf as part of standard cluster installation. But if an ntp.conf file already exists before Sun Cluster software is installed, that existing file remains unchanged. If cluster packages are installed by using other means, such as direct use of pkgadd(1M), you need to configure NTP.
Remove all entries for private hostnames that are not used by the cluster.
If the ntp.conf file contains private hostnames that do not exist, the node generates error messages when it reboots and attempts to contact those hostnames.
If you changed any private hostnames after Sun Cluster software installation, update each file entry with the new private hostname.
If necessary, make other modifications to meet your NTP requirements.
The primary requirement when you configure NTP, or any time synchronization facility, within the cluster is that all cluster nodes be synchronized to the same time. Consider accuracy of time on individual nodes secondary to the synchronization of time among nodes. You are free to configure NTP as best meets your individual needs, as long as this basic requirement for synchronization is met.
See Sun Cluster 3.0 12/01 Concepts for further information about cluster time. See the ntp.cluster template for guidelines on how to configure NTP for a Sun Cluster configuration.
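As a sketch of what the cleaned-up file might contain, the ntp.cluster template associates the nodes through peer entries that use the private hostnames. Assuming a two-node cluster that kept the default private hostnames, the node-related entries would look something like the following; the prefer keyword and the exact layout are assumptions here, so consult the ntp.cluster template shipped with your installation for the authoritative form:

```
peer clusternode1-priv prefer
peer clusternode2-priv
```

Entries for node IDs that do not exist in the cluster are the ones Step 2 tells you to remove.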
Restart the NTP daemon.
```
# /etc/init.d/xntpd stop
# /etc/init.d/xntpd start
```
Do you intend to use Sun Management Center to configure resource groups or monitor the cluster?
If yes, go to "Installing the Sun Cluster Module for Sun Management Center".
If no, install third-party applications, register resource types, set up resource groups, and configure data services. See the documentation supplied with the application software and the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.