Sun Cluster 3.1 10/03 Software Installation Guide

Configuring the Cluster

The following table lists the tasks that you perform to configure your cluster. Before you start these tasks, ensure that you have completed Sun Cluster software installation on all cluster nodes.

Table 2–4 Task Map: Configuring the Cluster

  Task: Create and mount cluster file systems.
  Instructions: How to Add Cluster File Systems

  Task: Configure IP Network Multipathing groups.
  Instructions: How to Configure Internet Protocol (IP) Network Multipathing Groups

  Task: (Optional) Change a node's private hostname.
  Instructions: How to Change Private Hostnames

  Task: Create or modify the NTP configuration file.
  Instructions: How to Configure Network Time Protocol (NTP)

  Task: (Optional) Install the Sun Cluster module to Sun Management Center software.
  Instructions: Installing the Sun Cluster Module for Sun Management Center; Sun Management Center documentation

  Task: Install third-party applications and configure the applications, data services, and resource groups.
  Instructions: Sun Cluster 3.1 Data Service Planning and Administration Guide; third-party application documentation

How to Add Cluster File Systems

Perform this procedure for each cluster file system that you add.


Caution –

Any data on the disks is destroyed when you create a file system. Be sure that you specify the correct disk device name. If you specify the wrong device name, you might erase data that you did not intend to delete.


If you used SunPlex Manager to install data services, SunPlex Manager might have already created one or more cluster file systems.

  1. Ensure that volume-manager software is installed and configured.

    For volume-manager installation procedures, see Installing and Configuring Solstice DiskSuite/Solaris Volume Manager Software or Installing and Configuring VxVM Software.

  2. Become superuser on any node in the cluster.


    Tip –

    For faster file-system creation, become superuser on the current primary of the global device for which you create a file system.


  3. Create a file system.

    • For a VxFS file system, follow procedures that are provided in your VxFS documentation.

    • For a UFS file system, use the newfs(1M) command.


      # newfs raw-disk-device
      

    The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.

    Volume Manager                              Sample Disk Device Name     Description

    Solstice DiskSuite/Solaris Volume Manager   /dev/md/oracle/rdsk/d1      Raw disk device d1 within the oracle diskset

    VERITAS Volume Manager                      /dev/vx/rdsk/oradg/vol01    Raw disk device vol01 within the oradg disk group

    None                                        /dev/global/rdsk/d1s3       Raw disk device d1s3
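
    For example, to create a UFS file system on the sample VERITAS Volume Manager device from the table, you might run the following command. The disk group and volume names are samples only; substitute your own raw disk device.


      # newfs /dev/vx/rdsk/oradg/vol01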

  4. On each node in the cluster, create a mount-point directory for the cluster file system.

    A mount point is required on each node, even if the cluster file system is not accessed on that node.


    Tip –

    For ease of administration, create the mount point in the /global/device-group directory. This location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.



    # mkdir -p /global/device-group/mountpoint
    
    device-group

    Name of the directory that corresponds to the name of the device group that contains the device

    mountpoint

    Name of the directory on which to mount the cluster file system

  5. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

    See the vfstab(4) man page for details.

    1. Use the following required mount options. Sample /etc/vfstab entries follow this list.


      Note –

      Logging is required for all cluster file systems.


      • Solaris UFS logging – Use the global,logging mount options. For use by Oracle Parallel Server/Real Application Clusters RDBMS data files, log files, and control files, also use the forcedirectio mount option. See the mount_ufs(1M) man page for more information about UFS mount options.


        Note –

        The syncdir mount option is not required for UFS cluster file systems.

        • If you specify syncdir, you are guaranteed POSIX-compliant file-system behavior for the write() system call. If a write() succeeds, this mount option ensures that sufficient space exists on the disk.

        • If you do not specify syncdir, you see the same behavior as with other UFS file systems. Not specifying syncdir can significantly improve the performance of writes that allocate disk blocks, such as appends to a file. However, in some cases, without syncdir you do not discover an out-of-space condition (ENOSPC) until you close the file.

        Without syncdir, you see ENOSPC on close only during a very short window after a failover. With syncdir (and POSIX behavior), the out-of-space condition is discovered before the close.


      • Solstice DiskSuite trans metadevice or Solaris Volume Manager transactional volume – Use the global mount option only. Do not use the logging mount option.


        Note –

        Solaris Volume Manager transactional-volume logging (formerly Solstice DiskSuite trans-metadevice logging) is scheduled to be removed from the Solaris operating environment in an upcoming Solaris release. Solaris UFS logging provides the same capabilities with superior performance, as well as lower system-administration requirements and overhead.


        See your Solstice DiskSuite documentation for information about setting up trans metadevices, or see your Solaris Volume Manager documentation for information about setting up transactional volumes.

      • VxFS logging – Use the global,log mount options. See the mount_vxfs man page and “Administering Cluster File Systems Overview” in Sun Cluster 3.1 10/03 System Administration Guide for more information about VxFS mount options.
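
      The following sample /etc/vfstab entries are sketches only. The device, device-group, and mount-point names are placeholders for your own configuration, and the fsck pass values shown are illustrative:


        /dev/md/oracle/dsk/d1    /dev/md/oracle/rdsk/d1    /global/oracle/d1    ufs   2  yes  global,logging
        /dev/vx/dsk/oradg/vol01  /dev/vx/rdsk/oradg/vol01  /global/oradg/vol01  vxfs  3  yes  global,log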

    2. To automatically mount the cluster file system, set the mount at boot field to yes.

    3. Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.

    4. Ensure that the entries in each node's /etc/vfstab file list devices in the same order.

    5. Check the boot order dependencies of the file systems.

      For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle, and phys-schost-2 mounts disk device d1 on /global/oracle/logs. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs only after phys-schost-1 boots and mounts /global/oracle.

  6. On any node in the cluster, verify that mount points exist. Also verify that /etc/vfstab file entries are correct on all nodes of the cluster.


    # sccheck
    

    If no errors occur, nothing is returned.

  7. From any node in the cluster, mount the cluster file system.


    # mount /global/device-group/mountpoint
    


    Note –

    For VERITAS File System (VxFS), mount the file system from the current master of device-group to ensure that the file system mounts successfully. In addition, unmount a VxFS file system from the current master of device-group to ensure that the file system unmounts successfully.
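
    To identify the current master of a device group before you mount or unmount a VxFS file system, you can list device-group status with the scstat(1M) command, as in the following sketch (output format varies by configuration):


      # scstat -D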


  8. On each node of the cluster, verify that the cluster file system is mounted.

    You can use either the df(1M) or mount(1M) command to list mounted file systems.
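
    For example, you might filter the mount(1M) output for the mount point, where device-group and mountpoint are the placeholder names from Step 7:


      # mount | grep /global/device-group/mountpoint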


    Note –

    To manage a VxFS cluster file system in a Sun Cluster environment, run administrative commands only from the primary node on which the VxFS cluster file system is mounted.


  9. Configure IP Network Multipathing groups.

    Go to How to Configure Internet Protocol (IP) Network Multipathing Groups.

Example – Creating a Cluster File System

The following example creates a UFS cluster file system on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.


# newfs /dev/md/oracle/rdsk/d1
…
 
(on each node)
# mkdir -p /global/oracle/d1
# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#                     
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
(save and exit)
 
(on one node)
# sccheck
# mount /global/oracle/d1
# mount
…
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles 
on Sun Oct 3 08:56:16 2000

How to Configure Internet Protocol (IP) Network Multipathing Groups

Perform this task on each node of the cluster. If you used SunPlex Manager to install Sun Cluster HA for Apache or Sun Cluster HA for NFS, SunPlex Manager configured IP Network Multipathing groups for the public network adapters those data services use. You must configure IP Network Multipathing groups for the remaining public network adapters.


Note –

All public network adapters must belong to an IP Network Multipathing group.


  1. Have available your completed Public Networks Worksheet.

  2. Configure IP Network Multipathing groups.

    Perform procedures for IPv4 addresses in “Deploying Network Multipathing” in IP Network Multipathing Administration Guide (Solaris 8) or “Administering Network Multipathing (Task)” in System Administration Guide: IP Services (Solaris 9).

    Follow these additional guidelines to configure IP Network Multipathing groups in a Sun Cluster configuration. A sample adapter configuration follows this list.

    • Each public network adapter must belong to a multipathing group.

    • For multipathing groups that contain two or more adapters, you must configure a test IP address for each adapter in the group. If a multipathing group contains only one adapter, you do not need to configure a test IP address.

    • Test IP addresses for all adapters in the same multipathing group must belong to a single IP subnet.

    • Test IP addresses must not be used by normal applications because the test IP addresses are not highly available.

    • In the /etc/default/mpathd file, do not change the value of TRACK_INTERFACES_ONLY_WITH_GROUPS from yes to no.

    • The name of a multipathing group has no requirements or restrictions.
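
    As a sketch only, a two-adapter multipathing group with test addresses might be set up through /etc/hostname.adapter files similar to the following. The adapter names (qfe0, qfe1), group name (sc_ipmp0), and test hostnames are hypothetical; follow the Solaris documentation cited above for the authoritative procedure.


      # cat /etc/hostname.qfe0
      phys-schost-1 netmask + broadcast + group sc_ipmp0 up addif phys-schost-1-test0 deprecated -failover netmask + broadcast + up

      # cat /etc/hostname.qfe1
      phys-schost-1-test1 netmask + broadcast + group sc_ipmp0 deprecated -failover up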

  3. Do you intend to change any private hostnames?

    • If yes, go to How to Change Private Hostnames.

    • If no, proceed to Step 4.

  4. Did you install your own /etc/inet/ntp.conf file before you installed Sun Cluster software?

    • If no, go to How to Configure Network Time Protocol (NTP) to create or modify the NTP configuration file.

    • If yes, proceed to Step 5.

  5. Do you intend to use Sun Management Center to monitor the cluster?

    • If yes, go to Installing the Sun Cluster Module for Sun Management Center.

    • If no, install third-party applications and configure the applications, data services, and resource groups. See the Sun Cluster 3.1 Data Service Planning and Administration Guide and your third-party application documentation.

How to Change Private Hostnames

Perform this task if you do not want to use the default private hostnames, clusternodenodeid-priv (where nodeid is the node ID number), that are assigned during Sun Cluster software installation.


Note –

Do not perform this procedure after applications and data services have been configured and have been started. Otherwise, an application or data service might continue to use the old private hostname after the hostname is renamed, which would cause hostname conflicts. If any applications or data services are running, stop them before you perform this procedure.


  1. Become superuser on a node in the cluster.

  2. Start the scsetup(1M) utility.


    # scsetup
    

  3. To work with private hostnames, type 5 (Private hostnames).

  4. To change a private hostname, type 1 (Change a private hostname).

  5. Follow the prompts to change the private hostname.

    Repeat for each private hostname to change.

  6. Verify the new private hostnames.


    # scconf -pv | grep "private hostname"
    (phys-schost-1) Node private hostname:      phys-schost-1-priv
    (phys-schost-3) Node private hostname:      phys-schost-3-priv
    (phys-schost-2) Node private hostname:      phys-schost-2-priv

  7. Did you install your own /etc/inet/ntp.conf file before you installed Sun Cluster software?

    • If no, go to How to Configure Network Time Protocol (NTP) to create or modify the NTP configuration file.

    • If yes, proceed to Step 8.

  8. Do you intend to use Sun Management Center to monitor the cluster?

    • If yes, go to Installing the Sun Cluster Module for Sun Management Center.

    • If no, install third-party applications and configure the applications, data services, and resource groups. See the Sun Cluster 3.1 Data Service Planning and Administration Guide and your third-party application documentation.

How to Configure Network Time Protocol (NTP)

Perform this task to create or modify the NTP configuration file after you install Sun Cluster software. You must also modify the NTP configuration file when you add a node to an existing cluster, as well as when you change the private hostname of a node in the cluster.


Note –

The primary requirement when you configure NTP, or any time-synchronization facility within the cluster, is that all cluster nodes must be synchronized to the same time. Consider the accuracy of time on individual nodes to be of secondary importance to the synchronization of time among nodes. As long as this basic requirement for synchronization is met, you are free to configure NTP as best meets your individual needs.

See the Sun Cluster 3.1 10/03 Concepts Guide for further information about cluster time. See the /etc/inet/ntp.cluster template file for additional guidelines on how to configure NTP for a Sun Cluster configuration.
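
For reference, the per-node entries in a cluster NTP configuration file are peer lines that use the private hostnames, as in the following minimal sketch for a hypothetical two-node cluster (see your /etc/inet/ntp.cluster template file for the authoritative format):


    peer clusternode1-priv prefer
    peer clusternode2-priv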


  1. Did you install your own /etc/inet/ntp.conf file before you installed Sun Cluster software?

    • If yes, you do not need to modify your ntp.conf file. Skip to Step 8.

    • If no, proceed to Step 2.

  2. Become superuser on a cluster node.

  3. Do you have your own /etc/inet/ntp.conf file to install on the cluster nodes?

    • If no, proceed to Step 4.

    • If yes, copy your /etc/inet/ntp.conf file to each node of the cluster, then skip to Step 6.


      Note –

      All cluster nodes must be synchronized to the same time.


  4. On one node of the cluster, edit the private hostnames in the /etc/inet/ntp.conf.cluster file.

    Sun Cluster software creates the /etc/inet/ntp.conf.cluster file as the NTP configuration file if an /etc/inet/ntp.conf file is not already present on the node.


    Note –

    Do not rename the ntp.conf.cluster file to ntp.conf.


    If the /etc/inet/ntp.conf.cluster file does not exist on the node, you might have an /etc/inet/ntp.conf file from an earlier installation of Sun Cluster software. If so, perform the following edits on that ntp.conf file.

    1. Ensure that an entry exists for the private hostname of each cluster node.

    2. Remove any unused private hostnames.

      The ntp.conf.cluster file might contain nonexistent private hostnames. When a node is rebooted, the system generates error messages as the node attempts to contact those nonexistent private hostnames.

    3. If you changed any node's private hostname, ensure that the NTP configuration file contains the new private hostname.

    4. If necessary, make other modifications to meet your NTP requirements.

  5. Copy the NTP configuration file to all nodes in the cluster.

    The contents of the NTP configuration file must be identical on all cluster nodes.
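
    For example, you might copy the file from the node where you edited it to the other cluster nodes. The node names are placeholders, and this sketch assumes that rcp(1) remote-shell access is configured between the nodes:


      # rcp /etc/inet/ntp.conf.cluster phys-schost-2:/etc/inet/
      # rcp /etc/inet/ntp.conf.cluster phys-schost-3:/etc/inet/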

  6. Stop the NTP daemon on each node.

    Wait for the stop command to complete successfully on each node before you proceed to Step 7.


    # /etc/init.d/xntpd stop
    

  7. Restart the NTP daemon on each node.

    • If you use the ntp.conf.cluster file, run the following command:


      # /etc/init.d/xntpd.cluster start
      

      The xntpd.cluster startup script first looks for the /etc/inet/ntp.conf file. If that file exists, the script exits immediately without starting the NTP daemon. If the ntp.conf file does not exist but the ntp.conf.cluster file does exist, the script starts the NTP daemon by using the ntp.conf.cluster file as the NTP configuration file.

    • If you use the ntp.conf file, run the following command:


      # /etc/init.d/xntpd start
      
  8. Do you intend to use Sun Management Center to monitor the cluster?

    • If yes, go to Installing the Sun Cluster Module for Sun Management Center.

    • If no, install third-party applications and configure the applications, data services, and resource groups. See the Sun Cluster 3.1 Data Service Planning and Administration Guide and your third-party application documentation.