Sun Cluster 3.0 U1 Installation Guide

Configuring the Cluster

The following table lists the tasks to perform to configure your cluster. Before you start these tasks, ensure that you have completed the installation and configuration tasks in the preceding sections, including volume manager installation and configuration.

Table 2-6 Task Map: Configuring the Cluster

Task: Create and mount cluster file systems.
For instructions, go to: "How to Add Cluster File Systems"

Task: (Optional) Configure additional public network adapters.
For instructions, go to: "How to Configure Additional Public Network Adapters"

Task: Configure Public Network Management (PNM) and set up NAFO groups.
For instructions, go to: "How to Configure Public Network Management (PNM)"

Task: (Optional) Change a node's private hostname.
For instructions, go to: "How to Change Private Hostnames"

Task: Edit the /etc/inet/ntp.conf file to update node name entries.
For instructions, go to: "How to Update Network Time Protocol (NTP)"

Task: (Optional) Install the Sun Cluster module to Sun Management Center software.
For instructions, go to: "Installing the Sun Cluster Module for Sun Management Center" and the Sun Management Center documentation

Task: Install third-party applications and configure the applications, data services, and resource groups.
For instructions, go to: the Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide, "Data Service Configuration Worksheets and Examples" in the Sun Cluster 3.0 Release Notes, and the third-party application documentation

How to Add Cluster File Systems

Perform this procedure for each cluster file system you add.


Caution -

Any data on the disks is destroyed when you create a file system. Be sure you specify the correct disk device name. If you specify the wrong device name, you will erase data that you might not intend to delete.


If you used SunPlex Manager to install data services, one or more cluster file systems already exist if there were sufficient shared disks on which to create the cluster file systems.

  1. Ensure that volume manager software is installed and configured.

    For volume manager installation procedures, see "Installing and Configuring Solstice DiskSuite Software" or "Installing and Configuring VxVM Software".

  2. Become superuser on any node in the cluster.


    Tip -

    For faster file system creation, become superuser on the current primary of the global device for which you are creating the file system.


  3. Create a file system by using the newfs(1M) command.


    # newfs raw-disk-device
    

    The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.

    Table 2-7 Sample Raw Disk Device Names

    Volume Manager            Sample Disk Device Name      Description
    Solstice DiskSuite        /dev/md/oracle/rdsk/d1       Raw disk device d1 within the oracle diskset
    VERITAS Volume Manager    /dev/vx/rdsk/oradg/vol01     Raw disk device vol01 within the oradg disk group
    None                      /dev/global/rdsk/d1s3        Raw disk device d1s3
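
    For example, using the VERITAS Volume Manager sample name from the preceding table (the oradg disk group and the vol01 volume are sample names, not devices that necessarily exist in your configuration), the command might look like the following.

    # newfs /dev/vx/rdsk/oradg/vol01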

  4. On each node in the cluster, create a mount-point directory for the cluster file system.

    A mount point is required on each node, even if the cluster file system will not be accessed on that node.


    Tip -

    For ease of administration, create the mount point in the /global/device-group directory. This location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.



    # mkdir -p /global/device-group/mountpoint
    
    device-group

    Name of the directory that corresponds to the name of the device group that contains the device

    mountpoint

    Name of the directory on which to mount the cluster file system
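
    For example, for a device group named oracle and a mount point named d1 (the names used in the example at the end of this procedure), the command on each node might be the following.

    # mkdir -p /global/oracle/d1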

  5. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

    1. Use the following required mount options.

      Logging is required for all cluster file systems.

      • Solaris UFS logging - Use the global,logging mount options. See the mount_ufs(1M) man page for more information about UFS mount options.


        Note -

        The syncdir mount option is not required for UFS cluster file systems. If you specify syncdir, you are guaranteed POSIX-compliant file system behavior. If you do not, you will have the same behavior that is seen with UFS file systems. When you do not specify syncdir, performance of writes that allocate disk blocks, such as when appending data to a file, can significantly improve. However, in some cases, without syncdir you would not discover an out-of-space condition until you close a file. The cases in which you could have problems if you do not specify syncdir are rare. With syncdir (and POSIX behavior), the out-of-space condition would be discovered before the close.


      • Solstice DiskSuite trans metadevice - Use the global mount option (do not use the logging mount option). See your Solstice DiskSuite documentation for information about setting up trans metadevices. A sample vfstab entry for a trans metadevice is shown after these substeps.

    2. To automatically mount the cluster file system, set the mount at boot field to yes.

    3. Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.

    4. Ensure that the entries in each node's /etc/vfstab file list devices in the same order.

    5. Check the boot order dependencies of the file systems.

      For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle, and phys-schost-2 mounts disk device d1 on /global/oracle/logs. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs only after phys-schost-1 boots and mounts /global/oracle.

    See the vfstab(4) man page for details.
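
    For illustration, a hypothetical /etc/vfstab entry for a Solstice DiskSuite trans metadevice might look like the following. The oracle diskset and the d1 metadevice are assumed names. Note that the entry uses the global mount option without logging, because a trans metadevice supplies its own logging; the example at the end of this procedure shows the corresponding entry for UFS logging.

    /dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global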

  6. On any node in the cluster, verify that mount points exist and /etc/vfstab file entries are correct on all nodes of the cluster.


    # sccheck
    

    If no errors occur, nothing is returned.

  7. From any node in the cluster, mount the cluster file system.


    # mount /global/device-group/mountpoint
    

  8. On each node of the cluster, verify that the cluster file system is mounted.

    You can use either the df(1M) or mount(1M) command to list mounted file systems.
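
    For example, you might run either of the following commands on each node. The grep filter is simply one convenient way to narrow the mount output to globally mounted file systems.

    # df -k
    # mount | grep global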

  9. Are your cluster nodes connected to more than one public subnet?

    • If yes, go to "How to Configure Additional Public Network Adapters".

    • If no, go to "How to Configure Public Network Management (PNM)".

Example--Creating a Cluster File System

The following example creates a UFS cluster file system on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.


# newfs /dev/md/oracle/rdsk/d1
...
 
(on each node)
# mkdir -p /global/oracle/d1
# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#                       
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
(save and exit)
 
(on one node)
# sccheck
# mount /global/oracle/d1
# mount
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/
largefiles on Sun Oct 3 08:56:16 2000

How to Configure Additional Public Network Adapters

If the nodes in the cluster are connected to more than one public subnet, you can configure additional public network adapters for the secondary subnets. This task is optional.


Note -

Configure only public network adapters, not private network adapters.


  1. Have available your completed "Public Networks Worksheet" from the Sun Cluster 3.0 Release Notes.

  2. Become superuser on the node to configure for additional public network adapters.

  3. Create a file named /etc/hostname.adapter, where adapter is the adapter name.


    Note -

    In each NAFO group, an /etc/hostname.adapter file should exist for only one adapter in the group.


  4. In the /etc/hostname.adapter file, type the hostname that is assigned to the public network adapter's IP address.

    The following example shows the file /etc/hostname.hme3, created for the adapter hme3, which contains the hostname phys-schost-1.


    # vi /etc/hostname.hme3
    phys-schost-1 

  5. On each cluster node, ensure that the /etc/inet/hosts file contains the IP address and corresponding hostname assigned to the public network adapter.

    The following example shows the entry for phys-schost-1.


    # vi /etc/inet/hosts
    ...
    192.29.75.101 phys-schost-1
    ...


    Note -

    If you use a naming service, this information should also exist in the naming service database.


  6. On each cluster node, turn on the adapter.


    # ifconfig adapter plumb
    # ifconfig adapter hostname netmask + broadcast + -trailers up
    

  7. Verify that the adapter is configured correctly.


    # ifconfig adapter
    

    The output should contain the correct IP address for the adapter.
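
    For illustration, using the hme3 adapter and the phys-schost-1 hostname from the earlier example in this procedure, Step 6 and Step 7 might look like the following.

    # ifconfig hme3 plumb
    # ifconfig hme3 phys-schost-1 netmask + broadcast + -trailers up
    # ifconfig hme3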

  8. Configure PNM and set up NAFO groups.

    Go to "How to Configure Public Network Management (PNM)".

    Each public network adapter to be managed by the Resource Group Manager (RGM) must belong to a NAFO group.

How to Configure Public Network Management (PNM)

Perform this task on each node of the cluster.


Note -

All public network adapters must belong to a Network Adapter Failover (NAFO) group. Also, each node can have only one NAFO group per subnet.


  1. Have available your completed "Public Networks Worksheet" from the Sun Cluster 3.0 Release Notes.

  2. Become superuser on the node to configure for a NAFO group.

  3. Create the NAFO group.


    # pnmset -c nafo-group -o create adapter [adapter ...]
    -c nafo-group

    Configures the NAFO group nafo-group

    -o create adapter

    Creates a new NAFO group that contains one or more public network adapters

    See the pnmset(1M) man page for more information.

  4. Verify the status of the NAFO group.


    # pnmstat -l
    

    See the pnmstat(1M) man page for more information.

  5. Do you intend to change any private hostnames?

    • If yes, go to "How to Change Private Hostnames".

    • If no, go to "How to Update Network Time Protocol (NTP)".

Example--Configuring PNM

The following example creates NAFO group nafo0, which uses public network adapters qfe1 and qfe5.


# pnmset -c nafo0 -o create qfe1 qfe5
# pnmstat -l
group  adapters       status  fo_time    act_adp
nafo0  qfe1:qfe5      OK      NEVER      qfe5
nafo1  qfe6           OK      NEVER      qfe6

How to Change Private Hostnames

Perform this task if you do not want to use the default private hostnames (clusternodenodeid-priv, where nodeid is the numeric node ID) that are assigned during Sun Cluster software installation.


Note -

Do not perform this procedure after applications and data services have been configured and started. Otherwise, an application or data service might continue to use the old private hostname after it is renamed, which would cause hostname conflicts. If any applications or data services are running, stop them before you perform this procedure.


  1. Become superuser on a node in the cluster.

  2. Start the scsetup(1M) utility.


    # scsetup
    

  3. To work with private hostnames, type 5 (Private hostnames).

  4. To change a private hostname, type 1 (Change a private hostname).

    Follow the prompts to change the private hostname. Repeat for each private hostname to change.

  5. Verify the new private hostnames.


    # scconf -pv | grep 'private hostname'
    (phys-schost-1) Node private hostname:      phys-schost-1-priv
    (phys-schost-3) Node private hostname:      phys-schost-3-priv
    (phys-schost-2) Node private hostname:      phys-schost-2-priv

  6. Update the /etc/inet/ntp.conf file.

    Go to "How to Update Network Time Protocol (NTP)".

How to Update Network Time Protocol (NTP)

Perform this task on each node.

  1. Become superuser on the cluster node.

  2. Edit the /etc/inet/ntp.conf file.

    The scinstall(1M) command copies a template file, ntp.cluster, to /etc/inet/ntp.conf as part of standard cluster installation. But if an ntp.conf file already exists before Sun Cluster software is installed, that existing file remains unchanged. If cluster packages are installed by using other means, such as direct use of pkgadd(1M), you need to configure NTP.

    1. Remove all entries for private hostnames that are not used by the cluster.

      If the ntp.conf file contains entries for private hostnames that do not exist, error messages are generated when a node is rebooted and attempts to contact those hostnames. (A sample excerpt is shown after these steps.)

    2. If you changed any private hostnames after Sun Cluster software installation, update each file entry with the new private hostname.

    3. If necessary, make other modifications to meet your NTP requirements.

      The primary requirement when you configure NTP, or any time synchronization facility, within the cluster is that all cluster nodes be synchronized to the same time. Consider accuracy of time on individual nodes secondary to the synchronization of time among nodes. You are free to configure NTP as best meets your individual needs, as long as this basic requirement for synchronization is met.

      See Sun Cluster 3.0 U1 Concepts for further information about cluster time. See the ntp.cluster template for guidelines on how to configure NTP for a Sun Cluster configuration.
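
    For illustration, a hypothetical excerpt from /etc/inet/ntp.conf for a two-node cluster that uses the default private hostnames might contain peer entries such as the following. The exact contents of your file depend on the ntp.cluster template and on any local modifications. Remove the entries for nodes that do not exist in your cluster, and substitute any private hostnames that you changed.

    peer clusternode1-priv prefer
    peer clusternode2-priv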

  3. Restart the NTP daemon.


    # /etc/init.d/xntpd stop
    # /etc/init.d/xntpd start
    

  4. Do you intend to use Sun Management Center to configure resource groups or monitor the cluster?

    • If yes, go to "Installing the Sun Cluster Module for Sun Management Center".

    • If no, install third-party applications, register resource types, set up resource groups, and configure data services. See the documentation supplied with the application software and the Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide.