Sun Cluster 3.0 Installation Guide

Configuring the Cluster

The following table lists the tasks to perform to configure your cluster.

Table 2-2 Task Map: Configuring the Cluster

  Task: Perform post-installation setup.
  For instructions, go to: "How to Perform Post-Installation Setup"

  Task: Configure Solstice DiskSuite or VERITAS Volume Manager software and device groups.
  For instructions, go to: "How to Configure Volume Manager Software" and your volume manager documentation

  Task: Create and mount cluster file systems.
  For instructions, go to: "How to Add Cluster File Systems"

  Task: (Optional) Configure additional public network adapters.
  For instructions, go to: "How to Configure Additional Public Network Adapters"

  Task: Configure Public Network Management (PNM) and set up NAFO groups.
  For instructions, go to: "How to Configure Public Network Management (PNM)"

  Task: (Optional) Change a node's private hostname.
  For instructions, go to: "How to Change Private Hostnames"

  Task: Edit the /etc/inet/ntp.conf file to update node name entries.
  For instructions, go to: "How to Update Network Time Protocol (NTP)"

  Task: (Optional) Install the Sun Cluster module to Sun Management Center software.
  For instructions, go to: "Installation Requirements for Sun Management Center Software for Sun Cluster Monitoring" and Sun Management Center documentation

  Task: Install third-party applications and configure the applications, data services, and resource groups.
  For instructions, go to: Sun Cluster 3.0 Data Services Installation and Configuration Guide and third-party application documentation

How to Perform Post-Installation Setup

Perform this procedure one time only, after the cluster is fully formed.

  1. Verify that all nodes have joined the cluster.

    1. From one node, display a list of cluster nodes to verify that all nodes have joined the cluster.

      You do not need to be logged in as superuser to run this command.


      % scstat -n
      

      Output resembles the following.


      -- Cluster Nodes --
                         Node name      Status
                         ---------      ------
        Cluster node:    phys-schost-1  Online
        Cluster node:    phys-schost-2  Online
    2. On each node, display a list of all the devices that the system checks to verify their connectivity to the cluster nodes.

      You do not need to be logged in as superuser to run this command.


      % scdidadm -L
      

      The list on each node should be the same. Output resembles the following.


      1       phys-schost-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
      2       phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
      2       phys-schost-2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
      3       phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
      3       phys-schost-2:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
      ...
    3. Identify from the scdidadm output the global device ID (DID) name of each shared disk you will configure as a quorum device.

      For example, the output in the previous substep shows that global device d2 is shared by phys-schost-1 and phys-schost-2. You need this information in Step 4. Refer to "Quorum Devices" for further information about planning quorum devices.

  2. Become superuser on one node of the cluster.

  3. Start the scsetup(1M) utility.


    # scsetup
    

    The Initial Cluster Setup screen is displayed.


    Note -

    If the Main Menu is displayed instead, this procedure has already been successfully performed.


  4. Respond to the prompts.

    1. At the prompt Do you want to add any quorum disks?, configure at least one shared quorum device if your cluster is a two-node cluster.

      A two-node cluster remains in install mode until a shared quorum device is configured. After the scsetup utility configures the quorum device, the message Command completed successfully is displayed. If your cluster has three or more nodes, configuring a quorum device is optional.

    2. At the prompt Is it okay to reset "installmode"?, answer Yes.

      After the scsetup utility sets quorum configurations and vote counts for the cluster, the message Cluster initialization is complete is displayed and the utility returns you to the Main Menu.


    Note -

    If the quorum setup process is interrupted or fails to complete successfully, rerun Step 3 and Step 4.


  5. From any node, verify that cluster install mode is disabled.


    # scconf -p | grep 'Cluster install mode:'
    Cluster install mode:                                  disabled
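
You can also confirm the quorum configuration itself from any node by using the scstat(1M) command. This optional check is only a sketch; the output summarizes the quorum votes that each node and quorum device contributes, and its exact format depends on your configuration.


# scstat -q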

Where to Go From Here

To configure volume manager software, go to "How to Configure Volume Manager Software".

How to Configure Volume Manager Software

  1. Have available the following information.

    • Mappings of your storage disk drives

    • The following completed configuration planning worksheets from Sun Cluster 3.0 Release Notes.

      • "Local File System Layout Worksheet"

      • "Disk Device Group Configurations Worksheet"

      • "Volume Manager Configurations Worksheet"

      • "Metadevices Worksheet (Solstice DiskSuite)"

      See Chapter 1, Planning the Sun Cluster Configuration for planning guidelines.

  2. Follow the appropriate configuration procedures for your volume manager.

    Volume Manager: Solstice DiskSuite
    Documentation: Appendix A, Configuring Solstice DiskSuite Software, and Solstice DiskSuite documentation

    Volume Manager: VERITAS Volume Manager
    Documentation: Appendix B, Configuring VERITAS Volume Manager, and VERITAS Volume Manager documentation

Where to Go From Here

After configuring your volume manager, to create a cluster file system, go to "How to Add Cluster File Systems".

How to Add Cluster File Systems

Perform this task for each cluster file system you add.


Caution -

Creating a file system destroys any data on the disks. Be sure you have specified the correct disk device name. If you specify the wrong device name, you will erase the data on that device when the new file system is created.


  1. Become superuser on any node in the cluster.


    Tip -

    For faster file-system creation, become superuser on the current primary of the global device for which you are creating a file system.


  2. Create a file system by using the newfs(1M) command.


    # newfs raw-disk-device
    

    The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.

    Table 2-3 Sample Raw Disk Device Names

    Volume Manager: Solstice DiskSuite
    Sample Disk Device Name: /dev/md/oracle/rdsk/d1
    Description: Raw disk device d1 within the oracle diskset

    Volume Manager: VERITAS Volume Manager
    Sample Disk Device Name: /dev/vx/rdsk/oradg/vol01
    Description: Raw disk device vol01 within the oradg disk group

    Volume Manager: None
    Sample Disk Device Name: /dev/global/rdsk/d1s3
    Description: Raw disk device d1s3

  3. On each node in the cluster, create a mount-point directory for the cluster file system.

    A mount point is required on each node, even if the cluster file system will not be accessed on that node.


    # mkdir -p /global/device-group/mount-point
    
    device-group

    Name of the directory that corresponds to the name of the device group that contains the device

    mount-point

    Name of the directory on which to mount the cluster file system


    Tip -

    For ease of administration, create the mount point in the /global/device-group directory. This location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.


  4. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.


    Note -

    The syncdir mount option is not required for cluster file systems. If you specify syncdir, you are guaranteed POSIX-compliant file system behavior. If you do not, you will have the same behavior that is seen with UFS file systems. Not specifying syncdir can significantly improve performance of writes that allocate disk blocks, such as when appending data to a file. However, in some cases, without syncdir you would not discover an out-of-space condition until you close a file. The cases in which you could have problems if you do not specify syncdir are rare. With syncdir (and POSIX behavior), the out-of-space condition would be discovered before the close.


    1. To automatically mount the cluster file system, set the mount at boot field to yes.

    2. Use the following required mount options.

      • If you are using Solaris UFS logging, use the global,logging mount options.

      • If a cluster file system uses a Solstice DiskSuite trans metadevice, use the global mount option (do not use the logging mount option). Refer to Solstice DiskSuite documentation for information about setting up trans metadevices.


      Note -

      Logging is required for all cluster file systems.


    3. Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.

    4. Check the boot order dependencies of the file systems.

      For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle, and phys-schost-2 mounts disk device d1 on /global/oracle/logs. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs only after phys-schost-1 boots and mounts /global/oracle. An illustrative /etc/vfstab sketch of this dependency appears after the example at the end of this procedure.

    5. Make sure the entries in each node's /etc/vfstab file list devices in the same order.

    Refer to the vfstab(4) man page for details.

  5. On any node in the cluster, verify that mount points exist and /etc/vfstab file entries are correct on all nodes of the cluster.


    # sccheck
    

    If no errors occur, nothing is returned.

  6. From any node in the cluster, mount the cluster file system.


    # mount /global/device-group/mount-point
    
  7. On each node of the cluster, verify that the cluster file system is mounted.

    You can use either the df(1M) or mount(1M) command to list mounted file systems.
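
    For example, a minimal df(1M) check of a single cluster file system, reusing the placeholder names from Step 6, might look like the following.


    # df -k /global/device-group/mount-point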

Example--Creating a Cluster File System

The following example creates a UFS cluster file system on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.


# newfs /dev/md/oracle/rdsk/d1
...
 
(on each node:)
# mkdir -p /global/oracle/d1
# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#                       
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
(save and exit)
 
(on one node:)
# sccheck
# mount /global/oracle/d1
# mount
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/
largefiles on Sun Oct 3 08:56:16 1999
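
The boot order dependency described in Step 4 can also be sketched in /etc/vfstab terms. The following lines are an illustrative sketch only, based on the scenario in that step: they assume global devices d0 and d1 with a hypothetical slice s0 (rather than volume manager devices) and the usual global,logging mount options. Your device names will differ.


(in each node's /etc/vfstab:)
/dev/global/dsk/d0s0 /dev/global/rdsk/d0s0 /global/oracle      ufs 2 yes global,logging
/dev/global/dsk/d1s0 /dev/global/rdsk/d1s0 /global/oracle/logs ufs 2 yes global,logging

Because /global/oracle/logs is nested under /global/oracle, the first entry must be mounted before the second, and both entries must appear in the same order in every node's /etc/vfstab file.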

Where to Go From Here

If your cluster nodes are connected to more than one public subnet, to configure additional public network adapters, go to "How to Configure Additional Public Network Adapters".

Otherwise, to configure PNM and set up NAFO groups, go to "How to Configure Public Network Management (PNM)".

How to Configure Additional Public Network Adapters

If the nodes in the cluster are connected to more than one public subnet, you can configure additional public network adapters for the secondary subnets. However, configuring secondary subnets is not required.


Note -

Configure only public network adapters, not private network adapters.


  1. Have available your completed "Public Networks Worksheet" from Sun Cluster 3.0 Release Notes.

  2. Become superuser on the node being configured for additional public network adapters.

  3. Create a file named /etc/hostname.adapter, where adapter is the adapter name.


    Note -

    In each NAFO group, an /etc/hostname.adapter file should exist for only one adapter in the group.


  4. Type the hostname associated with the public network adapter's IP address into the /etc/hostname.adapter file.

    For example, the following shows the file /etc/hostname.hme3, created for the adapter hme3; the file contains the hostname phys-schost-1.


    # vi /etc/hostname.hme3
    phys-schost-1 
  5. On each cluster node, ensure that the /etc/inet/hosts file contains the IP address and corresponding hostname assigned to the public network adapter.

    For example, the following shows the entry for phys-schost-1.


    # vi /etc/inet/hosts
    ...
    192.29.75.101 phys-schost-1
    ...


    Note -

    If you use a naming service, this information should also exist in the naming service database.


  6. On each cluster node, turn on the adapter.


    # ifconfig adapter plumb
    # ifconfig adapter hostname netmask + broadcast + -trailers up
    

  7. Verify that the adapter is configured correctly.


    # ifconfig adapter
    

    The output should contain the correct IP address for the adapter.
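
Pulling Steps 3 through 7 together for the hme3 adapter and the phys-schost-1 hostname used in the examples above, an end-to-end sketch might look like the following. The sketch is illustrative only; the echo command is simply one way to create the one-line /etc/hostname.hme3 file, and the sketch assumes the /etc/inet/hosts entry from Step 5 is already in place.


# echo phys-schost-1 > /etc/hostname.hme3
# ifconfig hme3 plumb
# ifconfig hme3 phys-schost-1 netmask + broadcast + -trailers up
# ifconfig hme3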

Where to Go From Here

Each public network adapter to be managed by the Resource Group Manager (RGM) must belong to a NAFO group. To configure PNM and set up NAFO groups, go to "How to Configure Public Network Management (PNM)".

How to Configure Public Network Management (PNM)

Perform this task on each node of the cluster.


Note -

All public network adapters must belong to a Network Adapter Failover (NAFO) group. Also, each node can have only one NAFO group per subnet.


  1. Have available your completed "Public Networks Worksheet" from Sun Cluster 3.0 Release Notes.

  2. Become superuser on the node being configured for a NAFO group.

  3. Create the NAFO group.


    # pnmset -c nafo_group -o create adapter [adapter ...]
    -c nafo_group

    Configures the NAFO group nafo_group

    -o create adapter

    Creates a new NAFO group containing one or more public network adapters

    Refer to the pnmset(1M) man page for more information.

  4. Verify the status of the NAFO group.


    # pnmstat -l
    

    Refer to the pnmstat(1M) man page for more information.

Example--Configuring PNM

The following example creates NAFO group nafo0, which uses public network adapters qfe1 and qfe5.


# pnmset -c nafo0 -o create qfe1 qfe5
# pnmstat -l
group  adapters       status  fo_time    act_adp
nafo0  qfe1:qfe5      OK      NEVER      qfe5
nafo1  qfe6           OK      NEVER      qfe6
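
The nafo1 group that appears in the pnmstat output above would have been created the same way. For example, assuming qfe6 is a public network adapter on a second subnet, the following sketch creates a single-adapter NAFO group for it.


# pnmset -c nafo1 -o create qfe6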

Where to Go From Here

If you want to change any private hostnames, go to "How to Change Private Hostnames". Otherwise, to update the /etc/inet/ntp.conf file, go to "How to Update Network Time Protocol (NTP)".

How to Change Private Hostnames

Perform this task if you do not want to use the default private hostnames (clusternodenodeid-priv, where nodeid is the node's ID number, for example clusternode1-priv) assigned during Sun Cluster software installation.


Note -

This procedure should not be performed after applications and data services have been configured and started. Otherwise, an application or data service might continue using the old private hostname after it has been renamed, causing hostname conflicts. If any applications or data services are running, stop them before performing this procedure.


  1. Become superuser on a node in the cluster.

  2. Start the scsetup(1M) utility.


    # scsetup
    
  3. To work with private hostnames, type 4 (Private hostnames).

  4. To change a private hostname, type 1 (Change a private hostname).

    Follow the prompts to change the private hostname. Repeat for each private hostname you want to change.

  5. Verify the new private hostnames.


    # scconf -pv | grep 'private hostname'
    (phys-schost-1) Node private hostname:      phys-schost-1-priv
    (phys-schost-3) Node private hostname:      phys-schost-3-priv
    (phys-schost-2) Node private hostname:      phys-schost-2-priv

Where to Go From Here

To update the /etc/inet/ntp.conf file, go to "How to Update Network Time Protocol (NTP)".

How to Update Network Time Protocol (NTP)

Perform this task on each node.

  1. Become superuser on the cluster node.

  2. Edit the /etc/inet/ntp.conf file.

    The scinstall(1M) command copies a template file, ntp.cluster, to /etc/inet/ntp.conf as part of standard cluster installation. But if an ntp.conf file already exists before Sun Cluster software is installed, that existing file remains unchanged. If cluster packages are installed by using other means, such as direct use of pkgadd(1M), you need to configure NTP.

    1. Remove all entries for private hostnames that are not used by the cluster.

      If the ntp.conf file contains entries for private hostnames that do not exist, error messages are generated at reboot when the node attempts to contact those hostnames. An illustrative sketch of the retained entries appears after Step 3.

    2. If you changed any private hostnames after Sun Cluster software installation, update each file entry with the new private hostname.

    3. If necessary, make other modifications to meet your NTP requirements.

      The primary requirement when configuring NTP, or any time synchronization facility, within the cluster is that all cluster nodes be synchronized to the same time. Consider accuracy of time on individual nodes secondary to the synchronization of time among nodes. You are free to configure NTP as best meets your individual needs, as long as this basic requirement for synchronization is met.

      Refer to Sun Cluster 3.0 Concepts for further information about cluster time and to the ntp.cluster template for guidelines on configuring NTP for a Sun Cluster configuration.

  3. Restart the NTP daemon.


    # /etc/init.d/xntpd stop
    # /etc/init.d/xntpd start
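
As an illustration of the cleanup in Step 2, the private hostname entries in the NTP configuration are typically ordinary NTP peer declarations. In a two-node cluster that keeps the default private hostnames, the entries you retain in /etc/inet/ntp.conf might resemble the following sketch, with any entries for unused private hostnames (for example, clusternode3-priv and higher) removed. The exact contents of the ntp.cluster template can differ, so treat this only as a sketch.


peer clusternode1-priv
peer clusternode2-priv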
    

Where to Go From Here

If you want to use the Sun Management Center product to configure resource groups or monitor the cluster, go to "Installation Requirements for Sun Management Center Software for Sun Cluster Monitoring".

Otherwise, to install third-party applications, refer to the documentation supplied with the application software and to Sun Cluster 3.0 Data Services Installation and Configuration Guide. To register resource types, set up resource groups, and configure data services, refer to Sun Cluster 3.0 Data Services Installation and Configuration Guide.