
Oracle® Solaris Cluster 4.3 Software Installation Guide


Updated: June 2019

How to Configure Quorum Devices


Note -  You do not need to configure quorum devices in the following circumstances:
  • You chose automatic quorum configuration during Oracle Solaris Cluster software configuration.

  • You installed a single-node global cluster.

  • You added a node to an existing global cluster and already have sufficient quorum votes assigned.

If you chose automatic quorum configuration when you established the cluster, do not perform this procedure. Instead, proceed to How to Verify the Quorum Configuration and Installation Mode.


Perform this procedure one time only, after the new cluster is fully formed. Use this procedure to assign quorum votes and then to remove the cluster from installation mode.

Before You Begin

  • Quorum servers – To configure a quorum server as a quorum device, do the following:

    • Install the Oracle Solaris Cluster Quorum Server software on the quorum server host machine and start the quorum server. For information about installing and starting the quorum server, see How to Install and Configure Oracle Solaris Cluster Quorum Server Software.

    • Ensure that network switches that are directly connected to cluster nodes meet one of the following criteria:

      • The switch supports Rapid Spanning Tree Protocol (RSTP).

      • Fast port mode is enabled on the switch.

      One of these features is required to ensure immediate communication between cluster nodes and the quorum server. If this communication is significantly delayed by the switch, the cluster interprets the delay as loss of the quorum device.

    • Have available the following information:

      • A name to assign to the configured quorum device

      • The IP address of the quorum server host machine

      • The port number of the quorum server

  • NAS devices – To configure a network-attached storage (NAS) device as a quorum device, install and set up the NAS device as described in its product documentation.

  1. If both of the following conditions apply, ensure that the correct prefix length is set for the public-network addresses.
    • You intend to use a quorum server.

    • The public network uses variable-length subnet masking, also called classless inter-domain routing (CIDR).

    Display the prefix length of each public-network address:

    # ipadm show-addr
    ADDROBJ           TYPE     STATE        ADDR
    lo0/v4            static   ok           127.0.0.1/8
    ipmp0/v4          static   ok           10.134.94.58/24

    Note -  If you use a quorum server but the public network uses classful subnets as defined in RFC 791, you do not need to perform this step.
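    The prefix length is the number after the slash in the ADDR column. As a minimal sketch of how to pull it out, the following parses sample `ipadm show-addr` output matching the example above; on a live node you would pipe the output of `ipadm show-addr` itself into the same awk command:

    ```shell
    # Sample output in the shape of `ipadm show-addr` (stand-in for the live command).
    sample='ADDROBJ           TYPE     STATE        ADDR
    lo0/v4            static   ok           127.0.0.1/8
    ipmp0/v4          static   ok           10.134.94.58/24'

    # Skip the header line, then split the ADDR column on "/" to get the prefix length.
    printf '%s\n' "$sample" | awk 'NR > 1 { split($4, a, "/"); print $1, "prefix:", a[2] }'
    ```

    Compare each reported prefix length against the subnet plan for your public network.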
  2. On one node, assume the root role.

    Alternatively, if your user account is assigned the System Administrator profile, issue commands as non-root through a profile shell, or prefix the command with the pfexec command.

  3. Ensure that all cluster nodes are online.
    phys-schost# cluster status -t node
  4. To use a shared disk as a quorum device, verify device connectivity to the cluster nodes and choose the device to configure.
    1. From one node of the cluster, display a list of all the devices that the system checks.

      You do not need to be logged in as the root role to run this command.

      phys-schost-1# cldevice list -v

      Output resembles the following:

      DID Device          Full Device Path
      ----------          ----------------
      d1                  phys-schost-1:/dev/rdsk/c0t0d0
      d2                  phys-schost-1:/dev/rdsk/c0t6d0
      d3                  phys-schost-2:/dev/rdsk/c1t1d0
      d3                  phys-schost-1:/dev/rdsk/c1t1d0
      …
    2. Ensure that the output shows all connections between cluster nodes and storage devices.
    3. Determine the global device ID of each shared disk that you are configuring as a quorum device.

      Note -  Any shared disk that you choose must be qualified for use as a quorum device. See Quorum Devices for further information about choosing quorum devices.

      Use the cldevice output from Step 4.a to identify the device ID of each shared disk that you are configuring as a quorum device. For example, the output in Step 4.a shows that global device d3 is shared by phys-schost-1 and phys-schost-2.
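      A disk is shared when the same DID device name appears under more than one node. As a hedged sketch, the following counts node paths per DID in sample output shaped like the `cldevice list -v` listing above; on a live cluster you would pipe the command's output (minus its header lines) into the same awk command:

      ```shell
      # Sample body of `cldevice list -v` output (stand-in for the live command).
      sample='d1                  phys-schost-1:/dev/rdsk/c0t0d0
      d2                  phys-schost-1:/dev/rdsk/c0t6d0
      d3                  phys-schost-2:/dev/rdsk/c1t1d0
      d3                  phys-schost-1:/dev/rdsk/c1t1d0'

      # Print each DID that is listed with more than one device path,
      # that is, each disk visible from multiple nodes.
      printf '%s\n' "$sample" | awk '{ count[$1]++ } END { for (d in count) if (count[d] > 1) print d }'
      ```

      Here only d3 is reported, matching the observation that d3 is shared by phys-schost-1 and phys-schost-2.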

  5. To use a shared disk that does not support the SCSI protocol, ensure that fencing is disabled for that shared disk.
    1. Display the fencing setting for the individual disk.
      phys-schost# cldevice show device
      
      === DID Device Instances ===
      DID Device Name:                                      /dev/did/rdsk/dN
      …
      default_fencing:                                     nofencing
      • If fencing for the disk is set to nofencing or nofencing-noscrub, fencing is disabled for that disk. Go to Step 6.
      • If fencing for the disk is set to pathcount or scsi, disable fencing for the disk. Skip to Step 5.c.
      • If fencing for the disk is set to global, determine whether fencing is also disabled globally. Proceed to Step 5.b.

        Alternatively, you can simply disable fencing for the individual disk, which overrides for that disk whatever value the global_fencing property is set to. Skip to Step 5.c to disable fencing for the individual disk.

    2. Determine whether fencing is disabled globally.
      phys-schost# cluster show -t global
      
      === Cluster ===
      Cluster name:                                         cluster
      …
      global_fencing:                                      nofencing
      • If global fencing is set to nofencing or nofencing-noscrub, fencing is disabled for the shared disk whose default_fencing property is set to global. Go to Step 6.
      • If global fencing is set to pathcount or prefer3, disable fencing for the shared disk. Proceed to Step 5.c.

      Note -  If an individual disk has its default_fencing property set to global, the fencing for that individual disk is disabled only while the cluster-wide global_fencing property is set to nofencing or nofencing-noscrub. If the global_fencing property is changed to a value that enables fencing, then fencing becomes enabled for all disks whose default_fencing property is set to global.
    3. Disable fencing for the shared disk.
      phys-schost# cldevice set \
      -p default_fencing=nofencing-noscrub device
    4. Verify that fencing for the shared disk is now disabled.
      phys-schost# cldevice show device
  6. Start the clsetup utility.
    phys-schost# clsetup

    The Initial Cluster Setup screen is displayed.


    Note -  If the Main Menu is displayed instead, the initial cluster setup was already successfully performed. Skip to Step 11.
  7. Indicate whether you want to add any quorum devices.
    • If your cluster is a two-node cluster, you must configure at least one shared quorum device. Type Yes to configure one or more quorum devices.
    • If your cluster has three or more nodes, quorum device configuration is optional.
      • Type No if you do not want to configure additional quorum devices. Then skip to Step 10.
      • Type Yes to configure additional quorum devices.
  8. Specify what type of device you want to configure as a quorum device.
    Quorum Device Type    Description
    shared_disk           Shared LUNs from a shared SCSI disk, Serial Advanced Technology Attachment (SATA) storage, or an Oracle ZFS Storage Appliance
    quorum_server         A quorum server
  9. Specify the name of the device to configure as a quorum device and provide any required additional information.
    • For a quorum server, also specify the following information:

      • The IP address of the quorum server host

      • The port number that is used by the quorum server to communicate with the cluster nodes

  10. Type Yes to verify that it is okay to reset installmode.

    After the clsetup utility sets the quorum configurations and vote counts for the cluster, the message Cluster initialization is complete is displayed. The utility returns you to the Main Menu.

  11. Quit the clsetup utility.

Next Steps

Verify the quorum configuration and that installation mode is disabled. Go to How to Verify the Quorum Configuration and Installation Mode.

Troubleshooting

scinstall fails to perform an automatic configuration – If scinstall fails to automatically configure a shared disk as a quorum device, or if the cluster's installmode state is still enabled, you can configure a quorum device and reset installmode by using the clsetup utility after scinstall processing is completed.

Interrupted clsetup processing – If the quorum setup process is interrupted or fails to be completed successfully, rerun clsetup.

Changes to quorum vote count – If you later increase or decrease the number of node attachments to a quorum device, the quorum vote count is not automatically recalculated. You can reestablish the correct quorum vote by removing each quorum device and then adding it back into the configuration, one quorum device at a time. For a two-node cluster, temporarily add a new quorum device before you remove and add back the original quorum device. Then remove the temporary quorum device. See the procedure “How to Modify a Quorum Device Node List” in Chapter 6, Administering Quorum in Oracle Solaris Cluster 4.3 System Administration Guide.
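As a hedged sketch of that remove-and-add sequence on a two-node cluster, assuming d3 is the existing shared-disk quorum device and d4 is a second shared disk available to act as the temporary device (both device names are placeholders, not values from this procedure):

    phys-schost# clquorum add d4
    phys-schost# clquorum remove d3
    phys-schost# clquorum add d3
    phys-schost# clquorum remove d4

The temporary device d4 keeps the cluster quorate while d3 is out of the configuration.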

Unreachable quorum device – If you see messages on the cluster nodes that a quorum device is unreachable or if you see failures of cluster nodes with the message CMM: Unable to acquire the quorum device, there might be a problem with the quorum device or the path to it. Check that both the quorum device and the path to it are functional.

If the problem persists, use a different quorum device. Or, if you want to use the same quorum device, increase the quorum timeout to a high value, as follows:


Note -  For Oracle Real Application Clusters (Oracle RAC), do not change the default quorum timeout of 25 seconds. In certain split-brain scenarios, a longer timeout period might cause Oracle RAC VIP failover to fail because the VIP resource times out. If the quorum device in use does not conform to the default 25-second timeout, use a different quorum device.
  1. Assume the root role.

  2. On each cluster node, edit the /etc/system file as the root role to set the timeout to a high value.

    The following example sets the timeout to 700 seconds.

    phys-schost# pfedit /etc/system
    …
    set cl_haci:qd_acquisition_timer=700
  3. From one node, shut down the cluster.

    phys-schost-1# cluster shutdown -g0 -y
  4. Boot each node back into the cluster.

    Changes to the /etc/system file take effect after the reboot.
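After the reboot, you can confirm that the setting survived by searching /etc/system for the tunable. As a minimal sketch, the following demonstrates the check against a sample file (/tmp/system.sample is a stand-in; on a cluster node you would grep /etc/system itself):

```shell
# Write a sample file containing the tunable, standing in for /etc/system.
printf 'set cl_haci:qd_acquisition_timer=700\n' > /tmp/system.sample

# Count matching lines; a result of 1 confirms the setting is present once.
grep -c 'cl_haci:qd_acquisition_timer' /tmp/system.sample
```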