2.14.7 Configure Quorum Disks for a High Redundancy Disk Group with Fewer Than Five Failure Groups

To ensure data availability and integrity, Oracle ASM high redundancy disk groups with fewer than five failure groups (storage servers) require quorum disks. Oracle Exadata Exachk verifies compliance with this requirement.

Additionally, beginning with Oracle Grid Infrastructure 19c release update 19.14, high redundancy sparse disk groups also support quorum disks.

Use this procedure to check your system and configure quorum disks as needed.

Perform the following checks before configuring quorum disks:

  1. Connect to your Oracle ASM instance as an ASM administrator, and run the following query to identify high redundancy disk groups with fewer than five failure groups and without the required quorum disks:

    SQL> SELECT dg.name DISK_GROUP_NAME, dg.state, dg.type REDUNDANCY, COUNT(DISTINCT d.failgroup) FAILURE_GROUPS
         FROM v$asm_diskgroup dg JOIN v$asm_disk d USING (group_number)
         WHERE dg.type = 'HIGH' AND group_number NOT IN
             ( SELECT group_number
               FROM v$asm_disk
               WHERE failgroup_type = 'QUORUM'
               GROUP BY group_number
               HAVING COUNT(group_number) >= 2 )
         GROUP BY dg.name, dg.state, dg.type
         HAVING COUNT(DISTINCT d.failgroup) < 5;
    
    DISK_GROUP_NAME                STATE       REDUNDANCY FAILURE_GROUPS
    ------------------------------ ----------- ---------- --------------
    SPAR1                          MOUNTED     HIGH                    3

    The example output shows one disk group, named SPAR1, without the required quorum disks. The commands in the remainder of this procedure build on this example. Where necessary, modify the example commands to suit your environment.

    Note:

    If no rows are returned, no further action is required.

  2. Validate the status of any existing quorum disks.

    For example:

    SQL> set lines 160
    SQL> set pages 100
    SQL> SELECT dg.name disk_group, d.name, d.mode_status, d.state, d.header_status
         FROM v$asm_diskgroup dg JOIN v$asm_disk d USING (group_number) 
         WHERE d.failgroup_type = 'QUORUM' ;
    
    DISK_GROUP                     NAME                           MODE_STATUS STATE    HEADER_STATUS
    ------------------------------ ------------------------------ ----------- -------- ------------
    DATA1                          QD_DATA1_DBNODE01              ONLINE      NORMAL   MEMBER
    DATA1                          QD_DATA1_DBNODE02              ONLINE      NORMAL   MEMBER
    RECO1                          QD_RECO1_DBNODE01              ONLINE      NORMAL   MEMBER
    RECO1                          QD_RECO1_DBNODE02              ONLINE      NORMAL   MEMBER

    For existing quorum disks, check that MODE_STATUS=ONLINE, STATE=NORMAL, and each quorum disk is a member of the corresponding disk group.

    Note:

    If an existing quorum disk has an issue, stop and contact Oracle Support for assistance.

  3. As the root OS user, check the existing quorum disk configuration on all database nodes.

    For example:

    [root@dbnode01 ~]# /opt/oracle.SupportTools/quorumdiskmgr --list --config
    Owner: oracle
    Group: dba
    ifaces: exadata_re0 exadata_re1
    Initiator name: iqn.1988-12.com.example:192.168.8.53
    
    [root@dbnode02 ~]# /opt/oracle.SupportTools/quorumdiskmgr --list --config
    Owner: oracle
    Group: dba
    ifaces: exadata_re0 exadata_re1
    Initiator name: iqn.1988-12.com.example:192.168.8.55

    Note:

    Quorum disk configuration must exist on all cluster nodes. If absent on any node, stop and contact Oracle Support for assistance.

Use the following procedure to add quorum disks to a high redundancy disk group with fewer than five failure groups:

  1. Obtain cluster interconnect interface names and IP addresses for each database node.
    1. To determine the cluster interconnect interface names, run the oifcfg utility from the Oracle Grid Infrastructure (GI) home as the root OS user.

      For example:

      [root@dbnode01 ~]# /u01/app/19.22.0.0/grid/bin/oifcfg getif
      bondeth0  10.128.0.0  global  public
      re0  192.168.8.0  global  cluster_interconnect,asm
      re1  192.168.8.0  global  cluster_interconnect,asm

      In the example output, the interface names are re0 and re1.

      Note:

      Alternatively, depending on the Exadata platform and deployment type, the interface names could be clre0 and clre1 or ib0 and ib1.

    2. On each database node, use the cluster interconnect interface names to obtain the associated IP addresses.

      For example:

      [root@dbnode01 ~]# ip addr show re0 | grep inet
          inet 192.168.8.53/24 brd 192.168.8.255 scope global noprefixroute re0
      [root@dbnode01 ~]# ip addr show re1 | grep inet
          inet 192.168.8.54/24 brd 192.168.8.255 scope global noprefixroute re1
      
      [root@dbnode02 ~]# ip addr show re0 | grep inet
          inet 192.168.8.55/24 brd 192.168.8.255 scope global noprefixroute re0
      [root@dbnode02 ~]# ip addr show re1 | grep inet
          inet 192.168.8.56/24 brd 192.168.8.255 scope global noprefixroute re1

      In the example output, the IP addresses are:

      • On dbnode01: 192.168.8.53, 192.168.8.54.

      • On dbnode02: 192.168.8.55, 192.168.8.56.
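    When collecting addresses across several nodes, the extraction from the ip addr output can be scripted. The following is a minimal sketch; the get_iface_ip helper is hypothetical (not part of any Oracle tool), and the sample line reproduces the example output above. On a live node you would pipe ip addr show re0 into the helper instead.

    ```shell
    #!/bin/bash
    # Sketch: extract the IPv4 address for a cluster interconnect interface
    # from `ip addr show <iface>` output. Interface names (re0/re1) are taken
    # from the example above; adjust for clre*/ib* platforms as needed.
    get_iface_ip() {
      # Reads `ip addr show <iface>` output on stdin; prints the first
      # IPv4 address (the field before the /prefix length).
      awk '/inet / { split($2, a, "/"); print a[1]; exit }'
    }

    # Sample line captured from the example output above; on a live node:
    #   ip addr show re0 | get_iface_ip
    sample='    inet 192.168.8.53/24 brd 192.168.8.255 scope global noprefixroute re0'
    echo "$sample" | get_iface_ip
    ```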

  2. Create quorum disk targets for the disk group on all database nodes.

    On each cluster node, as the root OS user, run:

    # /opt/oracle.SupportTools/quorumdiskmgr --create --target --asm-disk-group=<DISKGROUP_NAME> --visible-to="<IP1>,<IP2>,<IP3>,<IP4>"

    For example:

    [root@dbnode01 ~]# /opt/oracle.SupportTools/quorumdiskmgr --create --target --asm-disk-group=SPAR1 --visible-to="192.168.8.53,192.168.8.54,192.168.8.55,192.168.8.56"
    
    [INFO     ] [Success] Created logical volume /dev/VGExaDb/LVDbVdDBNODE01SPAR1.
    [INFO     ] [Success] Created backstore QD_SPAR1_DBNODE01.
    [INFO     ] [Success] Created target iqn.2015-05.com.example:qd--spar1--dbnode01.
    
    [root@dbnode02 ~]# /opt/oracle.SupportTools/quorumdiskmgr --create --target --asm-disk-group=SPAR1 --visible-to="192.168.8.53,192.168.8.54,192.168.8.55,192.168.8.56"
    
    [INFO     ] [Success] Created logical volume /dev/VGExaDb/LVDbVdDBNODE02SPAR1.
    [INFO     ] [Success] Created backstore QD_SPAR1_DBNODE02.
    [INFO     ] [Success] Created target iqn.2015-05.com.example:qd--spar1--dbnode02.

    Verify the newly created targets by running:

    # /opt/oracle.SupportTools/quorumdiskmgr --list --target

    For example:

    [root@dbnode01 ~]# /opt/oracle.SupportTools/quorumdiskmgr --list --target
    
    ...
    
    Name: iqn.2015-05.com.example:qd--spar1--dbnode01
    Host name: dbnode01
    ASM disk group name: SPAR1
    Visible to: iqn.1988-12.com.example:192.168.8.53, iqn.1988-12.com.example:192.168.8.54, iqn.1988-12.com.example:192.168.8.55, iqn.1988-12.com.example:192.168.8.56
    Discovered by: 192.168.8.53, 192.168.8.55, 192.168.8.56
    
    [root@dbnode02 ~]# /opt/oracle.SupportTools/quorumdiskmgr --list --target
    
    ...
    
    Name: iqn.2015-05.com.example:qd--spar1--dbnode02
    Host name: dbnode02
    ASM disk group name: SPAR1
    Visible to: iqn.1988-12.com.example:192.168.8.53, iqn.1988-12.com.example:192.168.8.54, iqn.1988-12.com.example:192.168.8.55, iqn.1988-12.com.example:192.168.8.56
    Discovered by: 192.168.8.53, 192.168.8.55

    Note:

    The example output is truncated for brevity. The command output normally shows all quorum disk targets on the current node.
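    The comma-separated --visible-to argument must list every interconnect IP collected in the previous step, on every node. A minimal sketch of assembling it from a list of addresses (the join_ips helper is hypothetical; the IPs are the example values above):

    ```shell
    #!/bin/bash
    # Sketch: build the --visible-to value for quorumdiskmgr from the
    # interconnect IPs gathered earlier. All four addresses (two per node)
    # must be included so every node can reach every target.
    ips=(192.168.8.53 192.168.8.54 192.168.8.55 192.168.8.56)

    join_ips() {
      # Joins all arguments with commas.
      local IFS=','
      echo "$*"
    }

    visible_to=$(join_ips "${ips[@]}")
    echo "$visible_to"
    # The resulting string is then passed as, for example:
    #   /opt/oracle.SupportTools/quorumdiskmgr --create --target \
    #       --asm-disk-group=SPAR1 --visible-to="$visible_to"
    ```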

  3. Create the quorum disk devices on all database nodes.

    On each cluster node, as the root OS user, run:

    # /opt/oracle.SupportTools/quorumdiskmgr --create --device --target-ip-list="<IP1>,<IP2>,<IP3>,<IP4>"

    For example:

    [root@dbnode01 ~]# /opt/oracle.SupportTools/quorumdiskmgr --create --device --target-ip-list="192.168.8.53,192.168.8.54,192.168.8.55,192.168.8.56"
    
    [INFO     ] [Success] created all device(s) from target(s) on machine with IP address 192.168.8.54
    [INFO     ] [Success] created all device(s) from target(s) on machine with IP address 192.168.8.55
    [INFO     ] [Success] created all device(s) from target(s) on machine with IP address 192.168.8.56
    
    [root@dbnode02 ~]# /opt/oracle.SupportTools/quorumdiskmgr --create --device --target-ip-list="192.168.8.53,192.168.8.54,192.168.8.55,192.168.8.56"
    
    [INFO     ] [Success] created all device(s) from target(s) on machine with IP address 192.168.8.53
    [INFO     ] [Success] created all device(s) from target(s) on machine with IP address 192.168.8.54
    [INFO     ] [Success] created all device(s) from target(s) on machine with IP address 192.168.8.56

    Verify the newly created devices by running:

    # /opt/oracle.SupportTools/quorumdiskmgr --list --device

    For example:

    [root@dbnode01 ~]#  /opt/oracle.SupportTools/quorumdiskmgr --list --device
    
    ...
    
    Device path: /dev/exadata_quorum/QD_SPAR1_DBNODE01
    Host name: dbnode01
    ASM disk group name: SPAR1
    Size: 128 MB
    
    Device path: /dev/exadata_quorum/QD_SPAR1_DBNODE02
    Host name: dbnode02
    ASM disk group name: SPAR1
    Size: 128 MB
    
    [root@dbnode02 ~]#  /opt/oracle.SupportTools/quorumdiskmgr --list --device
    
    ...
    
    Device path: /dev/exadata_quorum/QD_SPAR1_DBNODE01
    Host name: dbnode01
    ASM disk group name: SPAR1
    Size: 128 MB
    
    Device path: /dev/exadata_quorum/QD_SPAR1_DBNODE02
    Host name: dbnode02
    ASM disk group name: SPAR1
    Size: 128 MB

    Note:

    The example output is truncated for brevity. The command output normally shows all quorum disk devices on the system.
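    A quick sanity check is that each disk group shows one device per database node (two in this example). A minimal sketch of tallying devices per disk group from the listing output; the count_devices helper is hypothetical, and the sample text reproduces the example output above (on a live node you would pipe the quorumdiskmgr command itself):

    ```shell
    #!/bin/bash
    # Sketch: count quorum disk devices per ASM disk group by parsing
    # `quorumdiskmgr --list --device` output.
    sample='Device path: /dev/exadata_quorum/QD_SPAR1_DBNODE01
    Host name: dbnode01
    ASM disk group name: SPAR1
    Size: 128 MB

    Device path: /dev/exadata_quorum/QD_SPAR1_DBNODE02
    Host name: dbnode02
    ASM disk group name: SPAR1
    Size: 128 MB'

    count_devices() {
      # Splits each line on ": " and tallies the "ASM disk group name" values.
      awk -F': ' '/ASM disk group name/ { n[$2]++ } END { for (g in n) print g, n[g] }'
    }
    echo "$sample" | count_devices
    ```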

  4. Confirm new quorum disk devices are visible as CANDIDATE disks in Oracle ASM.

    Connect to your Oracle ASM instance as an ASM administrator, and run the following query to check the device status.

    
    SQL> set linesize 200 pagesize 100
    SQL> col path format a50
    SQL> select inst_id, label, path, mode_status, header_status 
         from gv$asm_disk 
         where path like '/dev/exadata_quorum/%' 
         order by header_status, inst_id;
    
       INST_ID LABEL                           PATH                                               MODE_STATUS HEADER_STATUS
    ---------- ------------------------------- -------------------------------------------------- ----------- ------------
             1 QD_SPAR1_DBNODE02               /dev/exadata_quorum/QD_SPAR1_DBNODE02              ONLINE      CANDIDATE
             1 QD_SPAR1_DBNODE01               /dev/exadata_quorum/QD_SPAR1_DBNODE01              ONLINE      CANDIDATE
             2 QD_SPAR1_DBNODE01               /dev/exadata_quorum/QD_SPAR1_DBNODE01              ONLINE      CANDIDATE
             2 QD_SPAR1_DBNODE02               /dev/exadata_quorum/QD_SPAR1_DBNODE02              ONLINE      CANDIDATE
    ...

    For each newly created quorum disk, ensure that the header status is CANDIDATE.

    Note:

    The example output is truncated for brevity. The query output normally shows all quorum disk devices on the system.

    If the newly created quorum disks are not visible, the most likely cause is that the Oracle ASM disk discovery string (the asm_diskstring parameter) does not cover their device paths. If this is the case on your system, you must update the asm_diskstring parameter so that it discovers the newly created quorum disks.

    For example:

    1. Check the asm_diskstring parameter:

      SQL> show parameter asm_diskstring
      
      NAME                                 TYPE        VALUE
      ------------------------------------ ----------- -------------------------------------------------------------------
      asm_diskstring                       string      o/*/DATA1_*, o/*/RECO1_*, o/*/SPAR1_*, /dev/exadata_quorum/QD_DATA*
    2. Take a backup of the ASM server parameter file (spfile) before making any changes.

      SQL> create pfile='/tmp/asm_spfile_backup.ora' from spfile;
      
      File created.
    3. Update the asm_diskstring parameter:

      SQL> alter system set asm_diskstring='o/*/DATA1_*', 'o/*/RECO1_*', 'o/*/SPAR1_*', '/dev/exadata_quorum/*' scope=both sid='*';

    After updating the ASM disk discovery string, rerun the query to confirm that the new quorum disk devices are visible as CANDIDATE disks in Oracle ASM.
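    For local device paths, the asm_diskstring patterns behave like shell-style globs, which is why the example above replaces the narrow QD_DATA* pattern with /dev/exadata_quorum/*. The check can be sketched as follows; the matches_diskstring helper is hypothetical and only illustrates the glob matching, it is not an Oracle utility:

    ```shell
    #!/bin/bash
    # Sketch: test whether a quorum device path is matched by any pattern
    # from the asm_diskstring value, using shell glob matching.
    matches_diskstring() {
      local path=$1; shift
      local pat
      for pat in "$@"; do
        # Unquoted $pat is deliberately treated as a glob pattern.
        case $path in
          $pat) return 0 ;;
        esac
      done
      return 1
    }

    # The QD_DATA* pattern from the example asm_diskstring above does NOT
    # cover the new SPAR1 quorum devices.
    if matches_diskstring /dev/exadata_quorum/QD_SPAR1_DBNODE01 \
         '/dev/exadata_quorum/QD_DATA*'; then
      echo covered
    else
      echo "not covered: update asm_diskstring"
    fi
    ```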

  5. Add the new quorum disk devices to the disk group.

    Connect to your Oracle ASM instance as an ASM administrator, and run:

    
    SQL> ALTER DISKGROUP <DISKGROUP_NAME> ADD 
         QUORUM FAILGROUP <FAILGROUP_NAME1> DISK '<DEVICE_PATH1>'
         QUORUM FAILGROUP <FAILGROUP_NAME2> DISK '<DEVICE_PATH2>';

    For example:

    
    SQL> ALTER DISKGROUP SPAR1 ADD 
         QUORUM FAILGROUP DBNODE01 DISK '/dev/exadata_quorum/QD_SPAR1_DBNODE01'
         QUORUM FAILGROUP DBNODE02 DISK '/dev/exadata_quorum/QD_SPAR1_DBNODE02';

    Verify the quorum disk status and membership in ASM using the same query as in the previous step.

    
    SQL> set linesize 200 pagesize 100
    SQL> col path format a50
    SQL> select inst_id, label, path, mode_status, header_status 
         from gv$asm_disk 
         where path like '/dev/exadata_quorum/%' 
         order by header_status, inst_id;
    
       INST_ID LABEL                           PATH                                               MODE_STATUS HEADER_STATUS
    ---------- ------------------------------- -------------------------------------------------- ----------- ------------
             1 QD_SPAR1_DBNODE02               /dev/exadata_quorum/QD_SPAR1_DBNODE02              ONLINE      MEMBER
             1 QD_SPAR1_DBNODE01               /dev/exadata_quorum/QD_SPAR1_DBNODE01              ONLINE      MEMBER
             2 QD_SPAR1_DBNODE01               /dev/exadata_quorum/QD_SPAR1_DBNODE01              ONLINE      MEMBER
             2 QD_SPAR1_DBNODE02               /dev/exadata_quorum/QD_SPAR1_DBNODE02              ONLINE      MEMBER
    ...

    For each newly added quorum disk, ensure that the header status is now MEMBER.

    Note:

    The example output is truncated for brevity. The query output normally shows all quorum disk devices on the system.

  6. Run Exachk to verify the quorum disk configuration.

    Ensure that your environment passes the Exachk validation for correct quorum disk setup.

Quorum disks are now configured for your high redundancy disk group in compliance with Oracle Exadata best practices.

Perform regular checks on quorum disk status and rerun Exachk after major configuration changes or Oracle software updates.