
Oracle® SuperCluster Quorum Disk Manager

Updated: September 2017

Complete the Quorum Disk Configuration

Once you have added the quorum disks to the database zones or the database domains, follow these instructions to complete the quorum disk configuration.

  1. Determine if the database domain where the quorum disks are being created is a global zone or a non-global zone.
    • If the database domain is a global zone, then continue running the remaining steps in this procedure in this global zone.

    • If the database domain is a non-global zone (for example, a zone within a dedicated Database Domain), then run the remaining steps in this procedure (Step 2 through Step 12) in the non-global zone that is part of the database environment with high redundancy.

  2. Log in to the global zone or the non-global zone as the oracle user.
  3. Set the ORACLE_HOME and ORACLE_SID environment variables:
    export ORACLE_HOME=$GRID_HOME
    export ORACLE_SID=ASM_instance_id
  4. Alter the asm_diskstring initialization parameter and add /dev/exadata_quorum/* to the existing string.

    For example:

    SQL> alter system set asm_diskstring='o/*/DATAC1_*','o/*/RECOC1_*','/dev/exadata_quorum/*' scope=both sid='*'; 
  5. Verify that the two quorum disk devices have been automatically discovered by ASM.
    SQL> set linesize 200
    SQL> col path format a50
    SQL> select inst_id, label, path, mode_status, header_status from gv$asm_disk where path like '/dev/exadata_quorum/%';   

    Output similar to the following appears:

    INST_ID LABEL          PATH                                MODE_STATUS HEADER_STATUS
    ------- -------------- ----------------------------------- ----------- -------------
    1       QD_DATAC1_DB01 /dev/exadata_quorum/QD_DATAC1_DB01  ONLINE      MEMBER
    1       QD_DATAC1_DB02 /dev/exadata_quorum/QD_DATAC1_DB02  ONLINE      MEMBER
    2       QD_DATAC1_DB01 /dev/exadata_quorum/QD_DATAC1_DB01  ONLINE      MEMBER
    2       QD_DATAC1_DB02 /dev/exadata_quorum/QD_DATAC1_DB02  ONLINE      MEMBER
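    As a quick sanity check, the discovery output above can be verified with a short script. This is an illustrative sketch that parses a captured copy of the query result; the here-document stands in for the real SQL*Plus output on a two-instance cluster:

    ```shell
    # Count the ONLINE quorum disk rows from the gv$asm_disk query shown above.
    # Each ASM instance should report both quorum disks, so 4 rows are expected.
    rows=$(cat <<'EOF'
    1       QD_DATAC1_DB01 /dev/exadata_quorum/QD_DATAC1_DB01  ONLINE   MEMBER
    1       QD_DATAC1_DB02 /dev/exadata_quorum/QD_DATAC1_DB02  ONLINE   MEMBER
    2       QD_DATAC1_DB01 /dev/exadata_quorum/QD_DATAC1_DB01  ONLINE   MEMBER
    2       QD_DATAC1_DB02 /dev/exadata_quorum/QD_DATAC1_DB02  ONLINE   MEMBER
    EOF
    )
    online=$(printf '%s\n' "$rows" | awk '$4 == "ONLINE"' | wc -l)
    echo "ONLINE quorum disk rows: $online"
    ```

    Any count other than four (for a two-instance cluster) means a quorum disk was not discovered on one of the instances and the asm_diskstring setting should be rechecked.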
  6. Add the two quorum disk devices to a high redundancy ASM disk group.

    Verify that there is at least one high redundancy disk group in place by listing the disk groups and their information:

    $GRID_HOME/bin/asmcmd lsdg
    • If there is no high redundancy disk group, you have two options for creating a new disk group with high redundancy, both of which are described in My Oracle Support note 438580.1:

      • Drop the existing disk group after backing up its data, then create a new disk group with high redundancy.

      • Create a new high redundancy disk group and move the existing data to the newly created disk group. For example:

        SQL> create diskgroup DATAC1 high redundancy
             quorum failgroup db01 disk '/dev/exadata_quorum/QD_DATAC1_DB01'
             quorum failgroup db02 disk '/dev/exadata_quorum/QD_DATAC1_DB02' ...

        Note -  The "…" at the end of the command above signifies the intentional omission of the ASM disks. Refer to https://docs.oracle.com/cd/B28359_01/server.111/b31107/asmwithem.htm#OSTMG24000 for more information on creating disk groups.

      In both cases, add the two new quorum disks when you create the high redundancy ASM disk group.

    • If a high redundancy disk group already exists, add the two new quorum disks. For example:

      SQL> alter diskgroup datac1
           add quorum failgroup db01 disk '/dev/exadata_quorum/QD_DATAC1_DB01'
           quorum failgroup db02 disk '/dev/exadata_quorum/QD_DATAC1_DB02';
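    To spot which disk groups in the asmcmd lsdg listing are high redundancy, filter on the Type column (the second field, NORMAL, HIGH, or EXTERN). This is a hedged sketch over sample lsdg-style output; the group names and sizes are illustrative, not captured from a real system:

    ```shell
    # Print the names (last field) of disk groups whose Type column is HIGH.
    lsdg_output=$(cat <<'EOF'
    MOUNTED  HIGH    N  512  4096  4194304  1152000  350000  115200  117400  0  Y  DATAC1/
    MOUNTED  NORMAL  N  512  4096  4194304   576000  250000   57600   64133  0  N  RECOC1/
    EOF
    )
    high=$(printf '%s\n' "$lsdg_output" | awk '$2 == "HIGH" {print $NF}')
    echo "high redundancy disk groups: ${high:-none}"
    ```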
  7. Query the voting disks to ensure that they are in the desired disk group.
    $ $GRID_HOME/bin/crsctl query css votedisk

    Output similar to the following appears:

    ##  STATE    File Universal Id                File Name                               Disk group
    --  -----    -----------------                ---------                               ---------
    1. ONLINE    ca2f1b57873f4ff4bf1dfb78824f2912 (o/192.168.10.42/DATAC1_CD_09_celadm12) [DATAC1]
    2. ONLINE    a8c3609a3dd44f53bf17c89429c6ebe6 (o/192.168.10.43/DATAC1_CD_09_celadm13) [DATAC1]
    3. ONLINE    cafb7e95a5be4f00bf10bc094469cad9 (o/192.168.10.44/DATAC1_CD_09_celadm14) [DATAC1]
    Located 3 voting disk(s).
  8. Relocate the existing voting files from the normal redundancy disk group to the high redundancy disk group.
    $ $GRID_HOME/bin/crsctl replace votedisk +DATAC1
  9. Verify that the voting disks have been successfully relocated to the high redundancy disk group and that five voting files exist.
    $ $GRID_HOME/bin/crsctl query css votedisk

    The output should show three voting disks from the storage servers and two voting disks from the database domains:

    ## STATE File Universal Id                 File Name                               Disk group
    -- ----- -----------------                 ---------                               ---------
    1. ONLINE ca2f1b57873f4ff4bf1dfb78824f2912 (o/192.168.10.42/DATAC1_CD_09_celadm12) [DATAC1]
    2. ONLINE a8c3609a3dd44f53bf17c89429c6ebe6 (o/192.168.10.43/DATAC1_CD_09_celadm13) [DATAC1]
    3. ONLINE cafb7e95a5be4f00bf10bc094469cad9 (o/192.168.10.44/DATAC1_CD_09_celadm14) [DATAC1]
    4. ONLINE 4dca8fb7bd594f6ebf8321ac23e53434 (/dev/exadata_quorum/QD_DATAC1_DB01)    [DATAC1]
    5. ONLINE 4948b73db0514f47bf94ee53b98fdb51 (/dev/exadata_quorum/QD_DATAC1_DB02)    [DATAC1]
    Located 5 voting disk(s). 
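    The five-file check in this step lends itself to scripting. The sketch below counts ONLINE voting files and quorum-disk entries in a captured copy of the crsctl output; the here-document stands in for running the command itself:

    ```shell
    # Expect 5 ONLINE voting files in total, 2 of them on /dev/exadata_quorum devices.
    votedisk_output=$(cat <<'EOF'
    1. ONLINE ca2f1b57873f4ff4bf1dfb78824f2912 (o/192.168.10.42/DATAC1_CD_09_celadm12) [DATAC1]
    2. ONLINE a8c3609a3dd44f53bf17c89429c6ebe6 (o/192.168.10.43/DATAC1_CD_09_celadm13) [DATAC1]
    3. ONLINE cafb7e95a5be4f00bf10bc094469cad9 (o/192.168.10.44/DATAC1_CD_09_celadm14) [DATAC1]
    4. ONLINE 4dca8fb7bd594f6ebf8321ac23e53434 (/dev/exadata_quorum/QD_DATAC1_DB01)    [DATAC1]
    5. ONLINE 4948b73db0514f47bf94ee53b98fdb51 (/dev/exadata_quorum/QD_DATAC1_DB02)    [DATAC1]
    EOF
    )
    total=$(printf '%s\n' "$votedisk_output" | awk '$2 == "ONLINE"' | wc -l)
    quorum=$(printf '%s\n' "$votedisk_output" | grep -c '/dev/exadata_quorum/')
    echo "voting files: $total total, $quorum on quorum disks"
    ```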
  10. Relocate the OCR files from the normal redundancy disk group to the high redundancy disk group.
    $GRID_HOME/bin/ocrcheck
    $GRID_HOME/bin/ocrconfig -add +DATAC1
    $GRID_HOME/bin/ocrconfig -delete +RECOC1
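    After the ocrconfig commands, ocrcheck should list only the high redundancy disk group as an OCR location. A minimal sketch, assuming the usual "Device/File Name" lines in ocrcheck output; the here-document is illustrative, not captured from a real run:

    ```shell
    # Extract the OCR locations from ocrcheck-style output and confirm only +DATAC1 remains.
    ocr_report=$(cat <<'EOF'
    Device/File Name         :   +DATAC1
    EOF
    )
    locations=$(printf '%s\n' "$ocr_report" | awk -F: '/Device\/File Name/ {gsub(/ /, "", $2); print $2}')
    echo "OCR locations: $locations"
    ```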
  11. Move the ASM password file and the ASM spfile to the high redundancy disk group.
    1. Move the ASM password file:
      1. Get the source ASM password file location.
        $ asmcmd pwget --asm
      2. Move the ASM password file to the high redundancy disk group.
        $ asmcmd pwmove --asm full_path_of_source_file full_path_of_destination_file

        For example:

        $ asmcmd pwmove --asm +recoc1/ASM/PASSWORD/pwdasm.256.898960531 +datac1/asmpwdfile
    2. Move the ASM spfile.
      1. Get the ASM spfile in use:
        $ asmcmd spget 
      2. Copy the ASM spfile to the high redundancy disk group.
        $ asmcmd spcopy full_path_of_source_file full_path_of_destination_file
      3. Modify the Grid Infrastructure configuration to use the relocated spfile on the next restart.
        $ asmcmd spset full_path_of_destination_file
      4. Determine if you can shut down the cluster at this time.
        • If you can shut down the cluster at this time, restart the Grid Infrastructure:

          # $GRID_HOME/bin/crsctl stop crs
          # $GRID_HOME/bin/crsctl start crs
        • If you cannot shut down the cluster at this time, repeat Step 11.b each time an initialization parameter modification to the ASM spfile is required, until you can shut down the cluster and restart the Grid Infrastructure using the steps above.

  12. Relocate the MGMTDB to the high redundancy disk group.

    Move the MGMTDB (if it is running) to the high redundancy disk group by following My Oracle Support note 1589394.1.

    Configure the MGMTDB not to use hugepages:

    export ORACLE_SID=-MGMTDB
    export ORACLE_HOME=$GRID_HOME
    sqlplus "sys as sysdba"
    SQL> alter system set use_large_pages=false scope=spfile sid='*';