Once you have added the quorum disks to the database zones or the database domains, follow these instructions to complete the quorum disk configuration.
If the database domain is a global zone, then continue running the remaining steps in this procedure in this global zone.
If the database domain is a non-global zone (for example, a zone within a dedicated Database Domain), then run the remaining steps in this procedure (Step 2 through Step 12) in the non-global zone that is part of the high redundancy database environment.
export ORACLE_HOME=$GRID_HOME
export ORACLE_SID=ASM_instance_id
For example:
SQL> alter system set asm_diskstring='o/*/DATAC1_*','o/*/RECOC1_*','/dev/exadata_quorum/*' scope=both sid='*';
SQL> set linesize 200
SQL> col path format a50
SQL> select inst_id, label, path, mode_status, header_status from gv$asm_disk where path like '/dev/exadata_quorum/%';
Output similar to the following appears:
INST_ID LABEL          PATH                               MODE_STATUS HEADER_STATUS
------- -------------- ---------------------------------- ----------- -------------
      1 QD_DATAC1_DB01 /dev/exadata_quorum/QD_DATAC1_DB01 ONLINE      MEMBER
      1 QD_DATAC1_DB02 /dev/exadata_quorum/QD_DATAC1_DB02 ONLINE      MEMBER
      2 QD_DATAC1_DB01 /dev/exadata_quorum/QD_DATAC1_DB01 ONLINE      MEMBER
      2 QD_DATAC1_DB02 /dev/exadata_quorum/QD_DATAC1_DB02 ONLINE      MEMBER
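A quick sanity check on this output: each quorum device should be reported by every ASM instance, ONLINE and with a HEADER_STATUS of MEMBER. The following is a minimal sketch of such a check using awk; the here-doc rows are illustrative data shaped like the query output above, not from a real system (on a live system you would spool the query result to a file instead).

```shell
# Illustrative rows shaped like the gv$asm_disk query output above.
# Columns: inst_id, label, path, mode_status, header_status
rows=$(cat <<'EOF'
1 QD_DATAC1_DB01 /dev/exadata_quorum/QD_DATAC1_DB01 ONLINE MEMBER
1 QD_DATAC1_DB02 /dev/exadata_quorum/QD_DATAC1_DB02 ONLINE MEMBER
2 QD_DATAC1_DB01 /dev/exadata_quorum/QD_DATAC1_DB01 ONLINE MEMBER
2 QD_DATAC1_DB02 /dev/exadata_quorum/QD_DATAC1_DB02 ONLINE MEMBER
EOF
)

# Each device path should appear exactly once per ASM instance (2 instances
# here), and only ONLINE MEMBER rows count toward that total.
check=$(echo "$rows" | awk '
  $4 == "ONLINE" && $5 == "MEMBER" { seen[$3]++ }
  END {
    ok = 1
    for (p in seen) if (seen[p] != 2) ok = 0
    print (ok ? "OK" : "MISMATCH")
  }')
echo "$check"
```

If any device shows up from only one instance, or with a mode or header status other than ONLINE/MEMBER, investigate before continuing.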
Verify that there is at least one high redundancy disk group in place by listing the disk groups and their information:
$GI_HOME/bin/asmcmd lsdg
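To confirm programmatically that at least one disk group is high redundancy, you can filter the lsdg output on its Type column. A minimal sketch, assuming the typical asmcmd lsdg column layout; the here-doc below is a hypothetical sample, and on a live system you would pipe `$GI_HOME/bin/asmcmd lsdg` directly instead:

```shell
# Hypothetical sample of `asmcmd lsdg` output (abridged columns).
lsdg_output=$(cat <<'EOF'
State    Type    Rebal  Sector  Block  AU       Total_MB  Free_MB  Name
MOUNTED  HIGH    N      512     4096   4194304  1024000   512000   DATAC1/
MOUNTED  NORMAL  N      512     4096   4194304  512000    256000   RECOC1/
EOF
)

# Print the names of high redundancy disk groups: skip the header line,
# match HIGH in the Type column (field 2), print the Name (last field).
high_groups=$(echo "$lsdg_output" | awk 'NR > 1 && $2 == "HIGH" { print $NF }')
echo "$high_groups"
```

An empty result means no high redundancy disk group exists and one of the two options below applies.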
If there is no high redundancy disk group, you have two options for creating one, both of which are described in My Oracle Support note 438580.1:
Drop the existing disk group after backing up its data, then create a new high redundancy disk group.
Create a new high redundancy disk group and move the existing data to it. For example:
SQL> create diskgroup DATAC1 high redundancy
  quorum failgroup db01 disk '/dev/exadata_quorum/QD_DATAC1_DB01'
  quorum failgroup db02 disk '/dev/exadata_quorum/QD_DATAC1_DB02'
  ...
In both cases, add the two new quorum disks when you create the high redundancy ASM disk group.
If a high redundancy disk group already exists, add the two new quorum disks. For example:
SQL> alter diskgroup datac1
  add quorum failgroup db01 disk '/dev/exadata_quorum/QD_DATAC1_DB01'
  quorum failgroup db02 disk '/dev/exadata_quorum/QD_DATAC1_DB02';
$ $GRID_HOME/bin/crsctl query css votedisk
Output similar to the following appears:
##  STATE   File Universal Id                File Name                               Disk group
--  ------  -------------------------------- --------------------------------------- ----------
 1. ONLINE  ca2f1b57873f4ff4bf1dfb78824f2912 (o/192.168.10.42/DATAC1_CD_09_celadm12) [DATAC1]
 2. ONLINE  a8c3609a3dd44f53bf17c89429c6ebe6 (o/192.168.10.43/DATAC1_CD_09_celadm13) [DATAC1]
 3. ONLINE  cafb7e95a5be4f00bf10bc094469cad9 (o/192.168.10.44/DATAC1_CD_09_celadm14) [DATAC1]
Located 3 voting disk(s).
$ $GRID_HOME/bin/crsctl replace votedisk +DATAC1
$ $GRID_HOME/bin/crsctl query css votedisk
The output should show three voting disks from the storage servers and two voting disks from the database domains:
##  STATE   File Universal Id                File Name                               Disk group
--  ------  -------------------------------- --------------------------------------- ----------
 1. ONLINE  ca2f1b57873f4ff4bf1dfb78824f2912 (o/192.168.10.42/DATAC1_CD_09_celadm12) [DATAC1]
 2. ONLINE  a8c3609a3dd44f53bf17c89429c6ebe6 (o/192.168.10.43/DATAC1_CD_09_celadm13) [DATAC1]
 3. ONLINE  cafb7e95a5be4f00bf10bc094469cad9 (o/192.168.10.44/DATAC1_CD_09_celadm14) [DATAC1]
 4. ONLINE  4dca8fb7bd594f6ebf8321ac23e53434 (/dev/exadata_quorum/QD_DATAC1_DB01)    [DATAC1]
 5. ONLINE  4948b73db0514f47bf94ee53b98fdb51 (/dev/exadata_quorum/QD_DATAC1_DB02)    [DATAC1]
Located 5 voting disk(s).
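A simple way to verify this state from a script is to count the voting disks in the crsctl output: five ONLINE entries in total, two of which are quorum devices under /dev/exadata_quorum. The sketch below uses an illustrative here-doc sample; on a live system you would capture the output of `$GRID_HOME/bin/crsctl query css votedisk` instead.

```shell
# Hypothetical sample of `crsctl query css votedisk` output (data rows only).
votedisk_output=$(cat <<'EOF'
 1. ONLINE ca2f1b57873f4ff4bf1dfb78824f2912 (o/192.168.10.42/DATAC1_CD_09_celadm12) [DATAC1]
 2. ONLINE a8c3609a3dd44f53bf17c89429c6ebe6 (o/192.168.10.43/DATAC1_CD_09_celadm13) [DATAC1]
 3. ONLINE cafb7e95a5be4f00bf10bc094469cad9 (o/192.168.10.44/DATAC1_CD_09_celadm14) [DATAC1]
 4. ONLINE 4dca8fb7bd594f6ebf8321ac23e53434 (/dev/exadata_quorum/QD_DATAC1_DB01) [DATAC1]
 5. ONLINE 4948b73db0514f47bf94ee53b98fdb51 (/dev/exadata_quorum/QD_DATAC1_DB02) [DATAC1]
EOF
)

# Count ONLINE voting disks overall, and those served from quorum devices.
online_total=$(echo "$votedisk_output" | grep -c 'ONLINE')
quorum_count=$(echo "$votedisk_output" | grep -c '/dev/exadata_quorum/')
echo "online=$online_total quorum=$quorum_count"
```

Anything other than five ONLINE voting disks, two of them quorum devices, indicates the replace operation did not complete as expected.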
$GI_HOME/bin/ocrcheck
$GI_HOME/bin/ocrconfig -add +DATAC1
$GI_HOME/bin/ocrconfig -delete +RECOC1
$ asmcmd pwget --asm
$ asmcmd pwmove --asm full_path_of_source_file full_path_of_destination_file
For example:
asmcmd pwmove --asm +recoc1/ASM/PASSWORD/pwdasm.256.898960531 +datac1/asmpwdfile
$ asmcmd spget
$ asmcmd spcopy full_path_of_source_file full_path_of_destination_file
$ asmcmd spset full_path_of_destination_file
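Taken together, relocating the ASM spfile is a three-command sequence: find the current location, copy the file into the high redundancy disk group, and point the cluster at the copy. The sketch below is illustrative only; the source path shown is a placeholder, since the real path is whatever spget reports on your system, and the destination name is arbitrary.

```shell
# Illustrative sequence only -- run against a live cluster; paths are placeholders.

# 1. Find the current ASM spfile location.
asmcmd spget

# 2. Copy the spfile into the high redundancy disk group.
#    The source is the path that spget reported (placeholder shown here);
#    the destination file name within +DATAC1 is your choice.
asmcmd spcopy full_path_reported_by_spget +DATAC1/asmspfile

# 3. Register the new copy as the spfile for the next Grid Infrastructure start.
asmcmd spset +DATAC1/asmspfile
```

Note that spset only takes effect at the next restart of the ASM instance, which is why the cluster restart (or the repeated-modification workaround) in the next step matters.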
If you can shut down the cluster at this time, restart the Grid Infrastructure:
# $GI_HOME/bin/crsctl stop crs
# $GI_HOME/bin/crsctl start crs
If you cannot shut down the cluster at this time, repeat Step 11.b every time an initialization parameter modification to the ASM spfile is required, until you can shut down the cluster and restart the Grid Infrastructure using the steps above.
Move the mgmtdb (if running) to the high redundancy disk group using My Oracle Support note 1589394.1.
Configure the mgmtdb so that it does not use hugepages:
export ORACLE_SID=-MGMTDB
export ORACLE_HOME=$GRID_HOME
sqlplus "sys as sysdba"
SQL> alter system set use_large_pages=false scope=spfile sid='*';