3 Administering Oracle ASM Disk Groups on Oracle Exadata Storage Servers

This chapter explains how to administer Oracle Automatic Storage Management (Oracle ASM) disk groups with Oracle Exadata Storage Server grid disks. Figure 3-1 shows Oracle ASM disk groups with Oracle Exadata Storage Server grid disks. It represents a typical but simplified configuration that can be used as a model for building larger Oracle Exadata Storage Server grids with additional storage cells and disks.

Figure 3-1 Sample Oracle Exadata Storage Server Grid

Description of Figure 3-1 follows
Description of "Figure 3-1 Sample Oracle Exadata Storage Server Grid"

This Oracle Exadata Storage Server grid illustrates the following:

  • The storage cells in the grid are connected over an InfiniBand network to database servers that host a single-instance database and an Oracle Real Application Clusters (Oracle RAC) database installation.

  • Each storage cell is composed of physical disks.

  • Each cell disk represents a physical disk and a LUN.

  • Each cell disk is partitioned into grid disks.

  • Oracle ASM disk groups are set up to include the grid disks.

Oracle ASM failure groups are created so that files are not mirrored on the same storage cell, which enables the system to tolerate the failure of a single storage cell. The number of failure groups equals the number of Exadata Cells. Each failure group is composed of a subset of grid disks in the Oracle ASM disk group that belong to a single storage cell.

This chapter contains the following topics:

3.1 Administering Oracle ASM Disk Groups Using Oracle Exadata Storage Servers

This section describes the basic Oracle ASM tasks needed to use Oracle Exadata Storage Servers. This section contains the following topics:

See Also:

3.1.1 Understanding Oracle ASM Disk Groups for Oracle Exadata Storage Servers

This topic explains Oracle ASM disk groups, and how to create an Oracle ASM disk group for Oracle Exadata Storage Server Software using the CREATE DISKGROUP SQL command.

Before creating an Oracle ASM disk group, determine which grid disks belong to the Oracle ASM disk group. It is recommended that you choose similar names for the Oracle ASM disk group and its grid disks whenever possible.

The Oracle Exadata Storage Server grid disks are specified with the following pattern:

o/cell_IPaddress/griddisk_name

In the preceding syntax, cell_IPaddress is the IP address of Oracle Exadata Storage Server, and griddisk_name is the name of the grid disk.

The cell discovery strings begin with the o/ prefix.
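As an aside, the structure of these discovery strings can be illustrated with a short sketch. The parser below is a hypothetical illustration for clarity, not part of any Oracle tool:

```python
import re

# Hypothetical helper: split an Exadata grid disk discovery string of the
# form o/cell_IPaddress/griddisk_name into its two components.
def parse_grid_disk_path(path):
    m = re.fullmatch(r"o/([^/]+)/([^/]+)", path)
    if m is None:
        raise ValueError("not a cell discovery string: " + path)
    return m.group(1), m.group(2)   # (cell IP address, grid disk name)

print(parse_grid_disk_path("o/192.168.74.43/DATAC1_CD_00_exa01celadm01"))
```

For example, the path o/192.168.74.43/DATAC1_CD_00_exa01celadm01 splits into the cell IP address 192.168.74.43 and the grid disk name DATAC1_CD_00_exa01celadm01.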

When specifying the grid disks to be added to the disk group, consider the following:

  • The default Oracle ASM disk name is the grid disk name. Oracle recommends using the default.

  • The default failure group name is the cell name. Oracle recommends using the default.

Normally, when a failure group is not specified, Oracle ASM places each disk in its own failure group. However, when the disks are stored on Oracle Exadata Storage Servers and a failure group is not specified, Oracle ASM adds each disk to the failure group for its cell. The failure group name is the cell name.

Note:

If a cell is renamed, and a disk from that cell is added to an existing disk group that has disks from that cell, then Oracle ASM adds the new disk to a failure group using the new cell name. To ensure all the disks from the cell are in one failure group, add the disk to the disk group and specify the original failure group name.

To enable Smart Scan predicate offload processing, all disks in a disk group must be Oracle Exadata Storage Server grid disks. You cannot include conventional disks with Oracle Exadata Storage Server grid disks.

3.1.1.1 Setting the Oracle ASM Content Type When Using Normal Redundancy

When using normal redundancy with Oracle Grid Infrastructure release 11.2.0.3 or later, and the compatible.asm attribute is set to 11.2.0.3 or later, set the content.type attribute for the DATA, RECO, and DBFS_DG disk groups.

Setting the content.type attributes provides better recovery time objective (RTO) and recovery point objective (RPO or data loss tolerance). The content.type value for the DATA and SPARSE disk groups is data, the content.type value for the RECO disk group is recovery, and the content.type value for DBFS_DG is system.

Note:

  • Do not use the content.type attribute to distinguish the availability characteristics of disk groups that are used for a different purpose, such as those created to support a particular service.

  • The database and grid infrastructure must be release 12.1.0.2.0 BP5 or later when using sparse grid disks.

The following is an example of adding content type while creating a disk group. The compatible.rdbms attribute is set to 11.2.0.2 in order to support both release 11.2.0.2 and release 11.2.0.3 databases in a consolidated environment.

CREATE DISKGROUP data NORMAL REDUNDANCY
DISK
'o/*/DATA*'
ATTRIBUTE 'content.type' = 'data',
'AU_SIZE' = '4M',
'cell.smart_scan_capable'='TRUE',
'compatible.rdbms'='11.2.0.2',
'compatible.asm'='11.2.0.3';

To set the content.type attribute for an existing disk group, use the ALTER DISKGROUP command, and then rebalance the disk group. The following is an example of the commands:

ALTER DISKGROUP reco SET ATTRIBUTE 'content.type'='recovery';
ALTER DISKGROUP reco REBALANCE POWER preferred_power_setting;

The rebalance operation can take a long time, but the disk group's data is fully redundant throughout the operation. Oracle ASM monitors the rebalance operation, and Oracle Exadata Storage Server Software sends an e-mail message when the operation is complete.

To check the content.type attributes, use the following query:

SQL> SELECT dg.name, a.value FROM v$asm_diskgroup dg, v$asm_attribute a
     WHERE dg.group_number = a.group_number AND a.name = 'content.type'
     AND (dg.name LIKE 'DATA%' OR dg.name LIKE 'RECO%' OR dg.name LIKE 'DBFS_DG%');
 
NAME                 VALUE
-------------------- --------------------
DATA                 data
RECO                 recovery
DBFS_DG              system

3.1.1.2 Creating Oracle ASM Disk Groups

To create an Oracle ASM disk group to use Oracle Exadata Storage Server grid disks, perform the following procedure:

  1. Connect to the Oracle ASM instance.
  2. Ensure that the ORACLE_SID environment variable is set to the Oracle ASM instance using a command similar to the following:
    $ setenv ORACLE_SID ASM_instance_SID
    
  3. Start SQL*Plus on the Oracle ASM instance, and log in as a user with SYSASM administrative privileges.
    $ sqlplus / AS SYSASM
    
  4. Determine which Oracle Exadata Storage Server grid disks are available by querying the V$ASM_DISK view on the Oracle ASM instance, using the following syntax:
    SQL> SELECT PATH, header_status STATUS FROM V$ASM_DISK WHERE path LIKE 'o/%';
    
  5. Create an Oracle ASM disk group to include disks on the cells.

    In the following example, the ALTER command is needed to change compatible.rdbms for the disk group created during installation to hold the OCR and voting disks. The compatible.rdbms attribute is set to 11.2.0.2 in order to support both release 11.2.0.2 and release 11.2.0.3 databases in a consolidated environment.

    SQL> CREATE DISKGROUP data HIGH REDUNDANCY
    DISK 'o/*/DATA*'
    ATTRIBUTE 'AU_SIZE' = '4M',
              'content.type' = 'data',
              'cell.smart_scan_capable'='TRUE',
              'compatible.rdbms'='11.2.0.2',
              'compatible.asm'='11.2.0.3';
    
    SQL> CREATE DISKGROUP reco HIGH REDUNDANCY
    DISK 'o/*/RECO*'
    ATTRIBUTE 'AU_SIZE' = '4M',
              'content.type' = 'recovery',
              'cell.smart_scan_capable'='TRUE',
              'compatible.rdbms'='11.2.0.2',
              'compatible.asm'='11.2.0.3';
     
    SQL> ALTER DISKGROUP dbfs_dg SET ATTRIBUTE 
         'content.type' = 'system',
         'compatible.rdbms' = '11.2.0.2';
    
    

    The following example shows the use of the CREATE DISKGROUP command to create disk groups by explicitly listing the grid disks on each cell. The compatible.rdbms attribute is set to 11.2.0.2 in order to support both release 11.2.0.2 and release 11.2.0.3 databases in a consolidated environment.

    SQL> CREATE DISKGROUP data HIGH REDUNDANCY 
    
    -- These grid disks are on cell01
       DISK 
       'o/*/data_CD_00_cell01',
       'o/*/data_CD_01_cell01',
       'o/*/data_CD_02_cell01'
    
    -- These grid disks are on cell02
       DISK
       'o/*/data_CD_00_cell02',
       'o/*/data_CD_01_cell02',
       'o/*/data_CD_02_cell02'
    
    -- These disk group attributes must be set for cell access
    -- Note that this disk group is set for cell only
       ATTRIBUTE 'compatible.rdbms' = '11.2.0.2', 
                 'content.type' = 'data',
                 'compatible.asm' = '11.2.0.3',
                 'au_size' = '4M',
                 'cell.smart_scan_capable' = 'TRUE';
    
    SQL> CREATE DISKGROUP reco HIGH REDUNDANCY 
    
    -- These grid disks are on cell01
       DISK 
       'o/*/reco_CD_00_cell01',
       'o/*/reco_CD_01_cell01',
       'o/*/reco_CD_02_cell01'
    
    -- These grid disks are on cell02
       DISK
       'o/*/reco_CD_00_cell02',
       'o/*/reco_CD_01_cell02',
       'o/*/reco_CD_02_cell02'
    
    -- These disk group attributes must be set for cell access
    -- Note that this disk group is set for cell only
       ATTRIBUTE 'compatible.rdbms' = '11.2.0.2', 
                 'content.type' = 'recovery',
                 'compatible.asm' = '11.2.0.3',
                 'au_size' = '4M',
                 'cell.smart_scan_capable' = 'TRUE';
    
    

    When creating sparse disk groups, use a command similar to the following:

    SQL> CREATE DISKGROUP sparsedg NORMAL REDUNDANCY
    DISK 'o/*/sparse_*'
    ATTRIBUTE 'AU_SIZE' = '4M',
              'content.type' = 'data',
              'cell.smart_scan_capable'='TRUE',
              'compatible.rdbms' = '12.1.0.2',
              'compatible.asm' = '12.1.0.2', 
              'cell.sparse_dg' = 'allsparse';
    

    In the preceding command, the cell.sparse_dg attribute defines the disk group as a sparse disk group. The attribute does not have to be included if the disk group is not a sparse disk group.

    Note:

    • When defining sparse grid disks, the compatible.asm and compatible.rdbms attributes must be at least 12.1.0.2.0.

    • The Oracle ASM disk group compatible attributes take precedence over the COMPATIBLE initialization parameter for the Oracle ASM instance.

    • The database and grid infrastructure must be release 12.1.0.2.0 BP5 or later when using sparse grid disks.

  6. View the Oracle ASM disk groups and associated attributes with a SQL query on V$ASM dynamic views. For example, the following SQL command lists the Oracle ASM disk groups and the attributes:
    SQL> SELECT dg.name AS diskgroup, SUBSTR(a.name,1,24) AS name, 
         SUBSTR(a.value,1,24) AS value FROM V$ASM_DISKGROUP dg, V$ASM_ATTRIBUTE a 
         WHERE dg.group_number = a.group_number;
    
    DISKGROUP                    NAME                       VALUE
    ---------------------------- ------------------------ ------------------------
    DATA                         compatible.rdbms           11.2.0.2
    DATA                         compatible.asm             11.2.0.3
    DATA                         au_size                    4194304
    DATA                         disk_repair_time           3.6h
    DATA                         cell.smart_scan_capable    TRUE
    ...
    
  7. Create a tablespace in the disk group to take advantage of Oracle Exadata Storage Server Software features, such as offload processing. This tablespace should contain the tables that you want to query with offload processing. The following is an example of the syntax:
    SQL> CREATE TABLESPACE tablespace_name DATAFILE '+DATA';
    

    In the preceding command, DATA is the name of the Oracle ASM disk group, and the plus sign (+) prefix identifies it as an Oracle ASM disk group path.

  8. Verify that the tablespace is in an Oracle Exadata Storage Server disk group. The PREDICATE_EVALUATION column of the DBA_TABLESPACES view indicates whether predicates are evaluated by host (HOST) or by storage (STORAGE). For example, the following SQL command verifies the tablespace is entirely within the cells:
    SQL> SELECT tablespace_name, predicate_evaluation FROM dba_tablespaces
         WHERE tablespace_name = 'DATA_TB';
    
    TABLESPACE_NAME                PREDICA
    ------------------------------ -------
    DATA_TB                        STORAGE
    

See Also:

3.1.2 Adding a Disk to an Oracle ASM Disk Group

To add a disk to an Oracle ASM disk group, perform the following procedure:

  1. Determine which disks are available by querying the V$ASM_DISK view on the Oracle ASM instance. If the header status is set to CANDIDATE, then the disk is a candidate for a disk group.

    Do not add Oracle Exadata Storage Server grid disks to a non-Oracle Exadata Storage Server ASM disk group unless you are planning to migrate the disk group to an Oracle Exadata Storage Server disk group.

  2. Use the SQL ALTER DISKGROUP command with the ADD DISK clause to add the disk to the Oracle ASM disk group using syntax similar to the following:
    SQL> ALTER DISKGROUP disk_group_name ADD DISK 'o/cell_IPaddress/data*';
    

    When the disk is added, Oracle ASM rebalances the disk group. Oracle ASM monitors the rebalance operation, and Oracle Exadata Storage Server Software sends an e-mail message when the operation is complete. You can query the V$ASM_OPERATION view for the status of the rebalance operation.

3.1.3 Mounting or Dismounting an Oracle ASM Disk Group

A disk group must be mounted by an Oracle ASM instance before database instances can access the files in the disk group. Mounting the disk group requires discovering all of the disks and locating the files in the disk group that is being mounted.

To mount or dismount a disk group, use the SQL ALTER DISKGROUP command with the MOUNT or DISMOUNT option.

You can use the FORCE option of the ALTER DISKGROUP command MOUNT clause to mount disk groups if their components are unavailable, which results in a loss of full redundancy.

See Also:

Oracle Automatic Storage Management Administrator's Guide for additional information about mounting disk groups

3.1.4 Changing a Disk to Offline or Online

This procedure shows how to change a grid disk to INACTIVE or ACTIVE, which takes the corresponding Oracle ASM disk offline or online.

  1. Determine which disk you want offline or online in the Oracle ASM disk group by querying the V$ASM_DISK and V$ASM_DISKGROUP views on the Oracle ASM instance.
  2. Use one of the following commands:
    • To make a disk inactive, use the following command:

      CellCLI> ALTER GRIDDISK gdisk_name INACTIVE
      
    • To make a disk active, use the following command:

      CellCLI> ALTER GRIDDISK gdisk_name ACTIVE
      

    As soon as the disk is online, the disk group is rebalanced. Oracle ASM monitors the rebalance operation, and Oracle Exadata Storage Server Software sends an e-mail message when the operation is complete. You can query the V$ASM_OPERATION view for the status of the rebalance operation.

3.1.5 Dropping a Disk from an Oracle ASM Disk Group

To drop a disk from a disk group, perform the following procedure:

  1. Determine which disks you want to drop from the Oracle ASM disk group by querying the V$ASM_DISK and V$ASM_DISKGROUP views on the Oracle ASM instance.

    If you are removing an Oracle Exadata Storage Server grid disk, then ensure that you identify the grid disks that are mapped to each Oracle ASM disk group.

  2. Use the SQL ALTER DISKGROUP command with the DROP DISK clause to drop the disks from the Oracle ASM disk group. For example:
    SQL> ALTER DISKGROUP disk_group_name DROP DISK data_CD_11_cell01;
    

    When the disk is dropped from the Oracle ASM disk group, Oracle ASM rebalances the disk group. Oracle ASM monitors the rebalance operation, and Oracle Exadata Storage Server Software sends an e-mail message when the operation is complete. You can query the V$ASM_OPERATION view for the status of the rebalance operation.

After an Oracle Exadata Storage Server grid disk is dropped from the Oracle ASM disk group, you can drop the grid disk from a cell.

3.1.6 Dropping an Oracle ASM Disk Group

To drop an Oracle ASM disk group, perform the following procedure:

  1. Determine the disk group that you want to drop by querying the V$ASM_DISKGROUP view on the Oracle ASM instance.
  2. Use the SQL DROP DISKGROUP command to drop the Oracle ASM disk group.

If you cannot mount a disk group but must drop it, then use the FORCE option with the DROP DISKGROUP command.

3.1.7 Enabling the Oracle ASM appliance.mode Attribute

The Oracle ASM appliance.mode attribute improves disk rebalance completion time when dropping one or more Oracle ASM disks. This means that redundancy is restored faster after a failure.

The attribute can only be enabled on disk groups that meet the following requirements:

  • The Oracle ASM disk group attribute compatible.asm is set to 11.2.0.4, or to 12.1.0.2 or later.

  • The cell.smart_scan_capable attribute is set to TRUE.

  • All disks in the disk group are the same type, such that all disks are hard disks or all disks are flash disks.

  • All disks in the disk group are the same size.

  • All failure groups in the disk group have an equal number of disks:

    • For eighth rack configurations, all failure groups have 4 disks, or all failure groups have 6 disks.

    • For all other rack configurations, all failure groups have 10 disks, or all failure groups have 12 disks.

  • There are at least 3 failure groups in the disk group.

  • No disk in the disk group is offline.
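The requirements above amount to a checklist. The following sketch is purely illustrative (the data layout and function name are assumptions, not an Oracle API), but it captures the eligibility rules:

```python
# Illustrative sketch of the appliance.mode eligibility rules listed above.
# The disk_group structure and this function are hypothetical, not Oracle APIs.
def appliance_mode_eligible(disk_group, eighth_rack=False):
    disks = disk_group["disks"]                  # one dict per disk in the group
    if any(d["offline"] for d in disks):         # no disk may be offline
        return False
    if len({d["type"] for d in disks}) != 1:     # all hard disks or all flash
        return False
    if len({d["size"] for d in disks}) != 1:     # all disks the same size
        return False
    failgroups = {}
    for d in disks:                              # count disks per failure group
        failgroups[d["failgroup"]] = failgroups.get(d["failgroup"], 0) + 1
    if len(failgroups) < 3:                      # at least 3 failure groups
        return False
    counts = set(failgroups.values())
    if len(counts) != 1:                         # equal disks per failure group
        return False
    allowed = {4, 6} if eighth_rack else {10, 12}
    return counts.pop() in allowed
```

A full-rack disk group with three or more failure groups of 12 identical online disks would pass this check; any offline or mismatched disk would fail it.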

The attribute is automatically enabled when creating a new disk group. Existing disk groups must explicitly set the attribute using the ALTER DISKGROUP command. The following is an example of the command:

SQL> ALTER DISKGROUP disk_group SET ATTRIBUTE 'appliance.mode'='TRUE';

To disable the appliance.mode attribute during disk group creation, set the attribute to FALSE. The following is an example of disabling the appliance.mode attribute during disk group creation:

SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
DISK
'o/*/DATA*'
ATTRIBUTE 'content.type' = 'data',
          'au_size' = '4M',
          'cell.smart_scan_capable'='TRUE',
          'compatible.rdbms'='11.2.0.3',
          'compatible.asm'='11.2.0.4',
          'appliance.mode'='FALSE';

Note:

Enabling the appliance.mode attribute for existing disk groups may cause an increase of data movement during the next rebalance operation.

3.2 Administering Oracle Exadata Storage Server Grid Disks with Oracle ASM

3.2.1 Naming Conventions for Oracle Exadata Storage Server Grid Disks

Using a consistent naming convention helps to identify Exadata components.

The name of the grid disk should contain the cell disk name to make it easier to determine which grid disks belong to a cell disk. To help determine which grid disks belong to an Oracle ASM disk group, a subset of the grid disk name should match all or part of the name of the Oracle ASM disk group to which the grid disk will belong.

For example, if a grid disk is created on the cell disk CD_03_cell01, and that grid disk belongs to an Oracle ASM disk group named data0, then the grid disk name should be data0_CD_03_cell01.

When you use the ALL PREFIX option with CREATE GRIDDISK, a unique grid disk name is automatically generated that includes the prefix and cell name. If you do not use the default generated name when creating grid disks, then you must ensure that the grid disk name is unique across all cells. If the disk name is not unique, then it might not be possible to add the grid disk to an Oracle ASM disk group.
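The naming convention described above can be expressed mechanically: the grid disk name is the disk group prefix joined to the cell disk name. The helper below is a hypothetical illustration, not an Oracle utility:

```python
# Hypothetical illustration of the grid disk naming convention:
# grid disk name = disk group prefix + "_" + cell disk name.
def grid_disk_name(diskgroup_prefix, cell_disk):
    return "{}_{}".format(diskgroup_prefix, cell_disk)

# A grid disk on cell disk CD_03_cell01 for disk group data0:
print(grid_disk_name("data0", "CD_03_cell01"))   # data0_CD_03_cell01
```

Because the cell name is embedded in the cell disk name, names generated this way are unique across cells, which is what the ALL PREFIX option also guarantees.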

3.2.2 Changing an Oracle Exadata Storage Server Grid Disk That Belongs to an Oracle ASM Disk Group

When you change a grid disk that belongs to an Oracle ASM disk group, you must consider how the change might affect the Oracle ASM disk group to which the grid disk belongs.

This section contains the following topics:

3.2.2.1 Changing an Oracle Exadata Storage Server Grid Disk Name

To change attributes of a grid disk, use the CellCLI ALTER GRIDDISK command. Use the DESCRIBE GRIDDISK command to determine which Oracle Exadata Storage Server grid disk attributes can be modified.

Caution:

Before changing the name of a grid disk that belongs to an Oracle ASM disk group, ensure that the corresponding Oracle ASM disk is offline.

Example 3-1 Changing an Oracle Exadata Storage Server Grid Disk Name

This example shows how to rename a grid disk.

CellCLI> ALTER GRIDDISK data011 name='data0_CD_03_cell04'

3.2.2.2 Dropping an Oracle Exadata Storage Server Grid Disk

To drop an Oracle Exadata Storage Server grid disk, use the CellCLI DROP GRIDDISK command. Make the grid disk inactive before dropping it to ensure that the grid disk is not in use. The FORCE option can be used to drop a grid disk that is still in use.

Caution:

  • Before dropping a grid disk that belongs to an Oracle ASM disk group, ensure that the corresponding Oracle ASM disk was dropped from the disk group.

  • Before dropping a grid disk using the FORCE option, ensure that the Oracle ASM disk was dropped from the disk group.

To drop a grid disk, perform the following procedure.

  1. Drop the Oracle ASM disk from the disk group using the following command:
    SQL> ALTER DISKGROUP disk_group_name DROP DISK disk_name;
    
  2. Make the corresponding grid disk inactive using the following command:
    CellCLI> ALTER GRIDDISK disk_name INACTIVE
    
  3. Drop the grid disk using the following command:
    CellCLI> DROP GRIDDISK disk_name
    

Example 3-2 Dropping Grid Disks

This example shows how to drop a specified grid disk or multiple grid disks.

CellCLI> ALTER GRIDDISK data0_CD_03_cell04 INACTIVE
CellCLI> DROP GRIDDISK data0_CD_03_cell04

CellCLI> ALTER GRIDDISK ALL INACTIVE
CellCLI> DROP GRIDDISK ALL PREFIX=data0

CellCLI> DROP GRIDDISK data02_CD_04_cell01 FORCE

Related Topics

3.2.3 Resizing Grid Disks

You can resize grid disks and Oracle ASM disk groups to shrink one with excess free space and increase the size of another that is near capacity.

Initial configuration of Oracle Exadata Database Machine disk group size is based on Oracle best practices and the location of the backup files. For internal backups, 40% of the space is allocated to the DATA disk group and 60% to the RECO disk group. For external backups, 80% of the space is allocated to the DATA disk group and 20% to the RECO disk group. The disk group allocations can be changed after deployment. For example, the DATA disk group allocation may be too small at 60%, and need to be resized to 80%.
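As illustrative arithmetic only (the function and parameter names are assumptions, not an Oracle tool), the best-practice allocations described above work out as follows:

```python
# Illustrative arithmetic: split usable capacity between the DATA and RECO
# disk groups according to the backup-location best practices above.
# This function is hypothetical, for illustration only.
def allocate(total_mb, backups_internal=True):
    data_pct = 0.40 if backups_internal else 0.80   # internal: 40/60, external: 80/20
    data_mb = round(total_mb * data_pct)
    return data_mb, total_mb - data_mb              # (DATA MB, RECO MB)

print(allocate(100000, backups_internal=True))    # (40000, 60000)
print(allocate(100000, backups_internal=False))   # (80000, 20000)
```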

If your system has no free space available on the cell disks and one disk group, for example RECO, has plenty of free space, then you can resize the RECO disk group to a smaller size and reallocate the free space to the DATA disk group. The free space available after shrinking the RECO disk group is at a non-contiguous offset from the existing space allocations for the DATA disk group. Grid disks can use space anywhere on the cell disks and do not have to be contiguous.

If you are expanding the grid disks and the cell disks already have sufficient space to expand the existing grid disks, then you do not need to first resize an existing disk group. In that case, skip steps 2 and 3 below, in which the example shrinks the RECO disk group and its grid disks (you should still verify that the cell disks have enough free space before growing the DATA grid disks). The amount of free space the administrator should reserve depends on the level of failure coverage.

If you are shrinking the size of the grid disks, you should understand how space is reserved for mirroring. Oracle ASM protects data using normal or high redundancy to create one or two mirror copies of data, which are stored as file extents. These copies are stored in separate failure groups. A failure in one failure group does not affect the mirror copies, so data is still accessible. When a failure occurs, Oracle ASM re-mirrors, or rebalances, any extents that are not accessible so that redundancy is reestablished. For the re-mirroring process to succeed, sufficient free space must exist in the disk group to allow creation of the new file extent copies. If there is not enough free space, then some extents are not re-mirrored, and a subsequent failure of the remaining data copies requires the disk group to be restored from backup. Oracle ASM raises an error when a re-mirror process fails because of insufficient space.

You must be using Oracle Exadata Storage Server Software release 12.1.2.1.0 or later, or have the patch for bug 19695225 applied to your software.

This procedure for resizing grid disks applies to bare metal and virtual machine (VM) deployments.

  1. Determine the Amount of Available Space
  2. Shrink the Oracle ASM Disks in the Donor Disk Group
  3. Shrink the Grid Disks in the Donor Disk Group
  4. Increase the Size of the Grid Disks Using Available Space
  5. Increase the Size of the Oracle ASM Disks

3.2.3.1 Determine the Amount of Available Space

To increase the size of the disks in a disk group, you must either have unallocated disk space available or reallocate space currently used by a different disk group.

  1. View the space currently used by the disk groups.
    SELECT name, total_mb, free_mb, total_mb - free_mb used_mb, round(100*free_mb/total_mb,2) pct_free
    FROM v$asm_diskgroup
    ORDER BY 1;
    
    NAME                             TOTAL_MB    FREE_MB    USED_MB   PCT_FREE
    ------------------------------ ---------- ---------- ---------- ----------
    DATAC1                           68812800    9985076   58827724      14.51
    RECOC1                           94980480   82594920   12385560      86.96
    

    The example above shows that the DATAC1 disk group has only about 15% of free space available while the RECOC1 disk group has about 87% free disk space. The PCT_FREE displayed here is raw free space, not usable free space. Additional space is needed for rebalancing operations.
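    The USED_MB and PCT_FREE columns in the query above are simple arithmetic over TOTAL_MB and FREE_MB. As an illustrative check (the helper function is hypothetical):

```python
# Illustrative recomputation of the USED_MB and PCT_FREE columns from the
# V$ASM_DISKGROUP query above. The function is hypothetical.
def space_summary(total_mb, free_mb):
    used_mb = total_mb - free_mb
    pct_free = round(100.0 * free_mb / total_mb, 2)
    return used_mb, pct_free

print(space_summary(68812800, 9985076))    # DATAC1: (58827724, 14.51)
print(space_summary(94980480, 82594920))   # RECOC1: (12385560, 86.96)
```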

  2. For the disk groups you plan to resize, view the count and status of the failure groups used by the disk groups.
    SELECT dg.name, d.failgroup, d.state, d.header_status, d.mount_status,
     d.mode_status, count(1) num_disks
    FROM V$ASM_DISK d, V$ASM_DISKGROUP dg
    WHERE d.group_number = dg.group_number
    AND dg.name IN ('RECOC1', 'DATAC1')
    GROUP BY dg.name, d.failgroup, d.state, d.header_status, d.mount_status,
      d.mode_status
    ORDER BY 1, 2, 3;
    
    NAME       FAILGROUP      STATE      HEADER_STATU MOUNT_S  MODE_ST  NUM_DISKS
    ---------- -------------  ---------- ------------ -------- -------  ---------
    DATAC1     EXA01CELADM01  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM02  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM03  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM04  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM05  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM06  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM07  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM08  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM09  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM10  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM11  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM12  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM13  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM14  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM01  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM02  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM03  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM04  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM05  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM06  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM07  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM08  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM09  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM10  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM11  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM12  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM13  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM14  NORMAL     MEMBER        CACHED  ONLINE   12
    

    The above example is for a full rack, which has 14 cells and 14 failure groups for DATAC1 and RECOC1. Verify that each failure group has at least 12 disks in the NORMAL state (num_disks). If you see disks listed as MISSING, or you see an unexpected number of disks for your configuration, then do not proceed until you resolve the problem.

    Extreme Flash (EF) systems should see a disk count of 8 instead of 12 for num_disks.

  3. List the corresponding grid disks associated with each cell and each failure group, so you know which grid disks to resize.
    SELECT dg.name, d.failgroup, d.path
    FROM V$ASM_DISK d, V$ASM_DISKGROUP dg
    WHERE d.group_number = dg.group_number
    AND dg.name IN ('RECOC1', 'DATAC1')
    ORDER BY 1, 2, 3;
    
    NAME        FAILGROUP      PATH
    ----------- -------------  ----------------------------------------------
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_00_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_01_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_02_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_03_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_04_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_05_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_06_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_07_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_08_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_09_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_10_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_11_exa01celadm01
    DATAC1      EXA01CELADM02  o/192.168.74.44/DATAC1_CD_00_exa01celadm01
    DATAC1      EXA01CELADM02  o/192.168.74.44/DATAC1_CD_01_exa01celadm01
    DATAC1      EXA01CELADM02  o/192.168.74.44/DATAC1_CD_02_exa01celadm01
    ...
    RECOC1      EXA01CELADM13  o/192.168.74.55/RECOC1_CD_00_exa01celadm13
    RECOC1      EXA01CELADM13  o/192.168.74.55/RECOC1_CD_01_exa01celadm13
    RECOC1      EXA01CELADM13  o/192.168.74.55/RECOC1_CD_02_exa01celadm13
    ...
    RECOC1      EXA01CELADM14  o/192.168.74.56/RECOC1_CD_09_exa01celadm14
    RECOC1      EXA01CELADM14  o/192.168.74.56/RECOC1_CD_10_exa01celadm14
    RECOC1      EXA01CELADM14  o/192.168.74.56/RECOC1_CD_11_exa01celadm14  
    
    168 rows returned.
    
  4. Check the cell disks for available free space.
    Free space on the cell disks can be used to increase the size of the DATAC1 grid disks.  If there is not enough available free space to expand the DATAC1 grid disks, then you must shrink the RECOC1 grid disks to provide the additional space for the desired new size of DATAC1 grid disks. 
    [root@exa01adm01 tmp]# dcli -g ~/cell_group -l root "cellcli -e list celldisk \
      attributes name,freespace" 
    exa01celadm01: CD_00_exa01celadm01 0 
    exa01celadm01: CD_01_exa01celadm01 0 
    exa01celadm01: CD_02_exa01celadm01 0 
    exa01celadm01: CD_03_exa01celadm01 0 
    exa01celadm01: CD_04_exa01celadm01 0 
    exa01celadm01: CD_05_exa01celadm01 0 
    exa01celadm01: CD_06_exa01celadm01 0 
    exa01celadm01: CD_07_exa01celadm01 0 
    exa01celadm01: CD_08_exa01celadm01 0 
    exa01celadm01: CD_09_exa01celadm01 0 
    exa01celadm01: CD_10_exa01celadm01 0 
    exa01celadm01: CD_11_exa01celadm01 0 
    ...
    

    In this example, there is no free space available, so you must first shrink the RECOC1 grid disks to provide space for the DATAC1 grid disks. In your configuration there might be plenty of free space available, in which case you can use that free space instead of shrinking the RECOC1 grid disks.

  5. Calculate the amount of space to shrink from the RECOC1 disk group and from each grid disk.

    The minimum size to safely shrink a disk group and its grid disks must take into account the following:

    • Space currently in use (USED_MB)

    • Space expected for growth (GROWTH_MB)

    • Space needed to rebalance in case of disk failure (DFC_MB), typically 15% of the total disk group size

    The minimum size calculation taking the above factors into account is: 

    Minimum DG size (MB) = (USED_MB + GROWTH_MB ) * 1.15 
    
    • USED_MB can be derived from V$ASM_DISKGROUP by calculating TOTAL_MB - FREE_MB

    • GROWTH_MB is an estimate specific to how the disk group will be used in the future and should be based on historical patterns of growth

    For the RECOC1 disk group space usage shown in Step 1, the minimum size to which the disk group can safely shrink, assuming no estimated growth, is:

    Minimum RECOC1 size = (TOTAL_MB - FREE_MB + GROWTH_MB) * 1.15

                               = ( 94980480 - 82594920 + 0) * 1.15 = 14243394 MB = 13,910 GB

    In the example output shown in Step 1, RECOC1 has plenty of free space and DATAC1 has less than 15% free. So, you could shrink RECOC1 and give the freed disk space to DATAC1. If you decide to reduce RECOC1 to half of its current size, the new size is 94980480 / 2 = 47490240 MB. This size is significantly above the minimum size we calculated for the RECOC1 disk group above, so it is safe to shrink it down to this value.

    The query in Step 2 shows that there are 168 grid disks for RECOC1, because there are 14 cells and 12 disks per cell (14 * 12 = 168). The estimated new size of each grid disk for the RECOC1 disk group is 47490240 / 168, or 282,680 MB.

    Find the closest 16 MB boundary for the new grid disk size. If you do not perform this check, then the cell will round down the grid disk size to the nearest 16 MB boundary automatically, and you could end up with a mismatch in size between the Oracle ASM disks and the grid disks.

    SQL> SELECT 16*TRUNC(&new_disk_size/16) new_disk_size FROM dual;
    Enter value for new_disk_size: 282680
    
    NEW_DISK_SIZE
    -------------
           282672
    

    Based on the above result, you should choose 282672 MB as the new size for the grid disks in the RECOC1 disk group. After resizing the grid disks, the size of the RECOC1 disk group will be 47488896 MB.
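The sizing arithmetic in this step can be sketched as a small shell calculation. This is only an illustration using the example figures from Steps 1 and 2; the variable names are not part of any Oracle tool, and you should substitute your own V$ASM_DISKGROUP values.

```shell
#!/bin/sh
# Example figures from Steps 1-2; replace with your own V$ASM_DISKGROUP values.
TOTAL_MB=94980480     # RECOC1 TOTAL_MB
FREE_MB=82594920      # RECOC1 FREE_MB
GROWTH_MB=0           # estimated future growth
NUM_DISKS=168         # grid disks in the disk group (14 cells * 12 disks)

USED_MB=$((TOTAL_MB - FREE_MB))
# Minimum safe size: (USED_MB + GROWTH_MB) * 1.15, done in integer math
MIN_DG_MB=$(( (USED_MB + GROWTH_MB) * 115 / 100 ))

NEW_DG_MB=$((TOTAL_MB / 2))               # chosen target: half the current size
PER_DISK_MB=$((NEW_DG_MB / NUM_DISKS))    # raw per-disk size
PER_DISK_MB=$(( PER_DISK_MB / 16 * 16 ))  # round down to a 16 MB boundary

echo "Minimum safe RECOC1 size: ${MIN_DG_MB} MB"
echo "New per-disk size:        ${PER_DISK_MB} MB"
echo "Resulting RECOC1 size:    $((PER_DISK_MB * NUM_DISKS)) MB"
```

With the example inputs, this prints a per-disk size of 282672 MB and a resulting disk group size of 47488896 MB, matching the values above.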

  6. Calculate how much to increase the size of each grid disk in the DATAC1 disk group.

    Ensure the Oracle ASM disk size and the grid disk sizes match across the entire disk group. The following query shows the combinations of disk sizes in each disk group. Ideally, there is only one size found for all disks and the sizes of both the Oracle ASM (total_mb) disks and the grid disks (os_mb) match.

    SELECT dg.name, d.total_mb, d.os_mb, count(1) num_disks
    FROM v$asm_diskgroup dg, v$asm_disk d
    WHERE dg.group_number = d.group_number
    GROUP BY dg.name, d.total_mb, d.os_mb;
    
    NAME                             TOTAL_MB      OS_MB  NUM_DISKS
    ------------------------------ ---------- ---------- ----------
    DATAC1                             409600     409600        168
    RECOC1                             565360     565360        168
    

    After shrinking RECOC1's grid disks, the following space is left per disk for DATAC1:

    Additional space for DATAC1 disks = RECOC1_current_size - RECOC1_new_size
                                                           = 565360 - 282672 = 282688 MB

    To calculate the new size of the grid disks for the DATAC1 disk group, use the following:

    DATAC1's disks new size  = DATAC1_ disks_current_size + new_free_space_from_RECOC1
                                              = 409600 + 282688 = 692288 MB

    Find the closest 16 MB boundary for the new grid disk size. If you do not perform this check, then the cell will round down the grid disk size to the nearest 16 MB boundary automatically, and you could end up with a mismatch in size between the Oracle ASM disks and the grid disks.

    SQL> SELECT 16*TRUNC(&new_disk_size/16) new_disk_size FROM dual;
    Enter value for new_disk_size: 692288
    
    NEW_DISK_SIZE
    -------------
           692288
    

    Based on the query result, you can use the calculated size of 692288 MB for the disks in the DATAC1 disk group because the size is on a 16 MB boundary. If the result of the query is different from the value you supplied, then you must use the value returned by the query, because that is the value to which the cell will round the grid disk size.

    The calculated value of the new grid disk size will result in the DATAC1 disk group having a total size of 116304384 MB (168 disks * 692288 MB).
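The growth calculation can be checked the same way. This is only an illustration using the example per-disk sizes from Steps 5 and 6; the variable names are illustrative, and you should substitute your own values.

```shell
#!/bin/sh
# Example per-disk sizes from Steps 5-6; replace with your own values.
RECO_OLD_MB=565360    # current RECOC1 grid disk size
RECO_NEW_MB=282672    # RECOC1 grid disk size after shrinking (Step 5)
DATA_OLD_MB=409600    # current DATAC1 grid disk size
NUM_DISKS=168

FREED_MB=$((RECO_OLD_MB - RECO_NEW_MB))    # space freed per cell disk
DATA_NEW_MB=$((DATA_OLD_MB + FREED_MB))
DATA_NEW_MB=$(( DATA_NEW_MB / 16 * 16 ))   # align to a 16 MB boundary

echo "New DATAC1 grid disk size:  ${DATA_NEW_MB} MB"
echo "New DATAC1 disk group size: $((DATA_NEW_MB * NUM_DISKS)) MB"
```

With the example inputs, this prints a per-disk size of 692288 MB and a disk group total of 116304384 MB, matching the values above.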

3.2.3.2 Shrink the Oracle ASM Disks in the Donor Disk Group

If there is no free space available on the cell disks, you can reduce the space used by one disk group to provide additional disk space for a different disk group.

This task is a continuation of an example where space in the RECOC1 disk group is being reallocated to the DATAC1 disk group.
Before resizing the disk group, make sure the disk group you are taking space from has sufficient free space.
  1. Shrink the Oracle ASM disks for the RECO disk group down to the new desired size for all disks.

    Use the new size for the disks in the RECO disk group that was calculated in Step 5 of "Determine the Amount of Available Space".

    SQL> alter diskgroup RECOC1 resize all size 282672M rebalance power 64;
    

    Note:

    The ALTER DISKGROUP command may take several minutes to complete. The SQL prompt will not return until this operation has completed.

    Wait for rebalance to finish by checking the view GV$ASM_OPERATION.

    SQL> set lines 250 pages 1000
    SQL> col error_code form a10
    SQL> SELECT dg.name, o.*
      2  FROM gv$asm_operation o, v$asm_diskgroup dg
      3  WHERE o.group_number = dg.group_number;
    

    Proceed to the next step ONLY when the query against GV$ASM_OPERATION shows no rows for the disk group being altered.

  2. Verify the new size of the ASM disks using the following queries:
    SQL> SELECT name, total_mb, free_mb, total_mb - free_mb used_mb,
      2   round(100*free_mb/total_mb,2) pct_free
      3  FROM v$asm_diskgroup
      4  ORDER BY 1;
    
    NAME                             TOTAL_MB    FREE_MB    USED_MB   PCT_FREE
    ------------------------------ ---------- ---------- ---------- ----------
    DATAC1                           68812800    9985076   58827724      14.51
    RECOC1                           47488896   35103336   12385560      73.92
    
    SQL> SELECT dg.name, d.total_mb, d.os_mb, count(1) num_disks
      2  FROM v$asm_diskgroup dg, v$asm_disk d
      3  WHERE dg.group_number = d.group_number
      4  GROUP BY dg.name, d.total_mb, d.os_mb;
    
    NAME                             TOTAL_MB      OS_MB  NUM_DISKS
    ------------------------------ ---------- ---------- ----------
    DATAC1                             409600     409600        168
    RECOC1                             282672     565360        168
    

    The above query example shows that the disks in the RECOC1 disk group have been resized to 282672 MB each, and the total disk group size is 47488896 MB.

3.2.3.3 Shrink the Grid Disks in the Donor Disk Group

After shrinking the disks in the Oracle ASM disk group, you then shrink the size of the grid disks on each cell.

This task is a continuation of an example where space in the RECOC1 disk group is being reallocated to the DATAC1 disk group.
You must have first completed the task Shrink the Oracle ASM Disks in the Donor Disk Group.
  1. Shrink the grid disks associated with the RECO disk group on all cells down to the new, smaller size.

    For each storage cell identified in Step 3 of Determine the Amount of Available Space, shrink the grid disks to match the size of the Oracle ASM disks that were shrunk in the previous task. Use commands similar to the following:

    dcli -c exa01celadm01 -l root "cellcli -e alter griddisk RECOC1_CD_00_exa01celadm01 \
    ,RECOC1_CD_01_exa01celadm01 \
    ,RECOC1_CD_02_exa01celadm01 \
    ,RECOC1_CD_03_exa01celadm01 \
    ,RECOC1_CD_04_exa01celadm01 \
    ,RECOC1_CD_05_exa01celadm01 \
    ,RECOC1_CD_06_exa01celadm01 \
    ,RECOC1_CD_07_exa01celadm01 \
    ,RECOC1_CD_08_exa01celadm01 \
    ,RECOC1_CD_09_exa01celadm01 \
    ,RECOC1_CD_10_exa01celadm01 \
    ,RECOC1_CD_11_exa01celadm01 \
    size=282672M "
    
    dcli -c exa01celadm02 -l root "cellcli -e alter griddisk RECOC1_CD_00_exa01celadm02 \
    ,RECOC1_CD_01_exa01celadm02 \
    ,RECOC1_CD_02_exa01celadm02 \
    ,RECOC1_CD_03_exa01celadm02 \
    ,RECOC1_CD_04_exa01celadm02 \
    ,RECOC1_CD_05_exa01celadm02 \
    ,RECOC1_CD_06_exa01celadm02 \
    ,RECOC1_CD_07_exa01celadm02 \
    ,RECOC1_CD_08_exa01celadm02 \
    ,RECOC1_CD_09_exa01celadm02 \
    ,RECOC1_CD_10_exa01celadm02 \
    ,RECOC1_CD_11_exa01celadm02 \
    size=282672M "
    
    ...
    
    dcli -c exa01celadm14 -l root "cellcli -e alter griddisk RECOC1_CD_00_exa01celadm14 \
    ,RECOC1_CD_01_exa01celadm14 \
    ,RECOC1_CD_02_exa01celadm14 \
    ,RECOC1_CD_03_exa01celadm14 \
    ,RECOC1_CD_04_exa01celadm14 \
    ,RECOC1_CD_05_exa01celadm14 \
    ,RECOC1_CD_06_exa01celadm14 \
    ,RECOC1_CD_07_exa01celadm14 \
    ,RECOC1_CD_08_exa01celadm14 \
    ,RECOC1_CD_09_exa01celadm14 \
    ,RECOC1_CD_10_exa01celadm14 \
    ,RECOC1_CD_11_exa01celadm14 \
    size=282672M "
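Rather than typing one dcli command per storage cell, the fourteen commands above can be generated with a small shell loop. This is only a sketch: the function name gen_resize_cmd is illustrative, and the cell list, grid disk prefix, and size are taken from the running example, so adjust all of them for your environment.

```shell
#!/bin/sh
# Emit one "cellcli -e alter griddisk ... size=..." dcli command per cell.
gen_resize_cmd() {    # usage: gen_resize_cmd <cell> <prefix> <size>
  CELL=$1 PREFIX=$2 SIZE=$3
  DISKS=$(for N in 00 01 02 03 04 05 06 07 08 09 10 11; do
            printf '%s_CD_%s_%s,' "$PREFIX" "$N" "$CELL"
          done)
  DISKS=${DISKS%,}    # drop the trailing comma
  echo "dcli -c $CELL -l root \"cellcli -e alter griddisk $DISKS size=$SIZE\""
}

# List every cell in your grid here (the example uses exa01celadm01-14).
for CELL in exa01celadm01 exa01celadm02 exa01celadm03; do
  gen_resize_cmd "$CELL" RECOC1 282672M
done
```

The loop only prints the commands; run them individually (or pipe the output to a shell) after verifying that they match the grid disks on each cell.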
    
  2. Verify the new size of the grid disks using the following command:
    [root@exa01adm01 tmp]# dcli -g cell_group -l root "cellcli -e list griddisk attributes name,size where name like \'RECOC1.*\' "
    
    exa01celadm01: RECOC1_CD_00_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_01_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_02_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_03_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_04_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_05_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_06_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_07_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_08_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_09_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_10_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_11_exa01celadm01 276.046875G  
    ...
    

    The above example shows that the disks in the RECOC1 disk group have been resized to a size of 282672 MB each (276.046875 * 1024).

3.2.3.4 Increase the Size of the Grid Disks Using Available Space

You can increase the size used by the grid disks if there is unallocated disk space either already available, or made available by shrinking the space used by a different Oracle ASM disk group.

This task is a continuation of an example where space in the RECOC1 disk group is being reallocated to the DATAC1 disk group. If you already have sufficient space to expand an existing disk group, then you do not need to reallocate space from a different disk group.

  1. Check that the cell disks have the expected amount of free space.
    After completing the tasks to shrink the Oracle ASM disks and the grid disks, you would expect to see the following free space on the cell disks:
    [root@exa01adm01 tmp]# dcli -g ~/cell_group -l root "cellcli -e list celldisk \
    attributes name,freespace"
    
    exa01celadm01: CD_00_exa01celadm01 276.0625G
    exa01celadm01: CD_01_exa01celadm01 276.0625G
    exa01celadm01: CD_02_exa01celadm01 276.0625G
    exa01celadm01: CD_03_exa01celadm01 276.0625G
    exa01celadm01: CD_04_exa01celadm01 276.0625G
    exa01celadm01: CD_05_exa01celadm01 276.0625G
    exa01celadm01: CD_06_exa01celadm01 276.0625G
    exa01celadm01: CD_07_exa01celadm01 276.0625G
    exa01celadm01: CD_08_exa01celadm01 276.0625G
    exa01celadm01: CD_09_exa01celadm01 276.0625G
    exa01celadm01: CD_10_exa01celadm01 276.0625G
    exa01celadm01: CD_11_exa01celadm01 276.0625G 
    ...
    
  2. For each storage cell, increase the size of the DATA grid disks to the desired new size.

    Use the size calculated in Determine the Amount of Available Space.

    dcli -c exa01celadm01 -l root "cellcli -e alter griddisk DATAC1_CD_00_exa01celadm01 \
    ,DATAC1_CD_01_exa01celadm01 \
    ,DATAC1_CD_02_exa01celadm01 \
    ,DATAC1_CD_03_exa01celadm01 \
    ,DATAC1_CD_04_exa01celadm01 \
    ,DATAC1_CD_05_exa01celadm01 \
    ,DATAC1_CD_06_exa01celadm01 \
    ,DATAC1_CD_07_exa01celadm01 \
    ,DATAC1_CD_08_exa01celadm01 \
    ,DATAC1_CD_09_exa01celadm01 \
    ,DATAC1_CD_10_exa01celadm01 \
    ,DATAC1_CD_11_exa01celadm01 \
    size=692288M "
    ...
    dcli -c exa01celadm14 -l root "cellcli -e alter griddisk DATAC1_CD_00_exa01celadm14 \
    ,DATAC1_CD_01_exa01celadm14 \
    ,DATAC1_CD_02_exa01celadm14 \
    ,DATAC1_CD_03_exa01celadm14 \
    ,DATAC1_CD_04_exa01celadm14 \
    ,DATAC1_CD_05_exa01celadm14 \
    ,DATAC1_CD_06_exa01celadm14 \
    ,DATAC1_CD_07_exa01celadm14 \
    ,DATAC1_CD_08_exa01celadm14 \
    ,DATAC1_CD_09_exa01celadm14 \
    ,DATAC1_CD_10_exa01celadm14 \
    ,DATAC1_CD_11_exa01celadm14 \
    size=692288M "
    
  3. Verify the new size of the grid disks associated with the DATAC1 disk group using the following command:
    dcli -g cell_group -l root "cellcli -e list griddisk attributes name,size \ 
    where name like \'DATAC1.*\' "
    
    exa01celadm01: DATAC1_CD_00_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_01_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_02_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_03_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_04_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_05_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_06_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_07_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_08_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_09_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_10_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_11_exa01celadm01 676.0625G
    

Instead of increasing the size of the DATA disk group, you could create new disk groups with the newly freed space, or keep it free for future use. In general, Oracle recommends using the smallest number of disk groups needed (typically DATA, RECO, and DBFS_DG) for the greatest flexibility and ease of administration. However, there may be cases, such as when using virtual machines or consolidating many databases, where additional disk groups or free space reserved for future use is desirable.

If you decide to leave free space on the grid disks in reserve for future use, see My Oracle Support note 1684112.1 for the steps to allocate that space to an existing disk group at a later time.

3.2.3.5 Increase the Size of the Oracle ASM Disks

You can increase the size used by the Oracle ASM disks after increasing the space allocated to the associated grid disks.

This task is a continuation of an example where space in the RECOC1 disk group is being reallocated to the DATAC1 disk group.
You must have completed the task of resizing the grid disks before you can resize the corresponding Oracle ASM disk group.
  1. Increase the Oracle ASM disks for DATAC1 disk group to the new size of the grid disks on the storage cells.
    SQL> ALTER DISKGROUP datac1 RESIZE ALL;
    

    This command resizes the Oracle ASM disks to match the size of the grid disks.

    Note:

    If the specified disk group has quorum disks configured within the disk group, then the ALTER DISKGROUP ... RESIZE ALL command could fail with error ORA-15277. Quorum disks are configured if the requirements specified in Oracle Exadata Database Machine Maintenance Guide are met.

    As a workaround, you can explicitly specify the storage server failure group names (those with FAILURE_TYPE "REGULAR", not "QUORUM") in the SQL command, for example:

    SQL> ALTER DISKGROUP datac1 RESIZE DISKS IN FAILGROUP exacell01, exacell02, exacell03;
    
  2. Wait for the rebalance operation to finish.
    SQL> set lines 250 pages 1000 
    SQL> col error_code form a10 
    SQL> SELECT dg.name, o.* FROM gv$asm_operation o, v$asm_diskgroup dg 
         WHERE o.group_number = dg.group_number;
    

    Do not continue to the next step until the query returns zero rows for the disk group that was altered.

  3. Verify that the Oracle ASM disks and disk groups are the desired sizes.
    SQL> SELECT name, total_mb, free_mb, total_mb - free_mb used_mb, 
         round(100*free_mb/total_mb,2) pct_free
         FROM v$asm_diskgroup
         ORDER BY 1;
    
    NAME                             TOTAL_MB    FREE_MB    USED_MB   PCT_FREE
    ------------------------------ ---------- ---------- ---------- ----------
    DATAC1                          116304384   57439796   58864588      49.39
    RECOC1                           47488896   34542516   12946380      72.74
    
    SQL>  SELECT dg.name, d.total_mb, d.os_mb, count(1) num_disks
          FROM  v$asm_diskgroup dg, v$asm_disk d
          WHERE dg.group_number = d.group_number
          GROUP BY dg.name, d.total_mb, d.os_mb;
     
    NAME                             TOTAL_MB      OS_MB  NUM_DISKS
    ------------------------------ ---------- ---------- ----------
    DATAC1                             692288     692288        168
    RECOC1                             282672     282672        168
    
    

    The results of the queries show that the RECOC1 and DATAC1 disk groups and their disks have been resized.

3.2.4 Determining Which Oracle ASM Disk Group Contains an Oracle Exadata Storage Server Grid Disk

When a grid disk name matches the corresponding Oracle ASM disk name and contains the Oracle ASM disk group name, you can determine from the name alone the Oracle ASM disk group to which the grid disk belongs. You can also use SQL commands on the Oracle ASM instance to find the Oracle ASM disk group that matches part of a specific grid disk name.

Example 3-3 Determining Grid Disks in an Oracle ASM Disk Group

This example shows how to find the Oracle ASM disk group that contains grid disks that begin with DATA0, for example DATA0_CD_03_CELL04.

SQL> SELECT d.label as asmdisk, dg.name as diskgroup
     FROM V$ASM_DISK d, V$ASM_DISKGROUP dg 
     WHERE dg.name LIKE 'DATA0%'
           AND d.group_number = dg.group_number;

ASMDISK                DISKGROUP
---------------------- -------------
DATA0_CD_00_CELL04      DATA0
DATA0_CD_01_CELL04      DATA0
DATA0_CD_02_CELL04      DATA0
DATA0_CD_03_CELL04      DATA0

3.2.5 Determining Which Oracle Exadata Storage Server Grid Disks Belong to an Oracle ASM Disk Group

If a grid disk name contains the Oracle ASM disk group name, then you can use SQL commands on the Oracle ASM instance to list the Oracle ASM disk group names, and use the CellCLI utility to search for specific grid disk names.

Example 3-4 Displaying Oracle ASM Disk Group Names

This example shows how to use a SQL command to display the Oracle ASM disk group names on the Oracle ASM instance.

SQL> SELECT name FROM V$ASM_DISKGROUP;

NAME
------------------------------
CONTROL
DATA0
DATA1
DATA2
LOG
STANDBY

Example 3-5 Searching for Grid Disks by Name

This example shows how to list the grid disks on a cell whose names match a pattern, using the dcli utility.

$ dcli -c cell04 "cellcli -e list griddisk where name like \'data0.*\'"

data0_CD_01_cell04
data0_CD_02_cell04
data0_CD_03_cell04
...

3.2.6 Handling Disk Replacement

When a physical disk is removed, its status becomes not present. Oracle ASM may take a grid disk offline when it gets I/O errors while trying to access the grid disk on that physical disk. When the physical disk is replaced, Oracle Exadata Storage Server Software automatically brings the grid disks on the physical disk online in their respective Oracle ASM disk groups. If a grid disk remains offline longer than the time specified by the disk_repair_time attribute, then Oracle ASM force drops that grid disk and starts a rebalance to restore data redundancy. Oracle ASM monitors the rebalance operation, and Oracle Exadata Storage Server Software sends an e-mail message when the operation is complete.

The following table summarizes the physical disk statuses, and how Oracle ASM handles grid disks when the physical disk has a problem.

Table 3-1 Physical Disk Status

normal: Disk is functioning normally.

  Action: No action.

not present: Disk has been removed.

  Action: Oracle Exadata Storage Server Software takes the disk offline, then uses the DROP ... FORCE command after the disk_repair_time limit is exceeded. The rebalance operation then begins.

predictive failure: Disk is having problems and may fail.

  Action: Oracle Exadata Storage Server Software drops the grid disks on the affected physical disk from Oracle ASM without the FORCE option, and the rebalance operation copies the data from the affected physical disk to other disks.

  After all grid disks have been successfully removed from their respective Oracle ASM disk groups, administrators can proceed with disk replacement.

critical: Disk has failed.

  Action: Oracle Exadata Storage Server Software drops the grid disks on the affected physical disk from Oracle ASM using the DROP ... FORCE command, and the rebalance operation restores data redundancy.

  Administrators can proceed with disk replacement immediately.

  This status is only available in releases 11.2.3.1.1 and earlier.

poor performance: Disk is performing poorly.

  Action: Oracle Exadata Storage Server Software attempts to drop the grid disks on the affected physical disk from Oracle ASM using the FORCE option.

  If the DROP ... FORCE command succeeds, then the rebalance operation begins to restore data redundancy, and administrators can proceed with disk replacement immediately.

  If the DROP ... FORCE command fails because of offline partners, then Oracle Exadata Storage Server Software drops the grid disks without the FORCE option, and the rebalance operation copies the data from the affected physical disk to other disks. After all grid disks have been successfully removed from their respective Oracle ASM disk groups, administrators can proceed with disk replacement.

Once a physical disk is replaced, Oracle Exadata Storage Server Software automatically creates the grid disks on the replacement disk and adds them to the respective Oracle ASM disk groups. An Oracle ASM rebalance operation relocates data to the newly added grid disks. Oracle ASM monitors the rebalance operation, and Oracle Exadata Storage Server Software sends an e-mail message when the operation is complete, or if an error occurs during the rebalance operation.