2 Administering Oracle ASM on Exadata

2.1 Overview of Oracle Exadata Storage

Storage in Oracle Exadata consists of servers, cell disks, grid disks, Oracle ASM disk groups, and Oracle ASM failure groups.

The following image shows Oracle ASM disk groups created from Oracle Exadata Storage Server grid disks. It represents a typical, but simplified, configuration that can be used as a model for building larger storage grids with additional storage servers and disks.

Figure 2-1 Sample Oracle Exadata Storage Server Grid


This example storage grid illustrates the following:

  • The storage servers in the grid use an RDMA Network Fabric network to connect to the database servers that have a single-instance database or Oracle Real Application Clusters (Oracle RAC) database installation.
  • Each storage server contains multiple physical disks.
  • Each cell disk represents a physical disk and a LUN.
  • Each cell disk is partitioned into grid disks.
  • Oracle ASM disk groups are created using the grid disks.

Oracle ASM failure groups are created to ensure that files are not mirrored on the same storage server, enabling the system to tolerate the failure of a storage server. The number of failure groups equals the number of storage servers. Each failure group is composed of a subset of grid disks in the Oracle ASM disk group that belong to a single storage server.
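For example, the following query (a minimal sketch; the disk group name DATA is an assumption) shows how the disks of a disk group are distributed across failure groups, one failure group per storage server:

SQL> SELECT dg.name AS diskgroup, d.failgroup, COUNT(*) AS num_disks
     FROM V$ASM_DISK d, V$ASM_DISKGROUP dg
     WHERE d.group_number = dg.group_number
       AND dg.name = 'DATA'
     GROUP BY dg.name, d.failgroup
     ORDER BY 1, 2;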

2.2 Administering Oracle ASM on Exadata

There are some administration tasks that may be required to use Oracle ASM on Exadata.

2.2.1 Configuring Exadata Storage Discovery for Oracle ASM

To enable Oracle ASM to discover and access Exadata grid disks, you must configure the ASM_DISKSTRING initialization parameter.

Exadata grid disks are specified by using a discovery string with the following format:

o/<cell_IP_pattern>/<griddisk_name_pattern>

In the discovery string:

  • <cell_IP_pattern> identifies Exadata storage server IP addresses (as listed in the cellip.ora file).

  • <griddisk_name_pattern> identifies grid disks by name.

The wildcard character (*) can be used to expand the <cell_IP_pattern> and <griddisk_name_pattern> values.

For example, the following ASM_DISKSTRING setting discovers all Exadata grid disks on all cells specified in the cellip.ora file:

ASM_DISKSTRING = 'o/*/*'

You can use a more specific setting to discover a subset of grid disks. For example, the following ASM_DISKSTRING setting discovers only the grid disks with names that begin with DATA:

ASM_DISKSTRING = 'o/*/DATA*'

You can change the ASM_DISKSTRING initialization parameter while the Oracle ASM instance is running by using the SQL ALTER SYSTEM command. If you instead edit the ASM_DISKSTRING initialization parameter in the initialization parameter file while the Oracle ASM instance is running, then you must shut down and restart the Oracle ASM instance for the change to take effect.
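For example, the following command (a minimal sketch, assuming the instance uses a server parameter file) changes the discovery string for both the running instance and the server parameter file:

SQL> ALTER SYSTEM SET ASM_DISKSTRING = 'o/*/*' SCOPE=BOTH;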


2.2.2 Understanding Oracle ASM Disk Groups for Oracle Exadata Storage Servers

This topic explains Oracle Automatic Storage Management (Oracle ASM) disk groups, and how to create an Oracle ASM disk group for Oracle Exadata System Software using the CREATE DISKGROUP SQL command.

Before creating an Oracle ASM disk group, determine which grid disks belong to the Oracle ASM disk group. It is recommended that you choose similar names for the Oracle ASM disk group and its grid disks whenever possible.

The Oracle Exadata Storage Server grid disks are specified with the following pattern:

o/cell_IPaddress/griddisk_name

In the preceding syntax, cell_IPaddress is the IP address of the Oracle Exadata Storage Server, and griddisk_name is the name of the grid disk.

The cell discovery strings begin with the o/ prefix.
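For example, a hypothetical grid disk named data_CD_00_cell01 on a cell with IP address 192.168.74.43 would be specified as:

o/192.168.74.43/data_CD_00_cell01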

When specifying the grid disks to be added to the disk group, consider the following:

  • The default Oracle ASM disk name is the grid disk name. Oracle recommends using the default name.
  • The default failure group name is the cell name. Oracle recommends using the default name.
  • Wildcards in the form of '*' may be used in the cell_IPaddress or griddisk_name to add multiple disks with one command. For example:
    CREATE DISKGROUP reco HIGH REDUNDANCY DISK 'o/*/RECO*'

When a failure group is not specified, Oracle ASM places each disk in its own failure group. However, when the disks are Oracle Exadata Storage Server grid disks and a failure group is not specified, Oracle ASM adds each disk to the failure group for its cell. The failure group name is the cell name.

Note:

If a cell is renamed, and a disk from that cell is added to an existing disk group that already contains disks from that cell, then Oracle ASM places the new disk in a failure group that uses the new cell name. To ensure that all of the disks from the cell are in one failure group, specify the original failure group name when adding the disk to the disk group.

To enable Smart Scan predicate offload processing, all disks in a disk group must be Oracle Exadata Storage Server grid disks. You cannot include conventional disks with Oracle Exadata Storage Server grid disks.
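To confirm that a disk group is Smart Scan capable, you can check the cell.smart_scan_capable disk group attribute. The following query is a minimal sketch; the disk group name DATA is an assumption:

SQL> SELECT dg.name, a.value
     FROM V$ASM_DISKGROUP dg, V$ASM_ATTRIBUTE a
     WHERE dg.group_number = a.group_number
       AND a.name = 'cell.smart_scan_capable'
       AND dg.name = 'DATA';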

2.2.2.1 About Fast Disk Scan Rates

To achieve fast disk scan rates, it is important to lay out segments with at least 4 MB of contiguous space. This allows disk scans to read 4 MB of data before performing another seek at a different location on disk. To ensure segments are laid out with 4 MB of contiguous space, set the Oracle ASM allocation unit size to 4 MB, and ensure data file extents are also at least 4 MB. The allocation unit can be set with the disk group attribute AU_SIZE when creating the disk group.

The following SQL command creates a disk group with the allocation unit set to 4 MB. The compatible.rdbms attribute is set to 11.2.0.2 in order to support both release 11.2.0.2 and release 11.2.0.3 databases in a consolidated environment.

SQL> CREATE DISKGROUP data NORMAL REDUNDANCY 
     DISK 'o/*/data_CD*'
     ATTRIBUTE 'compatible.rdbms' = '11.2.0.2', 
               'compatible.asm' = '11.2.0.3',
               'content.type' = 'data',
               'cell.smart_scan_capable' = 'TRUE',
               'au_size' = '4M';
2.2.2.2 Setting the Oracle ASM Content Type

Setting the content.type disk group attribute enhances fault tolerance, especially for normal redundancy disk groups.

Commencing with Oracle Grid Infrastructure release 11.2.0.3, Oracle ASM provides administrators with the option to specify the content type associated with each disk group. This capability is provided by the content.type disk group attribute. Three possible settings are allowed: data, recovery, or system. Each content type setting modifies the adjacency measure used by the secondary extent placement algorithm.

The result is that the contents of disk groups with different content type settings are distributed differently across the available disks. This decreases the likelihood that a double failure will result in data loss across multiple normal redundancy disk groups with different content type settings. Likewise, a triple failure is less likely to result in data loss on multiple high redundancy disk groups with different content type settings.

The value of the content.type attribute should be set as follows:

  • DATA and SPARSE disk groups — data

  • RECO disk group — recovery

  • DBFS_DG disk group (if present) — system

Following this recommendation enhances fault tolerance. For example, even if the DATA and RECO disk groups use normal redundancy, a simultaneous failure of two disks is unlikely to affect both disk groups. Therefore, even if DATA is dismounted, databases can typically be recovered from the backups in RECO.

Note:

  • Do not use the content.type attribute to distinguish the availability characteristics of disk groups that are used for a different purpose, such as those created to support a particular service.

  • The Oracle Database and Oracle Grid Infrastructure software must be release 12.1.0.2.0 BP5 or later when using sparse grid disks.

  1. Use the ALTER DISKGROUP command to set the content.type attribute for an existing disk group, and then rebalance the disk group.

    For example:

    ALTER DISKGROUP reco SET ATTRIBUTE 'content.type'='recovery';
    ALTER DISKGROUP reco REBALANCE POWER preferred_power_setting;
    

    The rebalance operation can take a long time, but the data in the disk group is fully redundant throughout the operation. Oracle ASM monitors the rebalance operation, and Oracle Exadata System Software sends an e-mail message when the operation completes.

  2. Check the content.type attributes using the following query:
    SQL> SELECT dg.name, a.value
         FROM v$asm_diskgroup dg, v$asm_attribute a
         WHERE dg.group_number = a.group_number
         AND a.name = 'content.type'
         AND (dg.name LIKE 'DATA%' OR dg.name LIKE 'RECO%' OR dg.name LIKE 'DBFS_DG%');
     
    NAME                 VALUE
    -------------------- --------------------
    DATA                 data
    RECO                 recovery
    DBFS_DG              system

Example 2-1 Specifying content.type While Creating a Disk Group

In this example, the compatible.rdbms attribute is set to 11.2.0.2 in order to support both Oracle Database release 11.2.0.2 and release 11.2.0.3 databases in a consolidated environment.

CREATE DISKGROUP data NORMAL REDUNDANCY
DISK 'o/*/DATA*'
ATTRIBUTE 'content.type' = 'DATA',
'AU_SIZE' = '4M',
'cell.smart_scan_capable'='TRUE',
'compatible.rdbms'='11.2.0.2',
'compatible.asm'='11.2.0.3';

2.2.3 Creating Oracle ASM Disk Groups

You can create Oracle ASM disk groups on Oracle Exadata Storage Server grid disks.

To create an Oracle ASM disk group to use Oracle Exadata Storage Server grid disks, perform the following procedure:

  1. Connect to the Oracle ASM instance.
  2. Ensure that the ORACLE_SID environment variable is set to the Oracle ASM instance using a command similar to the following:
    $ setenv ORACLE_SID ASM_instance_SID
    
  3. Start SQL*Plus on the Oracle ASM instance, and log in as a user with SYSASM administrative privileges.
    $ sqlplus / AS SYSASM
    
  4. Determine which Oracle Exadata Storage Server grid disks are available by querying the V$ASM_DISK view on the Oracle ASM instance, using the following syntax:
    SQL> SELECT path, header_status STATUS FROM V$ASM_DISK WHERE path LIKE 'o/%';
    
  5. Create an Oracle ASM disk group to include disks on the cells.

    In this example, the ALTER DISKGROUP command is needed to change the attributes of the disk group created during installation to hold the OCR and voting files. The compatible.rdbms attribute is set to 11.2.0.4 in order to support Oracle Database release 11.2.0.4 and later releases in a consolidated environment.

    SQL> CREATE DISKGROUP data HIGH REDUNDANCY
    DISK 'o/*/DATA*'
    ATTRIBUTE 'AU_SIZE' = '4M',
              'content.type' = 'data',
              'compatible.rdbms'='11.2.0.4',
              'compatible.asm'='19.0.0.0';
    
    SQL> CREATE DISKGROUP reco HIGH REDUNDANCY
    DISK 'o/*/RECO*'
    ATTRIBUTE 'AU_SIZE' = '4M',
              'content.type' = 'recovery',
              'compatible.rdbms'='11.2.0.4',
              'compatible.asm'='19.0.0.0';
     
    REM for Exadata systems prior to X7
    SQL> ALTER DISKGROUP dbfs_dg SET ATTRIBUTE 
         'content.type' = 'system',
         'compatible.rdbms' = '11.2.0.4';
    

    When creating sparse disk groups, use a command similar to the following:

    SQL> CREATE DISKGROUP sparsedg NORMAL REDUNDANCY
    DISK 'o/*/sparse_*'
    ATTRIBUTE 'AU_SIZE' = '4M',
              'content.type' = 'data',
              'cell.smart_scan_capable'='TRUE',
              'compatible.rdbms' = '12.1.0.2',
              'compatible.asm' = '19.0.0.0', 
              'cell.sparse_dg' = 'allsparse';
    

    In the preceding command, the cell.sparse_dg attribute defines the disk group as a sparse disk group. The attribute is not required if the disk group is not a sparse disk group.

    Note:

    • When defining sparse grid disks, the compatible.asm and compatible.rdbms attributes must be at least 12.1.0.2.0.
    • The Oracle ASM disk group compatible attributes take precedence over the COMPATIBLE initialization parameter for the Oracle ASM instance.
    • The Oracle Database and Oracle Grid Infrastructure software must be release 12.1.0.2.0 BP5 or later when using sparse grid disks.
    • The recommended allocation unit size (AU_SIZE) is 4 MB for Oracle ASM disk groups on Exadata.
  6. View the Oracle ASM disk groups and associated attributes with a SQL query on V$ASM dynamic views.
    SQL> SELECT dg.name AS diskgroup, SUBSTR(a.name,1,24) AS name, 
         SUBSTR(a.value,1,24) AS value FROM V$ASM_DISKGROUP dg, V$ASM_ATTRIBUTE a 
         WHERE dg.group_number = a.group_number;
    
    DISKGROUP                    NAME                       VALUE
    ---------------------------- ------------------------ ------------------------
    DATA                         compatible.rdbms           11.2.0.4
    DATA                         compatible.asm             19.0.0.0
    DATA                         au_size                    4194304
    DATA                         disk_repair_time           3.6h
    DATA                         cell.smart_scan_capable    TRUE
    ...
    
  7. Create a tablespace in the disk group to take advantage of Oracle Exadata System Software features, such as offload processing. The tablespace should contain the tables that you want to query with offload processing.
    SQL> CREATE TABLESPACE tablespace_name DATAFILE '+DATA';
    

    In the preceding command, +DATA is the name of the Oracle ASM disk group.

  8. Verify that the tablespace is in an Oracle Exadata Storage Server disk group. The PREDICATE_EVALUATION column of the DBA_TABLESPACES view indicates whether predicates are evaluated by host (HOST) or by storage (STORAGE).
    SQL> SELECT tablespace_name, predicate_evaluation FROM dba_tablespaces
         WHERE tablespace_name = 'DATA_TB';
    
    TABLESPACE_NAME                PREDICA
    ------------------------------ -------
    DATA_TB                        STORAGE
    

2.2.4 Adding a Disk to an Oracle ASM Disk Group

You can add a disk to an Oracle ASM disk group.

You might need to do this if you are adding a new Oracle Exadata Storage Server or managing a custom disk group.

Do not add Oracle Exadata Storage Server grid disks to an Oracle ASM disk group that is not on an Oracle Exadata Storage Server unless you are planning to migrate the disk group to an Oracle Exadata Storage Server disk group.

  1. Determine which disks are available by querying the V$ASM_DISK view on the Oracle ASM instance.

    If the header status is set to CANDIDATE, then the disk is a candidate for a disk group.
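    For example, the following query (a minimal sketch) lists the Exadata grid disks that are not yet members of any disk group:

    SQL> SELECT path, header_status
         FROM V$ASM_DISK
         WHERE path LIKE 'o/%'
           AND header_status = 'CANDIDATE';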

  2. Use the SQL command ALTER DISKGROUP with the ADD DISK clause to add the disk to the Oracle ASM disk group.

    For example:

    SQL> ALTER DISKGROUP disk_group_name 
    ADD DISK 'o/cell_IP_address/grid_disk_prefix*';
    

After the disk is added, Oracle ASM rebalances the disk group. Oracle ASM monitors the rebalance operation, and Oracle Exadata System Software sends an e-mail message when the operation is complete.

You can query the V$ASM_OPERATION view for the status of the rebalance operation.
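For example (a minimal sketch):

SQL> SELECT group_number, operation, state, power, est_minutes
     FROM V$ASM_OPERATION;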

2.2.5 Mounting or Dismounting an Oracle ASM Disk Group

A disk group must be mounted by Oracle ASM before a database can access the files in the disk group.

Mounting a disk group requires discovering all of the disks and locating the files in the disk group. When an Oracle ASM instance starts, the disk groups specified in the ASM_DISKGROUPS initialization parameter are automatically mounted.

Additionally:

  • To mount a disk group, you can use the SQL ALTER DISKGROUP command with the MOUNT option, as shown in the example after this list.
    You can use the FORCE option in conjunction with the ALTER DISKGROUP ... MOUNT command to mount a disk group even if a disk is unavailable. However, this compromises redundancy in the disk group.
  • To dismount a disk group, you can use the SQL ALTER DISKGROUP command with the DISMOUNT option.
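For example, the following commands (a minimal sketch, assuming a disk group named DATA) mount, force-mount, and dismount the disk group:

SQL> ALTER DISKGROUP data MOUNT;
SQL> ALTER DISKGROUP data MOUNT FORCE;
SQL> ALTER DISKGROUP data DISMOUNT;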

2.2.6 Changing a Disk to Offline or Online

You can take an Oracle ASM disk offline or bring it online by setting the corresponding Oracle Exadata Storage Server grid disk to INACTIVE or ACTIVE.

  1. Determine which disk you want offline or online in the Oracle ASM disk group.

    Query the V$ASM_DISK and V$ASM_DISKGROUP views on the Oracle ASM instance.
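    For example, the following query (a minimal sketch) shows the path, mount status, and mode status of each disk in every disk group:

    SQL> SELECT dg.name AS diskgroup, d.path, d.mount_status, d.mode_status
         FROM V$ASM_DISK d, V$ASM_DISKGROUP dg
         WHERE d.group_number = dg.group_number;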

  2. Use one of the following commands:
    • To make a disk inactive, use the following command:

      CellCLI> ALTER GRIDDISK gdisk_name INACTIVE
      
    • To make a disk active, use the following command:

      CellCLI> ALTER GRIDDISK gdisk_name ACTIVE
      

    As soon as the disk is online, the disk group is rebalanced.

Oracle ASM monitors the rebalance operation, and Oracle Exadata System Software sends an e-mail message when the operation is complete.

You can query the V$ASM_OPERATION view for the status of the rebalance operation.

2.2.7 Dropping a Disk from an Oracle ASM Disk Group

You can drop a grid disk from a disk group.

  1. Determine which disks you want to drop from the Oracle ASM disk group.

    Query the V$ASM_DISK and V$ASM_DISKGROUP views on the Oracle ASM instance.

    If you are removing an Oracle Exadata Storage Server grid disk, then ensure that you identify the grid disks that are mapped to each Oracle ASM disk group.
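    For example, the following query (a minimal sketch) lists the grid disk paths that belong to each Oracle ASM disk group:

    SQL> SELECT dg.name AS diskgroup, d.name AS asmdisk, d.path
         FROM V$ASM_DISK d, V$ASM_DISKGROUP dg
         WHERE d.group_number = dg.group_number
         ORDER BY 1, 2;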

  2. Use the SQL ALTER DISKGROUP command with the DROP DISK clause to drop the disks from the Oracle ASM disk group.
    SQL> ALTER DISKGROUP disk_group_name 
    DROP DISK data_CD_11_cell01;

    Do not use the FORCE option when dropping the disk from the Oracle ASM disk group. If you use the FORCE option, then Oracle Exadata System Software attempts to add the disk back to the disk group when the disk online automation operation is triggered, for example, by rebooting the storage server. See Enhanced Manageability Features in Oracle Exadata Database Machine System Overview.

When the disk is dropped from the Oracle ASM disk group, Oracle ASM rebalances the disk group. Oracle ASM monitors the rebalance operation, and Oracle Exadata System Software sends an e-mail message when the operation is complete.

You can query the V$ASM_OPERATION view for the status of the rebalance operation.

After an Oracle Exadata Storage Server grid disk is dropped from the Oracle ASM disk group, you can drop the grid disk from the cell.

2.2.8 Dropping an Oracle ASM Disk Group

You can drop an Oracle ASM disk group.

If you cannot mount a disk group but must drop it, then use the FORCE option with the DROP DISKGROUP command.

  1. Determine the disk group that you want to drop.
    Query the V$ASM_DISKGROUP view on the Oracle ASM instance.
  2. Use the SQL DROP DISKGROUP command to drop the Oracle ASM disk group.
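    For example, the following commands (a minimal sketch, assuming a disk group named DATA) drop a mounted disk group along with its contents, and force the drop of a disk group that cannot be mounted:

    SQL> DROP DISKGROUP data INCLUDING CONTENTS;

    SQL> DROP DISKGROUP data FORCE INCLUDING CONTENTS;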

2.2.9 Enabling the Oracle ASM appliance.mode Attribute

The Oracle ASM appliance.mode attribute improves disk rebalance completion time when dropping one or more Oracle ASM disks.

Setting the appliance.mode attribute helps restore redundancy faster after a failure. The attribute can only be enabled on disk groups that meet the following requirements:

  • The Oracle ASM disk group attribute compatible.asm is set to release 11.2.0.4 or later, or 12.1.0.2 or later.

  • The cell.smart_scan_capable attribute is set to TRUE.

  • All disks in the disk group are the same type; for example, all disks are hard disks or all disks are flash disks.

  • All disks in the disk group are the same size.

  • All failure groups in the disk group have an equal number of disks:

    • For eighth rack configurations, all failure groups have 4 disks, or all failure groups have 6 disks.

    • For all other rack configurations, all failure groups have 10 disks, or all failure groups have 12 disks.

  • There are at least 3 failure groups in the disk group.

  • No disk in the disk group is offline.

Note:

Enabling the appliance.mode attribute for existing disk groups may cause an increase of data movement during the next rebalance operation.

The appliance.mode attribute is automatically enabled when you create a new disk group. For existing disk groups, you must explicitly set the attribute by using the ALTER DISKGROUP command:

SQL> ALTER DISKGROUP disk_group SET ATTRIBUTE 'appliance.mode'='TRUE';

Note:

The appliance.mode attribute should normally be set to TRUE. In rare cases, it may be necessary to disable appliance.mode as a workaround when adding disks to a disk group. After the disks are added, re-enable appliance.mode and perform a rebalance operation.

To disable the appliance.mode attribute during disk group creation, set the attribute to FALSE.

SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
DISK
'o/*/DATA*'
ATTRIBUTE 'content.type' = 'data',
          'au_size' = '4M',
          'cell.smart_scan_capable'='TRUE',
          'compatible.rdbms'='11.2.0.3',
          'compatible.asm'='11.2.0.4',
          'appliance.mode'='FALSE';

2.2.10 Checking Disk Group Balance

Files should be equally balanced across all disks. The following queries and script can be used to check disk group balance:

  • To check I/O balance, query the V$ASM_DISK_IOSTAT view before and after running a large SQL statement (see the sample query after this list). For example, if a large query performs many reads, then the values in the read column and the read_bytes column should be approximately the same for all disks in the disk group.

  • To check all mounted disk groups, run the script available in My Oracle Support document 367445.1.
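For example, the following query is a minimal sketch of an I/O balance check; the disk group number 1 is an assumption, and the totals should be roughly uniform across disks in a balanced disk group:

SQL> SELECT disk_number, SUM(reads) AS reads, SUM(writes) AS writes
     FROM V$ASM_DISK_IOSTAT
     WHERE group_number = 1
     GROUP BY disk_number
     ORDER BY disk_number;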

2.2.11 Setting the Oracle ASM Disk Repair Timer

The Oracle ASM disk repair timer represents the amount of time a disk can remain offline before it is dropped by Oracle ASM. While the disk is offline, Oracle ASM tracks the changed extents so the disk can be resynchronized when it comes back online. The default disk repair time is 3.6 hours. If the default is inadequate, then the attribute value can be changed to the maximum amount of time it might take to detect and repair a temporary disk failure. The following command is an example of changing the disk repair timer value to 8.5 hours for the DATA disk group:

SQL> ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '8.5h';

The disk_repair_time attribute does not change the repair timer for disks currently offline. The repair timer for those offline disks is either the default repair timer or the repair timer specified on the command line when the disks were manually set to offline. To change the repair timer for currently offline disks, use the OFFLINE command and specify a repair timer value. The following command is an example of changing the disk repair timer value for disks that are offline:

SQL> ALTER DISKGROUP data OFFLINE DISK data_CD_06_cell11 DROP AFTER 20h;

Note:

Vulnerability to a double failure increases in line with increases to the disk repair time value.

2.3 Administering Oracle Exadata Storage Server Grid Disks with Oracle ASM

Use the following procedures for managing grid disks used with Oracle ASM.

2.3.1 Naming Conventions for Oracle Exadata Storage Server Grid Disks

Using a consistent naming convention helps to identify Exadata components.

The name of the grid disk should contain the cell disk name to make it easier to determine which grid disks belong to a cell disk. To help determine which grid disks belong to an Oracle ASM disk group, a subset of the grid disk name should match all or part of the name of the Oracle ASM disk group to which the grid disk will belong.

For example, if a grid disk is created on the cell disk CD_03_cell01, and that grid disk belongs to an Oracle ASM disk group named data0, then the grid disk name should be data0_CD_03_cell01.

When you use the ALL PREFIX option with CREATE GRIDDISK, a unique grid disk name is automatically generated that includes the prefix and cell name. If you do not use the default generated name when creating grid disks, then you must ensure that the grid disk name is unique across all cells. You cannot have multiple disks with the same name in an Oracle ASM disk group.
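For example, the following CellCLI command (a sketch; the prefix data0 is an assumption) creates grid disks whose names combine the prefix with each cell disk name:

CellCLI> CREATE GRIDDISK ALL PREFIX=data0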

2.3.2 Changing an Oracle Exadata Storage Server Grid Disk That Belongs to an Oracle ASM Disk Group

Before you change a grid disk that belongs to an Oracle ASM disk group, you must consider how the change might affect the Oracle ASM disk group to which the grid disk belongs.

2.3.2.1 Changing an Oracle Exadata Storage Server Grid Disk Name

Use the CellCLI interface to change the name of a grid disk.

  • To change attributes of a grid disk, use the CellCLI ALTER GRIDDISK command.

    Use the DESCRIBE GRIDDISK command to determine which Oracle Exadata Storage Server grid disk attributes can be modified.

    Caution:

    Before changing the name of a grid disk that belongs to an Oracle ASM disk group, ensure that the corresponding Oracle ASM disk is offline.

Example 2-2 Changing an Oracle Exadata Storage Server Grid Disk Name

Use the ALTER GRIDDISK command to rename a grid disk.

CellCLI> ALTER GRIDDISK data011 name='data0_CD_03_cell04'
2.3.2.2 Dropping an Oracle Exadata Storage Server Grid Disk

To drop an Oracle Exadata Storage Server grid disk, use the CellCLI DROP GRIDDISK command.

Make the grid disk inactive before dropping it to ensure that the grid disk is not in use. The FORCE option can be used to drop a grid disk that is still in use.

Caution:

  • Before dropping a grid disk that belongs to an Oracle ASM disk group, ensure that the corresponding Oracle ASM disk was dropped from the disk group.

  • Before dropping a grid disk using the FORCE option, ensure that the Oracle ASM disk was dropped from the disk group. If you drop a grid disk that is still part of an ASM disk group, you may compromise data redundancy in the disk group or cause the disk group to dismount.

  1. Drop the Oracle ASM disk from the disk group.
    SQL> ALTER DISKGROUP disk_group_name DROP DISK disk_name;
    
  2. Make the corresponding grid disk inactive.
    CellCLI> ALTER GRIDDISK disk_name INACTIVE
    
  3. Drop the grid disk.
    CellCLI> DROP GRIDDISK disk_name
    

Example 2-3 Dropping a specific grid disk

After you have dropped the Oracle ASM disk from the disk group, you can drop the related grid disk.

CellCLI> ALTER GRIDDISK data0_CD_03_cell04 INACTIVE
CellCLI> DROP GRIDDISK data0_CD_03_cell04

Example 2-4 Dropping all grid disks

After you have dropped the Oracle ASM disks from the disk group, you can drop multiple grid disks using a single command.

CellCLI> ALTER GRIDDISK ALL INACTIVE
CellCLI> DROP GRIDDISK ALL PREFIX=data0

Example 2-5 Using the FORCE option when dropping a grid disk

The FORCE option forces an active grid disk to be dropped. For example, if you cannot make a grid disk INACTIVE, but must drop the grid disk, you can use the FORCE option.

Use the FORCE option cautiously. If you drop a grid disk that is still part of an ASM disk group, you may compromise data redundancy in the disk group or cause the disk group to dismount.

CellCLI> DROP GRIDDISK data02_CD_04_cell01 FORCE

2.3.3 Resizing Grid Disks

You can resize grid disks and Oracle ASM disk groups to shrink one with excess free space and increase the size of another that is near capacity.

Initial configuration of Oracle Exadata disk group sizes is based on Oracle best practices and the location of the backup files.
  • For internal backups: allocation of available space is 40% for the DATA disk groups, and 60% for the RECO disk groups.

  • For external backups: allocation of available space is 80% for the DATA disk group, and 20% for the RECO disk group.

The disk group allocations can be changed after deployment. For example, the DATA disk group allocation may be too small at 60%, and need to be resized to 80%.

If your system has no free space available on the cell disks and one disk group, for example RECO, has plenty of free space, then you can resize the RECO disk group to a smaller size and reallocate the free space to the DATA disk group. The free space available after shrinking the RECO disk group is at a non-contiguous offset from the existing space allocations for the DATA disk group. Grid disks can use space anywhere on the cell disks and do not have to be contiguous.

If you are expanding the grid disks and the cell disks already have sufficient space to expand the existing grid disks, then you do not need to first resize an existing disk group. In that case, skip the tasks below that shrink the RECO disk group and its grid disks (but still verify that the cell disks have enough free space before growing the DATA grid disks). The amount of free space the administrator should reserve depends on the level of failure coverage.

If you are shrinking the size of the grid disks, you should understand how space is reserved for mirroring. Oracle ASM protects data by using normal or high redundancy to maintain one or two mirror copies of each file extent. The mirror copies are stored in separate failure groups, so a failure in one failure group does not affect the mirror copies and the data remains accessible.

When a failure occurs, Oracle ASM re-mirrors, or rebalances, any extents that are not accessible so that redundancy is reestablished. For the re-mirroring process to succeed, sufficient free space must exist in the disk group to allow creation of the new file extent mirror copies. If there is not enough free space, then some extents will not be re-mirrored and the subsequent failure of the other data copies will require the disk group to be restored from backup. Oracle ASM sends an error when a re-mirror process fails due to lack of space.

You must be using Oracle Exadata System Software release 12.1.2.1.0 or higher, or have the patch for bug 19695225 applied to your software.

This procedure for resizing grid disks applies to bare metal and virtual machine (VM) deployments.

2.3.3.1 Determine the Amount of Available Space

To increase the size of the disks in a disk group, you must either have unallocated disk space available or reallocate space that is currently used by a different disk group.

You can also use a script available in "Script to Calculate New Grid Disk and Disk Group Sizes in Exadata (My Oracle Support Doc ID 1464809.1)" to assist in determining how much free space is available to shrink a disk group.

  1. View the space currently used by the disk groups.
    SELECT name, total_mb, free_mb, total_mb - free_mb used_mb, round(100*free_mb/total_mb,2) pct_free
    FROM v$asm_diskgroup
    ORDER BY 1;
    
    NAME                             TOTAL_MB    FREE_MB    USED_MB   PCT_FREE
    ------------------------------ ---------- ---------- ---------- ----------
    DATAC1                           68812800    9985076   58827724      14.51
    RECOC1                           94980480   82594920   12385560      86.96

    The example above shows that the DATAC1 disk group has only about 15% of free space available while the RECOC1 disk group has about 87% free disk space. The PCT_FREE displayed here is raw free space, not usable free space. Additional space is needed for rebalancing operations.

  2. For the disk groups you plan to resize, view the count and status of the failure groups used by the disk groups.
    SELECT dg.name, d.failgroup, d.state, d.header_status, d.mount_status, 
     d.mode_status, count(1) num_disks
    FROM V$ASM_DISK d, V$ASM_DISKGROUP dg
    WHERE d.group_number = dg.group_number
    AND dg.name IN ('RECOC1', 'DATAC1')
    GROUP BY dg.name, d.failgroup, d.state, d.header_status, d.mount_status,
      d.mode_status
    ORDER BY 1, 2, 3;
    
    NAME       FAILGROUP      STATE      HEADER_STATU MOUNT_S  MODE_ST  NUM_DISKS
    ---------- -------------  ---------- ------------ -------- -------  ---------
    DATAC1     EXA01CELADM01  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM02  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM03  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM04  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM05  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM06  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM07  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM08  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM09  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM10  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM11  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM12  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM13  NORMAL     MEMBER        CACHED  ONLINE   12
    DATAC1     EXA01CELADM14  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM01  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM02  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM03  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM04  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM05  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM06  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM07  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM08  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM09  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM10  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM11  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM12  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM13  NORMAL     MEMBER        CACHED  ONLINE   12
    RECOC1     EXA01CELADM14  NORMAL     MEMBER        CACHED  ONLINE   12
    

    The above example is for a full rack, which has 14 cells and 14 failure groups for DATAC1 and RECOC1. Verify that each failure group has at least 12 disks in the NORMAL state (num_disks). If you see disks listed as MISSING, or you see an unexpected number of disks for your configuration, then do not proceed until you resolve the problem.

    Extreme Flash systems should see a disk count of 8 instead of 12 for num_disks.

  3. List the corresponding grid disks associated with each cell and each failure group, so you know which grid disks to resize.
    SELECT dg.name, d.failgroup, d.path
    FROM V$ASM_DISK d, V$ASM_DISKGROUP dg
    WHERE d.group_number = dg.group_number
    AND dg.name IN ('RECOC1', 'DATAC1')
    ORDER BY 1, 2, 3;
    
    NAME        FAILGROUP      PATH
    ----------- -------------  ----------------------------------------------
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_00_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_01_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_02_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_03_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_04_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_05_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_06_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_07_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_08_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_09_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_10_exa01celadm01
    DATAC1      EXA01CELADM01  o/192.168.74.43/DATAC1_CD_11_exa01celadm01
    DATAC1      EXA01CELADM02  o/192.168.74.44/DATAC1_CD_00_exa01celadm01
    DATAC1      EXA01CELADM02  o/192.168.74.44/DATAC1_CD_01_exa01celadm01
    DATAC1      EXA01CELADM02  o/192.168.74.44/DATAC1_CD_02_exa01celadm01
    ...
    RECOC1      EXA01CELADM13  o/192.168.74.55/RECOC1_CD_00_exa01celadm13
    RECOC1      EXA01CELADM13  o/192.168.74.55/RECOC1_CD_01_exa01celadm13
    RECOC1      EXA01CELADM13  o/192.168.74.55/RECOC1_CD_02_exa01celadm13
    ...
    RECOC1      EXA01CELADM14  o/192.168.74.56/RECOC1_CD_09_exa01celadm14
    RECOC1      EXA01CELADM14  o/192.168.74.56/RECOC1_CD_10_exa01celadm14
    RECOC1      EXA01CELADM14  o/192.168.74.56/RECOC1_CD_11_exa01celadm14  
    
    336 rows returned.
  4. Check the cell disks for available free space.
    Free space on the cell disks can be used to increase the size of the DATAC1 grid disks. If there is not enough available free space to expand the DATAC1 grid disks, then you must shrink the RECOC1 grid disks to provide the additional space for the desired new size of DATAC1 grid disks.
    [root@exa01adm01 tmp]# dcli -g ~/cell_group -l root "cellcli -e list celldisk \
      attributes name,freespace" 
    exa01celadm01: CD_00_exa01celadm01 0 
    exa01celadm01: CD_01_exa01celadm01 0 
    exa01celadm01: CD_02_exa01celadm01 0 
    exa01celadm01: CD_03_exa01celadm01 0 
    exa01celadm01: CD_04_exa01celadm01 0 
    exa01celadm01: CD_05_exa01celadm01 0 
    exa01celadm01: CD_06_exa01celadm01 0 
    exa01celadm01: CD_07_exa01celadm01 0 
    exa01celadm01: CD_08_exa01celadm01 0 
    exa01celadm01: CD_09_exa01celadm01 0 
    exa01celadm01: CD_10_exa01celadm01 0 
    exa01celadm01: CD_11_exa01celadm01 0 
    ...

    In this example, there is no free space available, so you must shrink the RECOC1 grid disks first to provide space for the DATAC1 grid disks. In your configuration there might be plenty of free space available and you can use that free space instead of shrinking the RECOC1 grid disks.

  5. Calculate the amount of space to shrink from the RECOC1 disk group and from each grid disk.

    The minimum size to safely shrink a disk group and its grid disks must take into account the following:

    • Space currently in use (USED_MB)

    • Space expected for growth (GROWTH_MB)

    • Space needed to rebalance in case of disk failure (DFC_MB), typically 15% of total disk group size

    The minimum size calculation taking the above factors into account is:

    Minimum DG size (MB) = ( USED_MB + GROWTH_MB ) * 1.15 
    • USED_MB can be derived from V$ASM_DISKGROUP by calculating TOTAL_MB - FREE_MB

    • GROWTH_MB is an estimate specific to how the disk group will be used in the future and should be based on historical patterns of growth

    Using the RECOC1 disk group space usage shown in Step 1, and assuming no projected growth, the minimum size that the disk group can shrink to is:

    Minimum RECOC1 size = (TOTAL_MB - FREE_MB + GROWTH_MB) * 1.15

    = ( 94980480 - 82594920 + 0) * 1.15 = 14243394 MB = 13,910 GB

    In the example output shown in Step 1, RECOC1 has plenty of free space and DATAC1 has less than 15% free. So, you could shrink RECOC1 and give the freed disk space to DATAC1. If you decide to reduce RECOC1 to half of its current size, the new size is 94980480 / 2 = 47490240 MB. This size is significantly above the minimum size we calculated for the RECOC1 disk group above, so it is safe to shrink it down to this value.

    The query in Step 2 shows that there are 168 grid disks for RECOC1, because there are 14 cells and 12 disks per cell (14 * 12 = 168). The estimated new size of each grid disk for the RECOC1 disk group is 47490240 / 168, or 282,680 MB.

    Find the closest 16 MB boundary for the new grid disk size. If you do not perform this check, then the cell will round down the grid disk size to the nearest 16 MB boundary automatically, and you could end up with a mismatch in size between the Oracle ASM disks and the grid disks.

    SQL> SELECT 16*TRUNC(&new_disk_size/16) new_disk_size FROM dual;
    Enter value for new_disk_size: 282680
    
    NEW_DISK_SIZE
    -------------
           282672

    Based on the above result, you should choose 282672 MB as the new size for the grid disks in the RECOC1 disk group. After resizing the grid disks, the size of the RECOC1 disk group will be 47488896 MB.

  6. Calculate how much to increase the size of each grid disk in the DATAC1 disk group.

    Ensure the Oracle ASM disk size and the grid disk sizes match across the entire disk group. The following query shows the combinations of disk sizes in each disk group. Ideally, there is only one size found for all disks and the sizes of both the Oracle ASM (total_mb) disks and the grid disks (os_mb) match.

    SELECT dg.name, d.total_mb, d.os_mb, count(1) num_disks
    FROM v$asm_diskgroup dg, v$asm_disk d
    WHERE dg.group_number = d.group_number
    GROUP BY dg.name, d.total_mb, d.os_mb;
    
    NAME                             TOTAL_MB      OS_MB  NUM_DISKS
    ------------------------------ ---------- ---------- ----------
    DATAC1                             409600     409600        168
    RECOC1                             565360     565360        168
    

    After shrinking RECOC1's grid disks, the following space is left per disk for DATAC1:

    Additional space for DATAC1 disks = RECOC1_current_size - RECOC1_new_size
                                                           = 565360 - 282672 = 282688 MB

    To calculate the new size of the grid disks for the DATAC1 disk group, use the following:

    DATAC1 disks new size = DATAC1_disks_current_size + new_free_space_from_RECOC1
                          = 409600 + 282688 = 692288 MB

    Find the closest 16 MB boundary for the new grid disk size. If you do not perform this check, then the cell will round down the grid disk size to the nearest 16 MB boundary automatically, and you could end up with a mismatch in size between the Oracle ASM disks and the grid disks.

    SQL> SELECT 16*TRUNC(&new_disk_size/16) new_disk_size FROM dual;
    Enter value for new_disk_size: 692288
    
    NEW_DISK_SIZE
    -------------
           692288

    Based on the query result, you can use the calculated size of 692288 MB for the disks in the DATAC1 disk groups because the size is on a 16 MB boundary. If the result of the query is different from the value you supplied, then you must use the value returned by the query because that is the value to which the cell will round the grid disk size.

    The calculated value of the new grid disk size will result in the DATAC1 disk group having a total size of 116304384 MB (168 disks * 692288 MB).

2.3.3.2 Shrink the Oracle ASM Disks in the Donor Disk Group

If there is no free space available on the cell disks, you can reduce the space used by one disk group to provide additional disk space for a different disk group.

This task is a continuation of an example where space in the RECOC1 disk group is being reallocated to the DATAC1 disk group.
Before resizing the disk group, make sure the disk group you are taking space from has sufficient free space.
  1. Shrink the Oracle ASM disks for the RECO disk group down to the new desired size for all disks.

    Use the new size for the disks in the RECO disk group that was calculated in Step 5 of Determine the Amount of Available Space.

    SQL> ALTER DISKGROUP recoc1 RESIZE ALL SIZE 282672M REBALANCE POWER 64;

    Note:

    The ALTER DISKGROUP command may take several minutes to complete. The SQL prompt will not return until this operation has completed.

    If the specified disk group has quorum disks configured within the disk group, then the ALTER DISKGROUP ... RESIZE ALL command could fail with error ORA-15277. Quorum disks are configured if the requirements specified in Managing Quorum Disks for High Redundancy Disk Groups are met.

    As a workaround, for regular storage server failure groups (FAILGROUP_TYPE=REGULAR, not QUORUM), you can specify the failure group names explicitly in the SQL command, for example:

    SQL> ALTER DISKGROUP recoc1 RESIZE DISKS IN FAILGROUP exacell01 SIZE 282672M,
    exacell02 SIZE 282672M, exacell03 SIZE 282672M REBALANCE POWER 64;

    Wait for rebalance to finish by checking the view GV$ASM_OPERATION.

    SQL> set lines 250 pages 1000
    SQL> col error_code form a10
    SQL> SELECT dg.name, o.*
      2  FROM gv$asm_operation o, v$asm_diskgroup dg
      3  WHERE o.group_number = dg.group_number;

    Proceed to the next step ONLY when the query against GV$ASM_OPERATION shows no rows for the disk group being altered.

  2. Verify the new size of the ASM disks using the following queries:
    SQL> SELECT name, total_mb, free_mb, total_mb - free_mb used_mb,
      2   ROUND(100*free_mb/total_mb,2) pct_free
      3  FROM v$asm_diskgroup
      4  ORDER BY 1;
    
    NAME                             TOTAL_MB    FREE_MB    USED_MB   PCT_FREE
    ------------------------------ ---------- ---------- ---------- ----------
    DATAC1                           68812800    9985076   58827724      14.51
    RECOC1                           47488896   35103336   12385560      73.92
    
    SQL> SELECT dg.name, d.total_mb, d.os_mb, COUNT(1) num_disks
      2  FROM v$asm_diskgroup dg, v$asm_disk d
      3  WHERE dg.group_number = d.group_number
      4  GROUP BY dg.name, d.total_mb, d.os_mb;
    
    NAME                             TOTAL_MB      OS_MB  NUM_DISKS
    ------------------------------ ---------- ---------- ----------
    DATAC1                             409600     409600        168
    RECOC1                             282672     565360        168

    The above query example shows that the disks in the RECOC1 disk group have been resized to 282672 MB each, and the total disk group size is 47488896 MB.

2.3.3.3 Shrink the Grid Disks in the Donor Disk Group

After shrinking the disks in the Oracle ASM disk group, you then shrink the size of the grid disks on each cell.

This task is a continuation of an example where space in the RECOC1 disk group is being reallocated to the DATAC1 disk group.
You must have first completed the task Shrink the Oracle ASM Disks in the Donor Disk Group.
  1. Shrink the grid disks associated with the RECO disk group on all cells down to the new, smaller size.

    For each storage cell identified in Determine the Amount of Available Space in Step 3, shrink the grid disks to match the size of the Oracle ASM disks that were shrunk in the previous task. Use commands similar to the following:

    dcli -c exa01celadm01 -l root "cellcli -e alter griddisk RECOC1_CD_00_exa01celadm01 \
    ,RECOC1_CD_01_exa01celadm01 \
    ,RECOC1_CD_02_exa01celadm01 \
    ,RECOC1_CD_03_exa01celadm01 \
    ,RECOC1_CD_04_exa01celadm01 \
    ,RECOC1_CD_05_exa01celadm01 \
    ,RECOC1_CD_06_exa01celadm01 \
    ,RECOC1_CD_07_exa01celadm01 \
    ,RECOC1_CD_08_exa01celadm01 \
    ,RECOC1_CD_09_exa01celadm01 \
    ,RECOC1_CD_10_exa01celadm01 \
    ,RECOC1_CD_11_exa01celadm01 \
    size=282672M "
    
    dcli -c exa01celadm02 -l root "cellcli -e alter griddisk RECOC1_CD_00_exa01celadm02 \
    ,RECOC1_CD_01_exa01celadm02 \
    ,RECOC1_CD_02_exa01celadm02 \
    ,RECOC1_CD_03_exa01celadm02 \
    ,RECOC1_CD_04_exa01celadm02 \
    ,RECOC1_CD_05_exa01celadm02 \
    ,RECOC1_CD_06_exa01celadm02 \
    ,RECOC1_CD_07_exa01celadm02 \
    ,RECOC1_CD_08_exa01celadm02 \
    ,RECOC1_CD_09_exa01celadm02 \
    ,RECOC1_CD_10_exa01celadm02 \
    ,RECOC1_CD_11_exa01celadm02 \
    size=282672M "
    
    ...
    
    dcli -c exa01celadm14 -l root "cellcli -e alter griddisk RECOC1_CD_00_exa01celadm14 \
    ,RECOC1_CD_01_exa01celadm14 \
    ,RECOC1_CD_02_exa01celadm14 \
    ,RECOC1_CD_03_exa01celadm14 \
    ,RECOC1_CD_04_exa01celadm14 \
    ,RECOC1_CD_05_exa01celadm14 \
    ,RECOC1_CD_06_exa01celadm14 \
    ,RECOC1_CD_07_exa01celadm14 \
    ,RECOC1_CD_08_exa01celadm14 \
    ,RECOC1_CD_09_exa01celadm14 \
    ,RECOC1_CD_10_exa01celadm14 \
    ,RECOC1_CD_11_exa01celadm14 \
    size=282672M "
  2. Verify the new size of the grid disks using the following command:
    [root@exa01adm01 tmp]# dcli -g cell_group -l root "cellcli -e list griddisk attributes name,size where name like \'RECOC1.*\' "
    
    exa01celadm01: RECOC1_CD_00_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_01_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_02_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_03_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_04_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_05_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_06_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_07_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_08_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_09_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_10_exa01celadm01 276.046875G
    exa01celadm01: RECOC1_CD_11_exa01celadm01 276.046875G  
    ...

    The above example shows that the disks in the RECOC1 disk group have been resized to a size of 282672 MB each (276.046875 * 1024).

2.3.3.4 Increase the Size of the Grid Disks Using Available Space

You can increase the size used by the grid disks if there is unallocated disk space either already available, or made available by shrinking the space used by a different Oracle ASM disk group.

This task is a continuation of an example where space in the RECOC1 disk group is being reallocated to the DATAC1 disk group. If you already have sufficient space to expand an existing disk group, then you do not need to reallocate space from a different disk group.

  1. Check that the cell disks have the expected amount of free space.
    After completing the tasks to shrink the Oracle ASM disks and the grid disks, you would expect to see the following free space on the cell disks:
    [root@exa01adm01 tmp]# dcli -g ~/cell_group -l root "cellcli -e list celldisk \
    attributes name,freespace"
    
    exa01celadm01: CD_00_exa01celadm01 276.0625G
    exa01celadm01: CD_01_exa01celadm01 276.0625G
    exa01celadm01: CD_02_exa01celadm01 276.0625G
    exa01celadm01: CD_03_exa01celadm01 276.0625G
    exa01celadm01: CD_04_exa01celadm01 276.0625G
    exa01celadm01: CD_05_exa01celadm01 276.0625G
    exa01celadm01: CD_06_exa01celadm01 276.0625G
    exa01celadm01: CD_07_exa01celadm01 276.0625G
    exa01celadm01: CD_08_exa01celadm01 276.0625G
    exa01celadm01: CD_09_exa01celadm01 276.0625G
    exa01celadm01: CD_10_exa01celadm01 276.0625G
    exa01celadm01: CD_11_exa01celadm01 276.0625G 
    ...
  2. For each storage cell, increase the size of the DATA grid disks to the desired new size.

    Use the size calculated in Determine the Amount of Available Space.

    dcli -c exa01celadm01 -l root "cellcli -e alter griddisk DATAC1_CD_00_exa01celadm01 \
    ,DATAC1_CD_01_exa01celadm01 \
    ,DATAC1_CD_02_exa01celadm01 \
    ,DATAC1_CD_03_exa01celadm01 \
    ,DATAC1_CD_04_exa01celadm01 \
    ,DATAC1_CD_05_exa01celadm01 \
    ,DATAC1_CD_06_exa01celadm01 \
    ,DATAC1_CD_07_exa01celadm01 \
    ,DATAC1_CD_08_exa01celadm01 \
    ,DATAC1_CD_09_exa01celadm01 \
    ,DATAC1_CD_10_exa01celadm01 \
    ,DATAC1_CD_11_exa01celadm01 \
    size=692288M "
    ...
    dcli -c exa01celadm14 -l root "cellcli -e alter griddisk DATAC1_CD_00_exa01celadm14 \
    ,DATAC1_CD_01_exa01celadm14 \
    ,DATAC1_CD_02_exa01celadm14 \
    ,DATAC1_CD_03_exa01celadm14 \
    ,DATAC1_CD_04_exa01celadm14 \
    ,DATAC1_CD_05_exa01celadm14 \
    ,DATAC1_CD_06_exa01celadm14 \
    ,DATAC1_CD_07_exa01celadm14 \
    ,DATAC1_CD_08_exa01celadm14 \
    ,DATAC1_CD_09_exa01celadm14 \
    ,DATAC1_CD_10_exa01celadm14 \
    ,DATAC1_CD_11_exa01celadm14 \
    size=692288M "
  3. Verify the new size of the grid disks associated with the DATAC1 disk group using the following command:
    dcli -g cell_group -l root "cellcli -e list griddisk attributes name,size \ 
    where name like \'DATAC1.*\' "
    
    exa01celadm01: DATAC1_CD_00_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_01_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_02_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_03_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_04_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_05_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_06_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_07_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_08_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_09_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_10_exa01celadm01 676.0625G
    exa01celadm01: DATAC1_CD_11_exa01celadm01 676.0625G

Instead of increasing the size of the DATA disk group, you could instead create new disk groups with the new free space or keep it free for future use. In general, Oracle recommends using the smallest number of disk groups needed (typically DATA, RECO, and DBFS_DG) to give the greatest flexibility and ease of administration. However, there may be cases, perhaps when using virtual machines or consolidating many databases, where additional disk groups or available free space for future use may be desired.

If you decide to leave free space on the grid disks in reserve for future use, please see the My Oracle Support Note 1684112.1 for the steps on how to allocate free space to an existing disk group at a later time.

2.3.3.5 Increase the Size of the Oracle ASM Disks

You can increase the size used by the Oracle ASM disks after increasing the space allocated to the associated grid disks.

This task is a continuation of an example where space in the RECOC1 disk group is being reallocated to the DATAC1 disk group.
You must have completed the task of resizing the grid disks before you can resize the corresponding Oracle ASM disk group.
  1. Increase the Oracle ASM disks for DATAC1 disk group to the new size of the grid disks on the storage cells.
    SQL> ALTER DISKGROUP datac1 RESIZE ALL;

    This command resizes the Oracle ASM disks to match the size of the grid disks.

    Note:

    If the specified disk group has quorum disks configured within the disk group, then the ALTER DISKGROUP ... RESIZE ALL command could fail with error ORA-15277. Quorum disks are configured if the requirements specified in Oracle Exadata Database Machine Maintenance Guide are met.

    As a workaround, for regular storage server failure groups (FAILGROUP_TYPE=REGULAR, not QUORUM), you can specify the failure group names explicitly in the SQL command, for example:

    SQL> ALTER DISKGROUP datac1 RESIZE DISKS IN FAILGROUP exacell01, exacell02, exacell03;
  2. Wait for the rebalance operation to finish.
    SQL> set lines 250 pages 1000 
    SQL> col error_code form a10 
    SQL> SELECT dg.name, o.* FROM gv$asm_operation o, v$asm_diskgroup dg 
         WHERE o.group_number = dg.group_number;

    Do not continue to the next step until the query returns zero rows for the disk group that was altered.

  3. Verify that the new sizes of the Oracle ASM disks and disk groups are as expected.
    SQL> SELECT name, total_mb, free_mb, total_mb - free_mb used_mb, 
         ROUND(100*free_mb/total_mb,2) pct_free
         FROM v$asm_diskgroup
         ORDER BY 1;
    
    NAME                             TOTAL_MB    FREE_MB    USED_MB   PCT_FREE
    ------------------------------ ---------- ---------- ---------- ----------
    DATAC1                          116304384   57439796   58864588      49.39
    RECOC1                           47488896   34542516   12946380      72.74
    
    SQL>  SELECT dg.name, d.total_mb, d.os_mb, COUNT(1) num_disks
          FROM  v$asm_diskgroup dg, v$asm_disk d
          WHERE dg.group_number = d.group_number
          GROUP BY dg.name, d.total_mb, d.os_mb;
     
    NAME                             TOTAL_MB      OS_MB  NUM_DISKS
    ------------------------------ ---------- ---------- ----------
    DATAC1                             692288     692288        168
    RECOC1                             282672     282672        168
    
    

    The results of the queries show that the RECOC1 and DATAC1 disk groups and disks have been resized.

2.3.4 Determining Which Oracle ASM Disk Group Contains an Oracle Exadata Storage Server Grid Disk

If a grid disk name matches the Oracle ASM disk name, and the name contains the Oracle ASM disk group name, then you can determine the Oracle ASM disk group to which the grid disk belongs.

You can also use SQL commands on the Oracle ASM instance to find the Oracle ASM disk group that matches part of the specific grid disk name. This can help you to determine which Oracle ASM disk group contains a specific grid disk.

Example 2-6 Determining Grid Disks in an Oracle ASM Disk Group

This example shows how to find the Oracle ASM disk group that contains grid disks that begin with DATA0, for example DATA0_CD_03_CELL04.

SQL> SELECT d.label AS asmdisk, dg.name AS diskgroup
     FROM V$ASM_DISK d, V$ASM_DISKGROUP dg 
     WHERE dg.name LIKE 'DATA0%'
           AND d.group_number = dg.group_number;

ASMDISK                DISKGROUP
---------------------- -------------
DATA0_CD_00_CELL04      DATA0
DATA0_CD_01_CELL04      DATA0
DATA0_CD_02_CELL04      DATA0
DATA0_CD_03_CELL04      DATA0

2.3.5 Determining Which Oracle Exadata Storage Server Grid Disks Belong to an Oracle ASM Disk Group

If a grid disk name contains the Oracle ASM disk group name, then you can use SQL commands on the Oracle ASM instance to list the Oracle ASM disk group names.

You can use the CellCLI utility to search for specific grid disk names.

Example 2-7 Displaying Oracle ASM Disk Group Names

This example shows how to use a SQL command to display the Oracle ASM disk group names on the Oracle ASM instance.

SQL> SELECT name FROM V$ASM_DISKGROUP;

NAME
------------------------------
CONTROL
DATA0
DATA1
DATA2
LOG
STANDBY

Example 2-8 Searching for Grid Disks by Name

This example shows how to list the grid disk names on a cell by using the dcli utility.

$ dcli -c cell04 "cellcli -e list griddisk"

data0_CD_01_cell04
data0_CD_02_cell04
data0_CD_03_cell04
...

2.3.6 Handling Disk Replacement

If a disk has a problem, the physical disk status changes.

When a physical disk is removed, its status becomes not present. Oracle ASM may take a grid disk offline when it receives I/O errors while trying to access a grid disk on the physical disk. When the physical disk is replaced, Oracle Exadata System Software automatically puts the grid disks on the physical disk online in their respective Oracle ASM disk groups. If a grid disk remains offline longer than the time specified by the disk_repair_time attribute, then Oracle ASM force drops that grid disk and starts a rebalance to restore data redundancy. Oracle ASM monitors the rebalance operation, and Oracle Exadata System Software sends an e-mail message when the operation is complete.
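You can check the current status of the physical disks on a cell with a CellCLI command like the following (a minimal sketch):

CellCLI> LIST PHYSICALDISK ATTRIBUTES name, status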

The following table summarizes the physical disk statuses, and how Oracle ASM handles grid disks when the physical disk has a problem.

Table 2-1 Physical Disk Status

Each entry lists the physical disk status, its meaning, and the corresponding Oracle Exadata System Software action.

  • normal: Disk is functioning normally.
    Action: No action.

  • not present: Disk has been removed.
    Action: Oracle Exadata System Software offlines the disk, then drops it using the DROP ... FORCE command after the disk_repair_time limit is exceeded. The rebalance operation begins.

  • predictive failure: Disk is having problems and may fail.
    Action: Oracle Exadata System Software drops the grid disks on the affected physical disk from Oracle ASM without the FORCE option, and the rebalance operation copies the data on the affected physical disk to other disks.
    After all grid disks have been successfully removed from their respective Oracle ASM disk groups, administrators can proceed with disk replacement.

  • critical: Disk has failed. This status is only available for releases 11.2.3.1.1 and earlier.
    Action: Oracle Exadata System Software drops the grid disks on the affected physical disk from Oracle ASM using the DROP ... FORCE command, and the rebalance operation restores data redundancy.
    Administrators can proceed with disk replacement immediately.

  • poor performance: Disk is performing poorly.
    Action: Oracle Exadata System Software attempts to drop the grid disks on the affected physical disk from Oracle ASM using the FORCE option.
    If the DROP ... FORCE command is successful, then the rebalance operation begins to restore data redundancy, and administrators can proceed with disk replacement immediately.
    If the DROP ... FORCE command fails due to offline partners, then Oracle Exadata System Software drops the grid disks on the affected physical disk from Oracle ASM without the FORCE option, and the rebalance operation copies the data on the affected physical disk to other disks. After all grid disks have been successfully removed from their respective Oracle ASM disk groups, administrators can proceed with disk replacement.

After a physical disk is replaced, Oracle Exadata System Software automatically creates the grid disks on the replacement disk, and adds them to the respective Oracle ASM disk groups. An Oracle ASM rebalance operation relocates data to the newly-added grid disks. Oracle ASM monitors the rebalance operation, and Oracle Exadata System Software sends an e-mail message when the operation is complete.