9 Setting up Oracle Exadata Storage Snapshots

9.1 Before You Begin With Exadata Snapshots

Understand the optimal use cases before you begin with Exadata Snapshots.

In addition to production database workloads, many customers use Oracle Exadata for non-production workloads, including application development, testing, quality assurance, and other uses.

Exadata Snapshots are ideal for creating space-efficient read-only or read-write clones of an Oracle database that you can use for development, testing, or other non-production purposes. They are particularly useful for non-production databases that leverage Exadata performance and availability features; for example, application functionality testing that validates the use of Exadata Smart Scan.

However, before you begin with Exadata Snapshots, be aware that some non-production database needs may be better satisfied using alternative technologies or approaches. Consider the following in conjunction with your business needs:

  • If you require full end-to-end performance and high availability (HA) testing, Oracle recommends a matching test environment that mirrors the production environment. This is the only solution to fully evaluate performance gains or regressions in response to hardware, software, database, or application changes.

  • If your requirements emphasize dynamic snapshot capabilities without requiring Exadata smart storage features, then consider using Oracle Advanced Cluster File System (Oracle ACFS) snapshots on Exadata.

    For information about Oracle ACFS snapshots, see the Oracle ACFS documentation.

9.2 Overview of Exadata Snapshots

Traditionally, to clone databases in a production system, you would create test master and snapshot databases on a non-Exadata system (Figure 9-1). In some cases, each of these databases is a full copy that consumes as much storage as its source (Figure 9-2).

Figure 9-1 Traditional Clone to a Non-Exadata System


Figure 9-2 Database Clone That Is a Full Copy of Its Source


If the clones are full copies of the production database, this approach is expensive in terms of the storage consumed and the time it takes to create the clones. Imagine creating ten clones of a multi-terabyte database, and it is easy to see why this approach does not scale.

Another drawback to this approach is that Oracle Exadata System Software features such as Smart Scan, Smart Logging, and Smart Flash are not available on non-Exadata systems.

To solve these problems, you can use Exadata Snapshots. Because each snapshot consumes only a small fraction of the disk space of its source and can be created in seconds, Exadata Snapshots are especially valuable when multiple clones are required.

Note:

Exadata Snapshots should be used only for development and testing purposes. They should not be used in production environments.

An Exadata Snapshot is based on a test master, which is a full clone of the source database. The test master is the only full copy of the source database. From a single test master you can create multiple Exadata Snapshots with minimal additional storage and minimal effort. Each Exadata Snapshot uses a small fraction of the disk space required for the test master and can be created or dropped in seconds. Each Exadata Snapshot is a logical copy of the test master.

Before creating Exadata Snapshots from the test master, you can modify the data in the test master, if required. For example, you can delete or mask sensitive data in the test master before making it available to non-privileged users.

Creating an Exadata Snapshot from the test master is as simple as recording the parent file name in the child file header, an operation that completes in seconds and requires minimal disk space. Additional disk space is consumed only when the user of the snapshot begins to change data. Only new data is written to data blocks that are allocated to the snapshot on write. All requests for data that has not changed are serviced by the data blocks of the test master.

Multiple users can create independent snapshots from the same test master. This enables multiple development and test environments to share space while maintaining independent databases for each user. The following image shows an Exadata environment with three Exadata Snapshots that use the same test master.

Figure 9-4 An Exadata Environment with 3 Exadata Snapshots from the Same Test Master


Hierarchical and Read-Write Snapshots

Oracle Exadata System Software release 12.2.1.1.0 introduces hierarchical and read-write snapshots. Hierarchical snapshots enable you to create snapshots from snapshots, which is useful if you are working on a snapshot and want to save a copy before making additional changes. In a hierarchy, you can create a snapshot at any level descended from the test master. Exadata Snapshots are writable: a snapshot points to its parent's blocks for data, and when you change data in a snapshot, the changed data is written to blocks allocated to the snapshot, while unchanged data continues to be read from the parent's blocks.

If you have taken a snapshot from a snapshot and you want to modify the parent snapshot, then you must first delete all snapshots that depend on it.

9.2.1 Exadata Snapshot Concepts

There are various objects you need to understand when creating and using Exadata Snapshots.

9.2.1.1 Sparse Grid Disks

A sparse grid disk has a virtual size attribute as well as a physical size.

A sparse Oracle ASM disk group is composed of sparse grid disks.

The maximum physical size that can be allocated to sparse grid disks from a single cell disk is 4 TB. The maximum allowed virtual size is 100 TB.

You create sparse grid disks before creating a sparse Oracle ASM disk group. If you are deploying a new system or re-deploying an existing system, set aside some space from each cell disk for use by the disk group just as you would for any other disk group.

If you want to use an existing system without reimaging, see My Oracle Support note 1467056.1 for instructions on resizing existing grid disks.

9.2.1.2 Sparse Disk Groups

Exadata Snapshots utilize Oracle ASM sparse disk groups.

Sparse data files can be created only in Oracle Automatic Storage Management (Oracle ASM) sparse disk groups. The following figure shows a sparse disk group containing three Exadata Snapshots.

Figure 9-5 Sparse Disk Group Contains Exadata Snapshots


A sparse disk group has the following attributes:

  • compatible.asm must be set to 12.1.0.2 or higher.
  • compatible.rdbms must be set to 12.1.0.2 or higher.
  • cell.smart_scan_capable must be set to true.
  • cell.sparse_dg must be set to allsparse. This attribute identifies the disk group to Oracle ASM as being made up of sparse grid disks.
  • As is the case for all Oracle ASM disk groups on Exadata, the recommended allocation unit (AU) size is 4M.

For example, the following SQL command creates a sparse disk group:

SQL> CREATE DISKGROUP SPARSE
  NORMAL REDUNDANCY 
  DISK 'o/*/SPARSE_*'
  ATTRIBUTE 
    'compatible.asm'          = '12.1.0.2',
    'compatible.rdbms'        = '12.1.0.2',
    'cell.smart_scan_capable' = 'true',
    'cell.sparse_dg'          = 'allsparse',
    'au_size'                 = '4M';

A sparse Oracle ASM disk group can store both sparse and non-sparse files, which makes it possible to store a test master database and Exadata Snapshots in the same disk group. However, because sparse grid disks are limited in size (to a maximum of 4 TB on a cell disk) and because there is no benefit in placing non-sparse files in a sparse disk group, you should reserve space in sparse disk groups for the sparse files associated with Exadata Snapshots and place the test master database in a regular (non-sparse) disk group.

9.2.1.3 Sparse Database and Sparse Files

In a sparse database, such as an Exadata Snapshot database, its data files are sparse files.

A database consists of the following files:

  • control files
  • online redo logs
  • temp files
  • data files

A sparse file contains only changes made to blocks from the parent file (the parent file remains unchanged) and maintains a pointer to the parent file for access to unchanged data.

The Exadata Snapshot has its own copy of the other database files (control files, online redo logs, and temp files). These other database files are not sparse files.
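
Once a snapshot exists, you can inspect the parent-child relationship between its sparse data files and the test master's files from within the snapshot database. The following is a minimal sketch using the V$CLONEDFILE view (the same view used for verification later in this chapter):

SQL> SELECT filenumber num, clonefilename child, snapshotfilename parent
     FROM v$clonedfile;

Each row maps a sparse child file in the snapshot to the read-only parent file from which it draws unchanged data.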

9.2.2 Exadata Snapshot Support of Exadata Features

In addition to space and time savings, Exadata Snapshots provide cost-effective development, quality assurance and test environments on Oracle Exadata.

Exadata Snapshots can be used by developers and testers who need to validate functionality. Because snapshots run in a fully functional Exadata environment, you can also use them to exercise Exadata features (for example, Exadata Smart Flash Cache, Exadata Smart Scan Offload, and Exadata Hybrid Columnar Compression) and to practice maintenance and operational steps.

9.2.3 Separate Test/Development and Production Environments

Oracle recommends that test and development environments be hosted on a separate physical Oracle Exadata Rack from the rack hosting the production database.

An Oracle Exadata system dedicated to development and test is ideal. Test masters and their associated Exadata Snapshots would be hosted on this system. Alternatively, it may be an Oracle Exadata system that hosts Oracle Data Guard standby databases for high availability, disaster recovery, or other purposes, as permitted by capacity. Test masters and their snapshots may reside in either physical or virtual machines on an Oracle Exadata system.

9.2.4 Types of Exadata Snapshots

You can create two types of Exadata Snapshots, depending on the current setup of your environment.

  • Using a test master from a pluggable database (PDB).

    You can create Exadata Snapshot PDBs from individual PDBs within a container database (CDB). You can clone individual PDBs to create the test master PDB, from which Exadata Snapshot PDBs are generated.

    You can move Exadata Snapshot PDBs from one CDB to another in the same Exadata cluster. You can also create an Exadata Snapshot PDB in one container from a test master PDB residing in another container as long as both CDBs are in the same Exadata cluster.

    The test master database and its Exadata Snapshots must be in the same Oracle Automatic Storage Management (Oracle ASM) cluster environment.

    The following image shows a production CDB with three PDBs. Two of the PDBs (PDB1 and PDB2) have been cloned to create test master PDBs, which were then unplugged and plugged into another CDB on the test Oracle Exadata. In this figure, six Exadata Snapshot PDBs have been created from the test master PDBs.

    Figure 9-6 Exadata Snapshot PDBs

  • Using a test master from a whole database (CDB or non-CDB)

    An Exadata Snapshot database can be a snapshot of a complete container database (CDB) or non-container database (non-CDB). A snapshot of a complete CDB includes all of the PDBs in the CDB.

    Note:

    To create a snapshot of a complete CDB, you must apply Patch 32233739 to the Oracle home directory that supports the Exadata Snapshot database.

    The next figure depicts a full clone of the production database. The clone, which is the test master database, is hosted on a separate test/development system, and it is associated with six Exadata Snapshots. The test master database is created using either an Oracle Data Guard standby database (recommended if the test master will be refreshed on a regular basis) or Oracle Recovery Manager (RMAN).

    The test master database and its Exadata Snapshots must be in the same Oracle ASM cluster environment.

    Figure 9-7 Exadata Snapshot Databases


9.2.5 Hierarchical Snapshot Databases

Hierarchical snapshots enable you to create snapshot databases from other snapshot databases.

Oracle Exadata System Software release 12.2.1.1.0 introduced hierarchical snapshots. You might want to use hierarchical snapshots if you are working on your snapshot database and you want to save a copy before you make additional changes to it.

There is no set limit to the number of levels allowed in the hierarchy, but for performance and management reasons, a practical limit of 10 levels is recommended.

Figure 9-8 Hierarchical Snapshot Databases


A snapshot database points to its parent’s blocks for data. If you make a change to a snapshot database, then the snapshot database allocates a new block for the changed data. For the unchanged data, the snapshot points to the parent’s blocks. A snapshot that is several levels descended from the original test master will retrieve its data by traversing up the tree starting with the snapshot database from which it was created.

Figure 9-9 Allocation of Blocks in Hierarchical Snapshot Databases


If you have taken a snapshot database from another snapshot database, and you make a change to the parent snapshot database, then you have to drop all snapshot databases that depend on that snapshot database. The parent snapshot database becomes read-only when child snapshots are created from it. If you want to write to the parent snapshot again, you must drop all child snapshots.

9.2.6 Sparse Test Masters

With the introduction of hierarchical snapshots, you can now create sparse test masters.

Hierarchical snapshot databases provide the capability to create sparse test masters. With a sparse test master, there is one full copy of data that provides the parent data for multiple levels of snapshots. When you are ready to create snapshots, you create an additional set of sparse files to receive updates from your production database via a replication technology such as Oracle Data Guard.

When you need a new test master with current data, rather than cloning the full production database again as of the new point in time, you mark the previous set of sparse files read-only (creating a sparse test master) and create a new set of sparse files to be kept current. Additional snapshots can then be created on this sparse test master for use by application developers and testers. You can repeat this process up to 9 times in total, allowing one full copy of the production database to support snapshots from multiple points in time. Creating the new sparse test master is very quick (5 minutes or less), making it easy to keep current with production.

The sparse test master database can be a container database (CDB) or non-container database (non-CDB).

Figure 9-10 Example configuration with Sparse Test Masters


9.3 Prerequisites for Exadata Snapshot Databases

Before creating Exadata snapshot databases, check that your environment meets these requirements.

  • Storage servers must be Oracle Exadata Database Machine X3-2 or later

  • Oracle Exadata System Software 12.1.2.1.0 or later for Oracle Exadata Storage Servers and Oracle Exadata Database Servers

    You cannot downgrade to an earlier version with sparse grid disks on a cell.

  • Oracle Grid Infrastructure software release 12.1.0.2.0 BP5 or later

    The Oracle ASM disk group that contains the sparse Oracle ASM grid disks must have both COMPATIBLE.RDBMS and COMPATIBLE.ASM set to 12.1.0.2 or later.

    The parent disk group can be 11.2 compatible.

  • For disk groups hosting test master files, COMPATIBLE.RDBMS must be set to 11.2.0.0.0 or later.
  • Oracle Database software release 12.1.0.2.0 BP5 or later

    The parent database and the snapshot database must be 12.1.0.2 compatible.

  • The data files for the snapshot database and the parent database must be on the same Oracle ASM cluster.

  • The db_block_size database initialization parameter must be at least 4096 and must be a multiple of 4096. (A verification sketch follows this list.)

  • If you are using hierarchical snapshot databases, sparse test master databases, the new --sparse option for the Oracle ASM cp command, or the new setsparseparent Oracle ASM command, then you need Oracle Database and Oracle Grid Infrastructure software release 12.2.0.1.0 and Oracle Exadata System Software release 12.2.1.1.0 or later.

  • Exadata sparse grid disks, and hence Oracle ASM sparse disk groups, cannot be created on Exadata XT storage servers.
  • To create a snapshot of a complete container database (CDB), you must apply Patch 32233739 to the Oracle home directory that supports the Exadata Snapshot database.
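
As a quick check against several of these prerequisites, you can query the relevant dynamic views. The following is a sketch: the disk group name SPARSE is an assumption, the first two queries run in the Oracle ASM instance, and the last runs in the database instance.

-- Confirm disk group compatibility settings
SQL> SELECT name, compatibility, database_compatibility
     FROM v$asm_diskgroup
     WHERE name = 'SPARSE';

-- Confirm the sparse and Smart Scan attributes
SQL> SELECT g.name disk_group, a.name attribute, a.value
     FROM v$asm_attribute a, v$asm_diskgroup g
     WHERE a.group_number = g.group_number
     AND a.name IN ('cell.sparse_dg', 'cell.smart_scan_capable');

-- Confirm the database block size is a multiple of 4096
SQL> SELECT value FROM v$parameter WHERE name = 'db_block_size';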


9.4 Sparse Disk Sizing and Allocation Methodology

To create Exadata Snapshots, you must create sparse grid disks and then create an Oracle ASM disk group based on those disks.

It is not possible to directly modify an existing disk group and convert it into a sparse disk group; you must create the grid disks with the sparse attributes. If you want to use an existing disk group, then the disk group and associated grid disks must be dropped and re-created.

If you want to create a new sparse disk group and all of your space is currently allocated to existing disk groups, then you must resize one or more existing disk groups to free up space. Use the following steps to perform this process.

9.4.1 Sizing Steps for New Sparse Disk Groups

These steps describe how to determine the space needed for sparse disk groups and how to allocate that space.

The maximum physical size that can be allocated to sparse grid disks from a single cell disk is 4 TB.

The maximum virtual size that can be allocated per sparse grid disk is 100 TB. However, to avoid additional performance impact, Oracle recommends against creating large virtual sizes unless they are truly required.

Remember also that a sparse disk group can house non-sparse database files, in which case the physical storage utilization matches the virtual file size.

  1. Determine the amount of physical space for your snapshots. Use the formula found in Calculating the Physical Size for Grid Disks.

    Note that a good starting point for physical space allocation for sparse grid disks is 15% of your total space. This can be tuned up or down as required based on utilization patterns and future needs.

  2. Determine the amount of virtual space you will require for your snapshots. Use the formula found in Calculating the Virtual Size for Grid Disks.

    Note that the virtual size can be modified online by modifying the Oracle ASM disk group without making any changes to the underlying grid disks.

  3. Using the number of snapshots you plan to have at any one time and the amount of change you expect to make, use the process defined in Determine the Amount of Available Space to determine which existing disk groups you want to shrink to release space for the sparse disk group. Based on your current allocations, you may need to shrink multiple disk groups to get the space you require for the sparse disk group. Leave a minimum of 15% free space in the original disk groups to allow for rebalancing during the resize operation.

    If there is not enough free space available in your current disk groups to provide desired space for the sparse disk group:

    • Rethink your sparse usage in this environment and downsize accordingly
    • Relocate objects and databases from this environment to another with available free space
    • Add storage
  4. Once you have decided which disk group to resize, follow the steps in Shrink the Oracle ASM Disks in the Donor Disk Group and Shrink the Grid Disks in the Donor Disk Group to resize the disks.

    This process allows you to resize disks while they are online and requires only a single rebalance operation per disk group. Oracle ASM may run more than one rebalance operation concurrently, depending on the number of Oracle ASM instances available in the cluster.

  5. Create the sparse grid disk from the space just released using the commands outlined in Creating Sparse Grid Disks.
  6. Create the sparse Oracle ASM disk group using the commands outlined in Creating an ASM Disk Group for the Sparse Grid Disks.
  7. To monitor sparse grid disk activity, see Monitoring Sparse Disk Group Utilization.
  8. If you find that you need to resize your sparse grid disks:
    • If more physical space is required, follow steps 1 through 4 in this section to free up space and then follow the steps in Resizing the Physical Space. The maximum physical size that can be allocated to sparse grid disks from a single cell disk is 4 TB.

    • If more virtual space is required, follow the steps in Resizing the Virtual Space. The maximum virtual size that can be allocated per sparse grid disk is 100 TB.

9.5 Refresh Considerations, or Lifecycle of Exadata Snapshots

Refresh cycles can influence how you use and create Exadata Snapshots.

The test master must remain either open in the READ ONLY state or closed while it has Exadata Snapshots associated with it. If the test master needs to be refreshed, all Exadata Snapshots dependent on the test master must be dropped and re-created.
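
For example, for a full-database test master, a minimal sequence to bring it into the required READ ONLY state uses standard SQL*Plus commands:

SQL> STARTUP MOUNT;
SQL> ALTER DATABASE OPEN READ ONLY;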

If different groups of Exadata Snapshot users have different refresh cycle requirements, you may need to maintain multiple test masters. The following figure shows three test masters, each with its own refresh schedule.

Figure 9-11 Three Developer Groups for PDB1, Each With a Different Refresh Cycle


9.6 Using an Oracle Data Guard Standby Database as the Test Master

If the test master is a complete database that needs to be refreshed regularly, Oracle recommends creating the test master database as an Oracle Data Guard physical standby dedicated to this purpose.

There are multiple benefits when using this approach:

  • Easier refresh

    Oracle Data Guard is a proven solution for synchronizing multiple physical replicas of a production database used for disaster recovery, read-only offload from production, backup offload, and test. This same functionality can be used to maintain a copy of the production database dedicated to serving as a test master database that is more easily refreshed on a periodic basis. The benefit of using an Oracle Data Guard physical standby increases as the size of the test master database and the number of times it must be refreshed increases. These benefits far outweigh the small incremental effort required to create an Oracle Data Guard replica compared to simply cloning a database from an Oracle Recovery Manager (RMAN) backup.

  • Minimal impact to primary

    During the time the Oracle Data Guard replica is being used as the test master database, Oracle Data Guard redo transport and apply are disabled. There is zero impact to the production database. When it is time to refresh, only the deltas generated since the Oracle Data Guard replica was converted to a test master database are taken from the production database and used to resynchronize the test master database.

    Note:

    Since transport and apply for this Oracle Data Guard replica are stopped while it functions as a test master, it should not be used for disaster recovery or for any purpose other than as a test master. If you are already using Oracle Data Guard for high availability or disaster protection, Oracle recommends creating one or more additional Oracle Data Guard replicas to use as test master databases for Exadata Snapshot databases.
  • Easy to scrub prior to creating snapshot clones

    Oracle Data Guard makes it easy to modify the test master database before making it available to create Exadata Snapshots. For example, an Oracle Data Guard replica can be opened read-write and data can be masked or scrubbed prior to creating Exadata Snapshots. Later, when testing is complete, the test master database can be converted back into an Oracle Data Guard replica discarding any modifications made to the original copy and refreshing it using only the deltas from the production database. Note that after refreshing the Oracle Data Guard replica, you need to re-scrub the database before you can use it as a test master again.

    By contrast, if the test master is created from an RMAN backup and you mask or scrub the data, then when you need to refresh the test master, you must create another backup to serve as the new test master and scrub the data again to make it current.

9.7 Managing Exadata Snapshots

To create and manage Exadata Snapshots, perform the procedures described in the following topics:

9.7.1 Creating Sparse Grid Disks

When creating sparse grid disks, you need to specify the physical size and the virtual size.

9.7.1.1 Calculating the Physical Size for Grid Disks

You can use the following formula to get a rough estimate of the total physical space to set aside for a sparse ASM disk group:

Total physical space =
   (SUM(size of all test masters in the sparse ASM disk group) +
    SUM(approximate size of all updates to the snapshot databases))
   * ASM Redundancy

In the formula above, ASM redundancy takes into account ASM mirroring of extents. Exadata requires ASM redundancy set to either normal redundancy (double mirror the extents) or high redundancy (triple mirror the extents). If the sparse ASM disk group will use normal redundancy, expect to double the space used. If using high redundancy, expect to triple the space used.

For example, if you want 2 test masters in the sparse ASM disk group created with normal redundancy with a combined total space of 500 GB (250 GB each) and each test master will have 5 Exadata snapshots with the expectation that each snapshot will modify 20% of the blocks, then the total physical space that will be needed can be calculated as follows:

Space for 2 test masters:   2 * 250 GB =                                500 GB
Space for 5 snapshots per test master, for a total of 10 snapshots:
    10 * 250 GB * 20% =                                                 500 GB
Subtotal                                                               1000 GB
Normal redundancy: 2 * 1000 GB =                                       2000 GB

Divide this value by the number of disks to determine the size parameter for each disk. ASM grid disks should be allocated on 16 MB boundaries. If the size parameter in MB for each grid disk is not evenly divisible by 16, adjust up to a 16 MB boundary.

Note that you should set aside additional space to use for multiple projects and through multiple iterations.

Also to accommodate any disk rebalancing operations, you should add a 15% space cushion on top of the space used for snapshots and test masters.

9.7.1.2 Calculating the Virtual Size for Grid Disks

You can use the following formula to get a rough estimate of the virtual size to assign for a sparse ASM disk group:

Virtual size required for sparse disks =
   (SUM(full virtual size of all Exadata snapshots) + Physical space allocated)
   * ASM Redundancy

To continue with the example from the previous section, you have 10 Exadata snapshots. If they were full copies of the test master, they would be 250 GB each.

The following shows the calculation for the total virtual space:

Full size for 5 snapshots per test master, for a total of 10 snapshots:
    10 * 250 GB =                                                      2500 GB
Size of the 2 test masters: 2 * 250 GB =                                500 GB
Subtotal                                                               3000 GB
Normal redundancy: 2 * 3000 GB =                                       6000 GB

Divide this value by the number of disks to determine the virtualsize parameter for each disk. The virtual size for each grid disk should be allocated on 16 MB boundaries. If the virtualsize parameter in MB for each grid disk is not evenly divisible by 16, adjust up to a 16 MB boundary.

Note that you should set aside additional space to use for multiple projects and through multiple iterations.
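
To make the per-disk arithmetic in the two worked examples concrete, the following sketch computes the size and virtualsize values for each grid disk, rounded up to a 16 MB boundary. The disk count of 36 is a hypothetical example; substitute your own number of cell disks and your own totals.

SQL> SELECT CEIL((2000 * 1024 / 36) / 16) * 16 AS size_mb,
            CEIL((6000 * 1024 / 36) / 16) * 16 AS virtualsize_mb
     FROM dual;

   SIZE_MB VIRTUALSIZE_MB
---------- --------------
     56896         170672

With these assumptions, each grid disk would be created with a physical size of 56896 MB and a virtual size of 170672 MB.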

9.7.1.3 Creating a Sparse Grid Disk

To create the sparse ASM grid disk, log in to each cell using CellCLI (or run the command on all cells with dcli, as shown below) and run a command similar to the following, changing size values where appropriate:

CellCLI> create griddisk all harddisk prefix=SPARSE, size=56G, virtualsize=560G

This creates a grid disk of physical size 56 GB but presents to ASM as a 560 GB grid disk. The size parameter should match the actual physical size of the ASM grid disk while the virtualsize parameter should be at least the physical size of the ASM grid disk.
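
To run the same command across all storage servers at once, you can use the dcli utility from a database server. The following is a sketch; the cell_group file (listing all storage servers) and the celladmin user are assumptions for illustration:

$ dcli -g cell_group -l celladmin "cellcli -e create griddisk all harddisk prefix=SPARSE, size=56G, virtualsize=560G"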

Attributes for the ASM grid disk created above would look like the following from the "LIST GRIDDISK DETAIL" command:

CellCLI> LIST GRIDDISK DETAIL
size:        56G
sparse:      TRUE
virtualSize: 560G

size displays the actual physical size of the grid disk.

sparse has a value of TRUE.

virtualSize displays the virtual size of the grid disk.

9.7.2 Creating an ASM Disk Group for the Sparse Grid Disks

After you have created the sparse grid disks, you create an ASM disk group to enable those disks to be accessible to a database on the cluster. To create a disk group, log in to an ASM instance using SQL*Plus as sysasm and run a command similar to the following:

SQL> create diskgroup SPARSE high redundancy disk 'o/*/SPARSE_*' attribute
'compatible.asm'='12.1.0.2', 
'compatible.rdbms'='12.1.0.2', 
'au_size'='4M', 
'cell.smart_scan_capable'='true', 
'cell.sparse_dg'='allsparse', 
'appliance.mode' = 'TRUE';

compatible.asm must be set to 12.1.0.2 or higher.

compatible.rdbms must be set to 12.1.0.2 or higher.

cell.sparse_dg must be set to "allsparse". This identifies the disk group to ASM as being made up of sparse grid disks.

appliance.mode must be set to true.
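
Note that CREATE DISKGROUP mounts the disk group only on the Oracle ASM instance where the command runs. On the remaining Oracle ASM instances in the cluster, mount the new disk group manually, for example:

SQL> ALTER DISKGROUP SPARSE MOUNT;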

 

9.7.3 Setting Up the Test Master

You can create the test master using one of the two methods described in the following topics.

After creating the test master, perform the tasks described in Setting the Ownership of the Test Master Data Files.

9.7.3.1 Create a New Test Master - Full Clone on a Disk Group with ASM ACL Enabled

You can create a full clone of your database using RMAN backup/restore, data pump, or any of the other methods typically used to create a full clone of a database.

You should place the clone in a regular (non-sparse) Oracle ASM disk group.

After creating the full clone, make the data files read-only to help prevent accidental overwrite.

For example:

SQL> ALTER DISKGROUP DATA SET PERMISSION OWNER=READ ONLY, GROUP=READ ONLY, OTHER=NONE FOR FILE 'FILENAME';
9.7.3.2 Converting an Existing Full Clone or Standby Database to a Test Master

If you already have a full clone or a standby database that you want to repurpose as a test master, then you can convert that database to a test master.

Standby databases cannot be running redo apply while serving as a test master.
  1. If you are using an Oracle Data Guard standby database, perform the following steps:
    1. For initial creation of the Oracle Data Guard replica, use the steps outlined in My Oracle Support note 1617946.1.

      The Data Guard copy must have enough redo applied that it can be opened in a READ ONLY state.

    2. If the test master is a physical standby database and you need to make any modifications to the test master, for example, deleting or masking sensitive data, then perform the following steps:
      1. Convert the standby database into a snapshot standby.

        Note:

        An Oracle Data Guard snapshot standby is different from an Oracle Exadata snapshot. An Oracle Data Guard snapshot standby is a complete copy of the source database that is open read-write. Conversion to an Oracle Data Guard snapshot standby is a simple operation using a single command. Oracle Data Guard snapshot standby facilitates making modifications to the test master and refreshing it for subsequent rounds of testing. See Oracle Data Guard Concepts and Administration (referenced at the end of this topic) for more information on Oracle Data Guard snapshot standby databases.
      2. Modify the standby database as required.
    3. When the standby database is at a consistent state and can be opened in READ ONLY mode, stop log transport to the standby and disable redo apply on the standby.
      DGMGRL> edit database TESTMASTER set property logshipping=OFF;
      Property "logshipping" updated

      If you have not converted the physical standby database into a snapshot standby, then stop redo apply.

      DGMGRL> edit database TESTMASTER set state=APPLY-OFF;
      Succeeded
      
  2. If access control is not already enabled on the disk group that contains the test master's data files, then enable access control on the disk group.

    The disk group must be on Oracle Exadata storage servers.

    SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'ACCESS_CONTROL.ENABLED' = 'TRUE';
    
  3. Grant ownership of all the data files to the appropriate operating system user, as described in Setting the Ownership of the Test Master Data Files.
  4. Remove write permissions on all the data files to help prevent accidental overwrite.

    SQL commands in an Oracle ASM instance only allow you to set file permissions to read only. You cannot remove write permissions in SQL.

    SQL> ALTER DISKGROUP DATA set permission owner=read ONLY, group=read ONLY, other=none for file 'FILENAME';
    

    This allows snapshots to be created and owned by users other than the owner of the base files.

9.7.3.3 Setting the Ownership of the Test Master Data Files

After you have cloned the database to create a test master database, configure permissions on the disk group and data files.

Set an operating system user as the owner of the disk group, and make the operating system user the owner of the test master's data files.

You can do this by running SQL commands manually in SQL*Plus, or by running a script:

9.7.3.3.1 Running Commands Manually

You can use SQL*Plus to manually run the commands to set the ownership of the test master data files.

The following commands are run in SQL*Plus.
  1. If the operating system user you are granting access to is not added as a user on the disk group, then add the user.

    Note:

    When enabling access control, all software owners that are running databases must be added as a user to the disk group.

    For example, to add the user SCOTT as an owner of the DATA disk group, use the following command:

    SQL> ALTER DISKGROUP DATA ADD USER 'scott';
    
  2. Make the operating system user the owner of the test master's data files:
    SQL> ALTER DISKGROUP DATA SET OWNERSHIP OWNER='scott' FOR FILE
      '+DATA/TESTMASTER/DATAFILE/system.257.865863315';
    
    SQL> ALTER DISKGROUP DATA SET OWNERSHIP OWNER='scott' FOR FILE
      '+DATA/TESTMASTER/DATAFILE/sysaux.258.865863317';
    
    SQL> ALTER DISKGROUP DATA SET OWNERSHIP OWNER='scott' FOR FILE 
      '+DATA/TESTMASTER/DATAFILE/sysext.259.865863317';
    
    SQL> ALTER DISKGROUP DATA SET OWNERSHIP OWNER='scott' FOR FILE 
      '+DATA/TESTMASTER/DATAFILE/tbs_1.256.865863315';
    
9.7.3.3.2 Running from a Script

You can also set the ownership of the test master data files using a SQL script.

The following procedure is equivalent to the commands in the previous topic, but it queries V$DATAFILE for the filenames:

  1. Add an operating system user as owner of the disk group.
    SQL> ALTER DISKGROUP DATA ADD USER 'scott';
    
  2. Generate a script called set_owner.sql to set the owner of the test master's data files.
    • If the test master is a full database, run the following in the test master database:

      set newpage 0
      set linesize 999
      set pagesize 0
      set feedback off
      set heading off
      set echo off
      set space 0
      set tab off
      set trimspool on
      spool set_owner.sql
      select 'ALTER DISKGROUP DATA set ownership owner='||''''||'scott'||''''||' for file '||''''||name||''''||';' from v$datafile;
      exit
      
    • If the test master is a PDB, run the following in the CDB$ROOT of the test master PDB:

      In the select statement below, the example assumes the test master PDB has a con_id of 10.

      set newpage 0
      set linesize 999
      set pagesize 0
      set feedback off
      set heading off
      set echo off
      set space 0
      set tab off
      set trimspool on
      spool set_owner.sql
      select 'ALTER DISKGROUP DATA set ownership owner='||''''||'scott'||''''||' for file '||''''||name||''''||';' -
      from v$datafile where con_id=10;
      exit
      
  3. Remove extra lines in set_owner.sql.
    sed -i '/SQL/d' set_owner.sql
    
  4. Run the script in the ASM instance.
    SQL> @set_owner
    

9.7.4 Creating Snapshots

You can create an Exadata Snapshot of a pluggable database (PDB) or a complete database.

9.7.4.1 Creating a Snapshot of a Pluggable Database

Creating an Exadata snapshot of a pluggable database (PDB) is the simplest method for creating a snapshot because it requires no additional manual steps. Two new clauses in the CREATE PLUGGABLE DATABASE statement identify the PDB as an Exadata snapshot.

Creating individual Exadata snapshot PDBs is best used when creating snapshots for a smaller number of PDBs within a given CDB. The following figure shows a high-level example of a typical lifecycle for a PDB with two PDB snapshots.

Figure 9-12 Lifecycle of Exadata Snapshots Using PDBs


One of the benefits of Oracle Multitenant and PDBs is the ability to easily clone an existing PDB to create a test master and move it from one CDB to another. Oracle recommends that you clone your source PDB to create the test master and then migrate it to your test environment where you can perform any data scrubbing that may be needed. Once complete, the test master PDB can be used to create numerous Exadata snapshot PDBs.

An Exadata snapshot PDB is created as a PDB in the CDB. Like all other PDBs, an Exadata snapshot PDB is subject to the general limit for the maximum number of PDBs in a single CDB. For Oracle Database 12c release 1 (12.1), the limit is 252 PDBs in a single CDB. For Oracle Database 12c release 2 (12.2) and later, the limit is 4096 PDBs in a single CDB.

When creating an Exadata snapshot PDB, the command changes file permissions on the test master PDB, marking the files as READONLY in Oracle ASM. Consequently, if you want to delete the test master PDB, you must first drop all the snapshot copies and change the test master PDB file permissions back to READ WRITE.

Note:

If you drop the test master PDB before changing the file permissions to READ WRITE, the DROP PLUGGABLE DATABASE command completes without error. However, the underlying database files remain, and an entry is written to the database alert log. In this case, you must manually delete the files in Oracle ASM to free up the space occupied by the files.
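
The permission change back to READ WRITE is made in the Oracle ASM instance using the same ALTER DISKGROUP ... SET PERMISSION syntax shown elsewhere in this chapter. A sketch, with an illustrative file name:

SQL> ALTER DISKGROUP DATA SET PERMISSION OWNER=READ WRITE, GROUP=READ WRITE,
     OTHER=NONE FOR FILE '+DATA/CDB1/DATAFILE/system.257.865863315';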

After creating the test master PDB, perform the following steps to create an Exadata snapshot PDB:

  1. In SQL*Plus, connect to the CDB root container (cdb$root).
  2. Close the test master PDB in all instances.
    SQL> alter pluggable database PDB1TM1 close instances=all;
  3. Open the test master PDB in the local instance in read only mode.
    SQL> alter pluggable database PDB1TM1 open read only;
  4. Create an Exadata snapshot PDB of the test master.
    SQL> create pluggable database PDB1S1 from PDB1TM1 tempfile reuse create_file_dest='+SPARSE' snapshot copy;

    The create_file_dest argument must specify the name of a sparse disk group. The snapshot copy clause creates the PDB as a snapshot rather than a full PDB clone.

    If you need to create more PDBs than can fit in a single CDB, you can create the Exadata snapshot PDB in another CDB by creating a remote clone using a database link. For example:

    SQL> create pluggable database PDB1S1 from PDB1TM1@cdb1_dblink tempfile reuse create_file_dest='+SPARSE' snapshot copy;
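
    After the statement completes, the snapshot PDB behaves like any other PDB. For example, you can open it and confirm that its data files were created in the sparse disk group (a sketch using standard commands):

    SQL> alter pluggable database PDB1S1 open;
    SQL> select name from v$datafile
         where con_id = (select con_id from v$pdbs where name = 'PDB1S1');
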
9.7.4.2 Creating a Snapshot of a Full Database

You can create an Exadata snapshot of a complete database (CDB or non-CDB).

The following figure shows the lifecycle of an Exadata test master database and snapshot databases where the test master is based on an Oracle Data Guard replica.

Figure 9-13 Lifecycle of Test Master and Snapshot Databases


Note that the test master database cannot be used as a failover target to provide high availability or disaster recovery (a Data Guard configuration may have multiple replicas that can each serve different purposes). Similar to test master PDBs, test master databases cannot be modified while Exadata snapshots exist against them.

The test master database cannot be a read-only physical standby database that is in recovery mode (for example, Active Data Guard in Real Time Apply).

The test master database and its Exadata snapshots must be in the same Oracle ASM cluster environment.

Note:

If the test master is an Oracle RAC database, you have to do one of the following:

  • Create redo logs for all threads in the CREATE CONTROLFILE statement for the sparse clone, OR
  • Specify Oracle Managed Files for the redo logs using the ONLINE_LOG_CREATE_DEST_1 initialization parameter in the SPFILE of the sparse clone to have the redo logs created automatically.
  1. In the test master database, create a sample control file script to use for your Exadata snapshot databases by backing up the existing control file to trace.

    Connect to the test master database via SQL*Plus as SYSDBA and do the following:

    1. Determine the name and location of the trace file to be generated by your session.

      For example:

      SQL> SELECT value FROM v$diag_info WHERE name = 'Default Trace File';
      
      VALUE
      ---------------------------------------------------------------------------------
      /u01/app/oracle/diag/rdbms/TESTMASTER/TESTMASTER1/trace/TESTMASTER1_ora_26756.trc
    2. Run the BACKUP CONTROLFILE TO TRACE command to place the CREATE CONTROLFILE command into the trace file.
      SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
    3. Retrieve the file shown for the Default Trace File.
  2. In the test master database, determine the existing file names for the rename that will happen in step 11.

    For example, log into SQL*Plus as SYSDBA and run the following:

    SET newpage 0
    SET linesize 999
    SET pagesize 0
    SET feedback off
    SET heading off
    SET echo off
    SET space 0
    SET tab off
    SET trimspool on
    SPOOL rename_files.sql
    SELECT 'EXECUTE dbms_dnfs.clonedb_renamefile -
    ('||''''||name||''''||','||''''||REPLACE(REPLACE(REPLACE(name,'.','_'),-
    'TESTMASTER','SNAPTEST'),'+DATA','+SPARSE')||''''||');' FROM v$datafile;
    EXIT

    The example script builds a file named rename_files.sql that contains statements for each data file similar to the following:

    EXECUTE dbms_dnfs.clonedb_renamefile (
    '+DATA/TESTMASTER/DATAFILE/system.257.865863315',
    '+SPARSE/SNAPTEST/DATAFILE/system_257_865863315');
    

    In the example script, the REPLACE function:

    • Replaces periods with underscores in the original file name

    • Replaces the original database name of TESTMASTER with SNAPTEST

    • Replaces the original disk group name of +DATA with +SPARSE

    If you modify the example to suit your environment, ensure that you maintain consistency throughout the rest of the procedure.

  3. Create an init.ora file with the contents of the test master SPFILE.
    SQL> CREATE PFILE = 'init_TestMaster.ora' FROM SPFILE;
  4. Shut down the test master.
    SQL> shutdown;
  5. Create an init.ora file for the Exadata snapshot database.

    You can use the init.ora file of the test master as a template, but make sure to change the db_name and control_files entries.

    For this procedure, the init.ora file for the Exadata snapshot database is referenced as snap_init.ora in commands and examples.

    $ cp init_TestMaster.ora snap_init.ora

    Modify snap_init.ora with the new database name, new control file name, and audit file destination, for example:

    db_name = SNAPTEST
    control_files = '+DATA/SNAPTEST/control1.f'
    audit_file_dest=/u01/app/oracle/admin/snaptest/adump

    You can save the modified snap_init.ora file as a template for creating additional snapshot copies of the test master.

  6. Create a script to create a control file for the Exadata snapshot database.
    You will use this script later in step 10.
    1. Examine the trace file generated in step 1 and locate the CREATE CONTROLFILE command that includes the RESETLOGS clause.

      The required command is located in the trace file immediately after the following comments:

      --     Set #2. RESETLOGS case
      -- 
      -- The following commands will create a new control file and use it
      -- to open the database.
      -- Data used by Recovery Manager will be lost.
      -- The contents of online logs will be lost and all backups will
      -- be invalidated. Use this only if online logs are damaged.
    2. Copy the CREATE CONTROLFILE command into a new script file.

      Copy only the complete CREATE CONTROLFILE command and discard all surrounding comments and commands.

      For example, the script file may be named crt_ctlfile.sql.

    3. Modify the CREATE CONTROLFILE command to create a control file for the Exadata snapshot database.

      The control file should be created with the Exadata snapshot database name, new log file names, and the data file names of the test master. The new log files can be in any disk group that has enough space, but they should not be created in the sparse Oracle ASM disk group.

      The following shows an example CREATE CONTROLFILE command. In the example, SNAPTEST is the Exadata snapshot database. The LOGFILE lines specify the new log file locations, and the DATAFILE lines specify the data file locations for the test master database.

      CREATE CONTROLFILE REUSE SET DATABASE SNAPTEST RESETLOGS ARCHIVELOG
            MAXLOGFILES 32 
            MAXLOGMEMBERS 2 
            MAXINSTANCES 1 
            MAXLOGHISTORY 908 
        LOGFILE 
            GROUP 1 '+DATA/SNAPTEST/t_log1.f' SIZE 100M BLOCKSIZE 512,
            GROUP 2 '+DATA/SNAPTEST/t_log2.f' SIZE 100M BLOCKSIZE 512
        DATAFILE
            '+DATA/TESTMASTER/DATAFILE/system.257.865863315',
            '+DATA/TESTMASTER/DATAFILE/sysaux.258.865863317',
            '+DATA/TESTMASTER/DATAFILE/sysext.259.865863317',
            '+DATA/TESTMASTER/DATAFILE/tbs_1.256.865863315'
        CHARACTER SET WE8DEC; 
      

    You can save the modified script file as a template for creating additional snapshot copies of the test master.

  7. Create the audit_file_dest directory on all nodes on which the snapshot will be running.
    $ mkdir -p /u01/app/oracle/admin/snaptest/adump
  8. Create the directories in Oracle ASM for the snapshot data files.

    For example:

    $ asmcmd -p 
    ASMCMD > cd +SPARSE 
    ASMCMD [+sparse] > mkdir SNAPTEST 
    ASMCMD [+sparse] > cd SNAPTEST 
    ASMCMD [+sparse/snaptest] > mkdir DATAFILE

    If the test master is a container database (CDB), create additional data file sub-directories for each pluggable database (PDB). Use the globally unique identifier (GUID) for each PDB, which is available in the V$PDBS view.

    For example:

    $ sqlplus / as sysdba
    SQL> ALTER SESSION set container=PDB1;
    SQL> SELECT guid FROM v$pdbs;
    
                                GUID
    --------------------------------
    A839E00B5E9E7820E053412E850A5F18
    $ asmcmd -p 
    ASMCMD > cd +SPARSE 
    ASMCMD > cd +SPARSE/SNAPTEST 
    ASMCMD [+sparse/snaptest] > mkdir A839E00B5E9E7820E053412E850A5F18
    ASMCMD [+sparse/snaptest] > cd A839E00B5E9E7820E053412E850A5F18
    ASMCMD [+sparse/snaptest/A839E00B5E9E7820E053412E850A5F18] > mkdir DATAFILE
  9. Start a database instance pointing to the Exadata snapshot database init.ora file (snap_init.ora) using the following commands:
    $ sqlplus / as sysdba
    SQL> startup nomount pfile=snap_init.ora
    
  10. Create the Exadata snapshot control file using the script created in step 6.

    In the following example the script is named crt_ctlfile.sql.

    SQL> @crt_ctlfile
    
  11. Run the script you modified in step 2.

    All the files must be renamed prior to opening the Exadata snapshot database.

    Connect using SQL*Plus as SYSDBA to the Exadata snapshot database and run the following command:

    SQL> @rename_files
    

    This script modifies the permissions of the test master database files stored in Oracle ASM, marking them as READONLY.

    The dbms_dnfs.clonedb_renamefile procedure, which is called by rename_files.sql, sets up the parent-child relationship between the test master database and the snapshot database, and renames the file names in the snapshot database's control file.

  12. Open the Exadata snapshot database with the RESETLOGS option:
    SQL> ALTER DATABASE OPEN RESETLOGS;
    
  13. Confirm that the Exadata snapshot files are child files of the test master database. Connect using SQL*Plus as SYSDBA to the Exadata snapshot, and run the following command:
    SQL> SELECT filenumber num, clonefilename child, snapshotfilename parent
    FROM V$CLONEDFILE;
    

    The following is an example of the output from the query:

     NUM CHILD                                           PARENT
    ---- ----------------------------------------------- -----------------------------------------------
       1 +SPARSE/SNAPTEST/DATAFILE/system_257_865863315  +DATA/TESTMASTER/DATAFILE/system.257.865863315
       2 +SPARSE/SNAPTEST/DATAFILE/sysaux_258_865863317  +DATA/TESTMASTER/DATAFILE/sysaux.258.865863317
       3 +SPARSE/SNAPTEST/DATAFILE/sysext_259_865863317  +DATA/TESTMASTER/DATAFILE/sysext.259.865863317
       4 +SPARSE/SNAPTEST/DATAFILE/tbs_1_256_865863315   +DATA/TESTMASTER/DATAFILE/tbs_1.256.865863315
  14. Log in using SQL*Plus to the Exadata snapshot database, and add temp files to the TEMP tablespace. This is a full size temp file, not a sparse temp file.
    SQL> ALTER TABLESPACE temp ADD TEMPFILE '+DATA' SIZE 10G;
    

    Additionally, for a snapshot of a complete CDB, connect to each PDB and add a temp file to the PDB-specific TEMP tablespace.

    For example:

    SQL> ALTER SESSION set container=PDB1;
    SQL> ALTER TABLESPACE temp ADD TEMPFILE '+DATA' SIZE 10G;

9.7.5 Refreshing the (Read-only) Test Master Database

To refresh a read-only test master database, it must be converted temporarily to a read-write test master.

To refresh the (read-only) test master database, the main steps are described in the following topics:

9.7.5.1 Drop the Snapshot Databases

Delete the Exadata snapshot databases that are children of the test master database you want to refresh.

You can delete the snapshot databases using RMAN.

  1. Connect to the Exadata snapshot database using RMAN with the Exadata snapshot as the target.

    Connect as a user that has the necessary privileges to start and drop a database, such as SYSBACKUP or SYSDBA.

    RMAN> CONNECT TARGET "user@snapdb_name AS SYSDBA"
  2. Start the snapshot database in restricted mode.
    RMAN> STARTUP FORCE MOUNT DBA
  3. Delete the snapshot database.
    RMAN> DROP DATABASE;

    Note:

    • Failure to drop an Exadata snapshot database has no impact on the state of a test master database. However, an Exadata snapshot can behave unpredictably if its test master database is dropped or refreshed.
    • If you have a snapshot hierarchy, you must delete all of the snapshots at all levels of the snapshot hierarchy that are children of the test master database you want to refresh.
9.7.5.2 Change the Permissions on the Test Master to Read-Write

To modify the data files from read-only to read-write, you can use SQL to generate a script of SQL commands.

Before starting this part of refreshing the read-only test master database, you must first drop the snapshot databases.
  1. After all Exadata snapshot databases have been deleted, start up the test master database in mount mode.
  2. Create a script to reset permissions on the data files for the test master database.

    Connect to the test master using SQL*Plus and run the following script to create a new SQL script that contains the commands to reset permissions on the data files belonging to the test master database.

    set newpage 0 
    set linesize 999 
    set pagesize 0 
    set feedback off 
    set heading off 
    set echo off 
    set space 0 
    set tab off 
    set trimspool on 
    spool change_perm.sql 
    select 'ALTER DISKGROUP DATA set permission owner=read write, 
    group=read write, other=none for file '||''''||name||''''||';' from v$datafile; 
    exit 
    
  3. Remove the extra lines from the generated script.

    Run the following sed command to remove extra lines from the change_perm.sql script that you created in the previous step. Run this command from an operating system prompt, in the directory that contains the change_perm.sql script.

    $ sed -i '/SQL/d' change_perm.sql
    
  4. Use the generated script to change the file permissions.

    Use SQL*Plus to connect to an Oracle ASM instance as a SYSASM user. Run the change_perm.sql script. This script changes the permissions of the test master's data files to make them writable.

    SQL> @change_perm
    
9.7.5.3 Convert the Test Master Database Back to a Data Guard Replica

Now that the data files for the test master are writable, convert the read-only test master database into an Oracle Data Guard replica.

If you had originally prepared the test master database using Oracle Data Guard snapshot standby, then convert it back to its original state as an Oracle Data Guard replica using the CONVERT command. This command discards any changes previously made to the replica to prepare it to be the test master. It also makes it possible to refresh the test master using just incremental changes from the source database instead of a complete restore from a current backup.
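
If the test master had been converted to an Oracle Data Guard snapshot standby, the conversion back is a single broker command, sketched here using the TESTMASTER database name from earlier examples:

DGMGRL> CONVERT DATABASE TESTMASTER TO PHYSICAL STANDBY;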

9.7.5.4 Update the Test Master Database

You have two options for refreshing the test master database:

  • Allow Oracle Data Guard to refresh the test master database

    If the Oracle Data Guard replica has been used as a test master database for only a short period of time and you have all the redo generated during this time in archive logs on disk at the source database, then you can enable redo shipping and start redo apply. The test master database will use regular Oracle Data Guard protocols to retrieve archive logs and apply the logs until it is caught up with the primary database. Once the Oracle Data Guard replica is as current as you need it to be, disable redo shipping, stop redo apply and repeat the test master and snapshot creation cycle described in Setting Up the Test Master and Creating Snapshots.

    This option has the benefit of being able to stop redo apply at some intermediate point rather than bringing the test master database totally current.

    To let Oracle Data Guard refresh the standby, enable log shipping to the standby and redo apply on the standby:

    DGMGRL> edit database TESTMASTER set property logshipping=ON; 
    Property "logshipping" updated 
    DGMGRL> edit database TESTMASTER set state=apply-on; 
    Succeeded
    
  • Use RMAN RECOVER...FROM SERVICE to roll forward the test master database

    If the Oracle Data Guard replica has been used as a test master database for a long period of time or if you no longer have the redo available on disk to enable Oracle Data Guard to automatically refresh the test master database, use RMAN to perform live incremental apply over the network.

    A major advantage to using this method is that no additional disk space is required. RMAN will bring changed blocks to the standby from the primary over the network and apply them directly. Also RMAN greatly simplifies the process by determining which blocks need to be retrieved based on the SCN of the data files on the test master. With this method you cannot recover to an intermediate point in time; the refresh will bring the test master database current with the primary. For more information on this method refer to Performing RMAN Recovery: Advanced Scenarios in Oracle Database Backup and Recovery User’s Guide.
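
At the core of the numbered procedure below is a single RMAN command that pulls changed blocks directly from the source database over the network. The following is a sketch, with SOURCEMASTER as the Oracle Net service name of the source database (as in the steps that follow):

RMAN> RECOVER DATABASE FROM SERVICE SOURCEMASTER NOREDO;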

To refresh the test master database using RMAN Network Incrementals:

  1. Prepare Oracle Net Services for the RMAN connections.

    These steps need to be performed only once.

    1. Create a listener.ora entry for the test master database (the Oracle Data Guard replica).

      The listener entry allows RMAN to connect to the target using the SID because the service is not started when the database is started in NOMOUNT mode. The following is an example of the entry:

      SID_LIST_LISTENER = 
        (SID_LIST =   
          (SID_DESC = 
            (SID_NAME = TESTMASTER1)
            (ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1)
          )
        )
      
    2. Reload the listener to pick up the changes to the listener.ora.
       $ lsnrctl reload listener 
    3. Create a TNS entry on the test master environment pointing to the SID of the local test master instance.
      The entry should use the local host name rather than the SCAN name to ensure the connection request goes to the correct host.
      TESTMASTER1 = 
        (DESCRIPTION = 
          (ADDRESS = (PROTOCOL = TCP)(HOST = standbydb01.example.com)(PORT = 1521))
          (CONNECT_DATA = 
            (SERVER = DEDICATED) 
            (SID = TESTMASTER1) 
            (UR=A) 
          ) 
        ) 
      
  2. Connect via RMAN to the test master database and save the CURRENT_SCN for later.
    This value will be used to determine whether any files created since the last refresh need to be restored from the source database.
    RMAN> select current_scn from v$database;
    CURRENT_SCN 
    ------------------ 
              17081990 
  3. List the names and group identifiers of the redo log files.
    The names of the online redo log files and standby redo log files of the Oracle Data Guard replica might be required in a later step.
    RMAN> SELECT type, group#, member FROM v$logfile;
  4. Refresh the standby control file of the Oracle Data Guard replica from the source database to make the control file current.
    1. Reconnect to the Oracle Data Guard replica as the RMAN target.
    2. Restart the target in NOMOUNT mode.
      RMAN> startup nomount force;
    3. Restore the standby control file by using the control file on the source database.

      The following example restores the control file on the Oracle Data Guard replica by using the database control file from SOURCEMASTER, the source database.

      RMAN> RESTORE STANDBY CONTROLFILE FROM SERVICE SOURCEMASTER;
      
    4. Mount the Oracle Data Guard replica.
      RMAN> ALTER DATABASE MOUNT;
  5. Update the names of the data files and the temp files in the standby control file.

    If you are not using an RMAN catalog, the names of files in the standby control file are the names that were used in the source database, not the standby.

    Use the CATALOG command and the SWITCH command to update all the data file names. The SWITCH command will be used after restoring any newly created files from the source database in step 7.

    In the following example, +DATA/TESTMASTER/DATAFILE/ is the location of the data files on the Oracle Data Guard replica. All data files must be stored in this location.

    RMAN> CATALOG START WITH '+DATA/TESTMASTER/DATAFILE/';
    
  6. Determine if new files were added that need to be restored from the source database.

    Use the CURRENT_SCN from step 2.

    RMAN> SELECT file# FROM v$datafile WHERE creation_change# >= 17081990;
     FILE#
    ---------- 
             9 
            10 
    
  7. If there are files returned by the previous query, restore those data files from the source database.

    Run an RMAN command block similar to the following using the list of FILE# values returned by the previous step. If no FILE# values were returned, then skip this step.

    RMAN> run{ 
    2> set newname for database to '+DATA';
    3> restore datafile 9,10 from service SOURCEMASTER;
    4> } 
    
  8. If not using an RMAN catalog, rename the data files in the standby control file.

    Switch to the copies cataloged in step 5.

    RMAN> SWITCH DATABASE TO COPY;
    
  9. Update the names of the online redo logs and standby redo logs in the standby control file.
    Use one of the following methods:
    • Use the ALTER DATABASE CLEAR command to clear the log files in all redo log groups of the Oracle Data Guard replica. The clear operation then recreates all the standby redo log files and the online redo log files.

      Note:

      Clearing log files is recommended only if the Oracle Data Guard replica does not have access to the online redo log files and standby redo log files of the source database. If the Oracle Data Guard replica has access to the redo log files of the source database and the redo log file names of the source database are OMF names, then the ALTER DATABASE command will delete log files on the source database.

      Also, the clearing of the log files will create new log files. Any existing log files are not used because the control file is not aware of those existing files. To conserve space, delete the existing log files from Oracle ASM prior to running the ALTER DATABASE CLEAR commands.

      The GROUP# column of the V$LOGFILE view queried in step 3 provides the redo log group identifiers of the log groups that must be cleared. Use a separate ALTER DATABASE CLEAR command to clear each redo log group.

      For example, the following command clears the redo log group with identifier 2.

      SQL> ALTER DATABASE CLEAR LOGFILE GROUP 2;
      
    • Use the ALTER DATABASE RENAME FILE command to rename the redo log files. Use a separate command to rename each log file listed in step 3.

      To rename log files, the STANDBY_FILE_MANAGEMENT initialization parameter must be set to MANUAL. Renaming log files is recommended when the number of online redo log files and standby redo log files is the same in the source database and the Oracle Data Guard replica.
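      The following is a minimal sketch of this method; the log file names are hypothetical and must be replaced with the names returned by the query in step 3:

      SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT='MANUAL';
      SQL> ALTER DATABASE RENAME FILE '+DATA/SOURCEMASTER/ONLINELOG/group_2.262.930475933'
           TO '+DATA/TESTMASTER/ONLINELOG/group_2.262.930475933';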

  10. Use RMAN RECOVER ... FROM SERVICE to roll forward the data files to the current state.
    No additional space is required for this operation. Note that this process can only bring the files totally current; it cannot bring the files to a previous point in time. Connect via RMAN to the Oracle Data Guard replica as the target using the TNS entry created in step 1. The service specified should point to the primary.
    RMAN> recover database noredo from service SOURCEMASTER;
    
  11. Enable redo shipping to the Oracle Data Guard replica and start redo apply.
    This is necessary to update the control file with the blocks applied as part of step 10.
    DGMGRL> edit database TESTMASTER set property logshipping=ON;
    Property "logshipping" updated
    DGMGRL> edit database TESTMASTER set state=apply-on;
    Succeeded.
    
  12. After redo has been applied, repeat the process you used to convert the Oracle Data Guard replica into a test master database and then create Exadata database snapshots.
    Remember to once again disable log shipping and redo apply at the standby.
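    For example, mirroring the enable commands shown in step 11:

    DGMGRL> edit database TESTMASTER set state=apply-off;
    DGMGRL> edit database TESTMASTER set property logshipping=OFF;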
9.7.5.5 Close the Test Master and Make All Test Master Data Files Read-Only

After the Test Master has been updated, you can revert it to a read-only test master.

Complete one of the tasks described in "Setting the Ownership of the Test Master Data Files".
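As an illustration only, one approach in the referenced section uses Oracle ASM File Access Control to remove write access from each data file; the following sketch assumes that approach, and the file name shown is hypothetical:

SQL> ALTER DISKGROUP DATA SET PERMISSION OWNER = read only, GROUP = read only,
     OTHER = none FOR FILE '+DATA/TESTMASTER/DATAFILE/system.270.1011530981';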

9.7.5.6 Re-create All Snapshots

After the Test Master has been updated and made read-only again, re-create all the snapshot databases to get the latest updates.

You can create an Exadata snapshot database of a pluggable database (PDB) or of a full database as described in "Creating Snapshots."

9.7.6 Creating a Snapshot Database from Another Snapshot Database

To create a snapshot from a snapshot:

  1. Create a first level snapshot. In the following example, the snapshot is called PDB1S1.

    create pluggable database PDB1S1
      from PDB1TM1
      create_file_dest='+SPARSE'
      snapshot copy;
  2. Open and close the PDB, so you can re-open it as read-only in the next step.

    alter pluggable database PDB1S1 open;
    alter pluggable database PDB1S1 close;
    
  3. Open the PDB in read-only mode so it can serve as a test master.

    alter pluggable database PDB1S1 open read only;
  4. Create a snapshot from the snapshot created in step 1. In the following example, the second level snapshot is called PDB1S1_A.

    create pluggable database PDB1S1_A
      from PDB1S1
      create_file_dest='+SPARSE'
      snapshot copy;
    
    alter pluggable database PDB1S1_A open;

9.7.7 Creating Sparse Test Masters from a Single Full Database Copy

You can create multiple sparse test masters from a single copy of a full database.

In this procedure, the source for the test masters is a full copy of a Data Guard physical standby database. This standby database should not be used as a target for switchover or failover; it should only be used as the source for the test masters defined in this process.

This process takes advantage of the hierarchical snapshot functionality to allow redo to be shipped and applied, keeping the standby database current with production, while also providing files to be used as the source for test masters used by Exadata storage snapshots. The physical standby database begins as a full copy of the primary database. When you are ready to create storage snapshots, sparse data files are created that point to the full database files, and the standby database instance then applies redo shipped from the primary to these sparse files. You can also open the sparse data files in Active Data Guard mode to provide read-only access to current data.

When additional snapshots are required at different points in time, you repeat the process of creating new sparse files on top of the previously created sparse files to apply redo and keep the data current. This allows a single full copy of the data files to serve as multiple test masters from different points in time. Also, you can create a new test master in a matter of minutes because you do not have to drop the existing snapshots.

Figure 9-14 Starting Configuration


Description of Figure 9-14 follows
Description of "Figure 9-14 Starting Configuration"

The following tasks assume that a physical standby database has already been created to be used as the source for the test masters. The database can be a container database (CDB) or non-container database (non-CDB).

9.7.7.1 Task 1: Prepare the Standby Database to Be Used as a Sparse Test Master

The existing files for the standby database are used to support snapshots. You create a series of sparse files pointing to the existing files of the standby, and redo received from the primary database is applied to these files. These sparse files allow the standby database to be used as a sparse test master while also being kept current with the primary database.

  1. Stop redo apply at the standby.

    To ensure that the structure of the database is at a quiesced state for creating the supporting files to build snapshots, redo apply should be turned off at the standby database.

    DGMGRL> edit database tm_standby set state='APPLY-OFF';
  2. Prepare the current standby database to be used as a test master.
    The system must be configured as follows:
    1. The disk group containing the database files must have the access_control.enabled attribute set to TRUE.

      As SYSASM, log into Oracle ASM using SQL*Plus and configure the disk group.

      For example:

      SQL> alter diskgroup DATA set attribute 'ACCESS_CONTROL.ENABLED'='TRUE';
    2. The operating system (OS) user of the database owner must be added as an explicit user of the disk group containing the database files.

      For example:

      SQL> alter diskgroup DATA add user 'scott';
    3. The database files that are to be used must have explicit permissions granted to the database owner OS user.

      You must perform this step for all OS users that will be creating snapshots using these files and for all files that will be referenced by the snapshots.

      The following script can be used to build SQL statements to configure the ownership settings. Run the script while connected to the standby database using SQL*Plus. If the standby is a container database (CDB) you must be connected to the cdb$root container:

      set newpage 0
      set linesize 999
      set pagesize 0
      set feedback off
      set heading off
      set echo off
      set space 0
      set tab off
      set trimspool on
      spool set_owner.sql
      select 'ALTER DISKGROUP DATA set ownership owner='||''''||'scott'||''''||' 
       for file '||''''||name||''''||';' from v$datafile;
      exit

      After running the previous script, log in to Oracle ASM using SQL*Plus as the SYSASM user, and run the commands in set_owner.sql.

      SQL> @set_owner
  3. Create a backup of the controlfile.

    All snapshots are created using the current state of the standby database, so they need to know all of the files that make up the standby. Create a binary backup of the control file to allow future creation of the CREATE CONTROLFILE script required for additional snapshots.

    SQL> ALTER DATABASE BACKUP CONTROLFILE TO '/home/oracle/snap_tm/control_tm.ctl';
  4. Create the rename_files.sql script to create the sparse data files for the snapshot.

    This script builds a series of RENAME statements to create the sparse data files to be used for the snapshot to apply redo received from the primary. Use a SQL statement similar to the following. Note that this statement uses the same directory structure as the original files, but the files will be created in the SPARSE disk group. The new file names will be created replacing ‘.’ (periods) with ‘_’ (underscores).

    set newpage 0
    set linesize 999
    set pagesize 0
    set feedback off
    set heading off
    set echo off
    set space 0
    set tab off
    set trimspool on
    spool rename_files.sql
    select 'EXECUTE dbms_dnfs.clonedb_renamefile ('||''''||name||''''||
    ','||''''||replace(replace(name,'.','_'),'DATA/','SPARSE/')||''''||
    ');' from v$datafile;
    exit

    This script produces output similar to the following:

    EXECUTE dbms_dnfs.clonedb_renamefile ('+DATA/TM_STANDBY/DATAFILE/system.515.9304
    75939','+SPARSE/TM_STANDBY/DATAFILE/system_515_930475939');
    
    EXECUTE dbms_dnfs.clonedb_renamefile ('+DATA/TM_STANDBY/429CE0836E0166ACE05382C8
    E50A1154/DATAFILE/system.567.930475945','+SPARSE/TM_STANDBY/429CE0836E0166ACE053
    82C8E50A1154/DATAFILE/system_567_930475945');
    
    EXECUTE dbms_dnfs.clonedb_renamefile ('+DATA/TM_STANDBY/DATAFILE/sysaux.571.9304
    75939','+SPARSE/TM_STANDBY/DATAFILE/sysaux_571_930475939');
    
    EXECUTE dbms_dnfs.clonedb_renamefile ('+DATA/TM_STANDBY/429CE0836E0166ACE05382C8
    E50A1154/DATAFILE/sysaux.516.930475945','+SPARSE/TM_STANDBY/429CE0836E0166ACE053
    82C8E50A1154/DATAFILE/sysaux_516_930475945');
    
    EXECUTE dbms_dnfs.clonedb_renamefile ('+DATA/TM_STANDBY/DATAFILE/undotbs1.497.93
    0475939','+SPARSE/TM_STANDBY/DATAFILE/undotbs1_497_930475939');
    
    EXECUTE dbms_dnfs.clonedb_renamefile ('+DATA/TM_STANDBY/429CE0836E0166ACE05382C8
    E50A1154/DATAFILE/undotbs1.564.930475945','+SPARSE/TM_STANDBY/429CE0836E0166ACE0
    5382C8E50A1154/DATAFILE/undotbs1_564_930475945');
  5. Using ASMCMD, create directories for all of the directories identified in the script rename_files.sql.

    When the dbms_dnfs.clonedb_renamefile function runs, it requires that all directory structures used for the files already exist in ASM.

    Use the output from the previous step to determine the structures required and then create them as needed. You can use ASMCMD to create the directories as in the following example:

    ASMCMD [+] > cd sparse
    ASMCMD [+sparse] > ls
    ASMCMD [+sparse] > mkdir tm_standby
    ASMCMD [+sparse] > cd tm_standby
    ASMCMD [+sparse/tm_standby] > mkdir datafile
    ASMCMD [+sparse/tm_standby] > mkdir 429DC0E1BCBD1B90E05382C8E50A8E80
    ASMCMD [+sparse/tm_standby] > mkdir 429CE0836E0166ACE05382C8E50A1154
    ASMCMD [+sparse/tm_standby] > cd 429DC0E1BCBD1B90E05382C8E50A8E80
    ASMCMD [+sparse/tm_standby/429DC0E1BCBD1B90E05382C8E50A8E80] > mkdir datafile
    ASMCMD [+sparse/tm_standby/429DC0E1BCBD1B90E05382C8E50A8E80] > cd ../429CE0836E0166ACE05382C8E50A1154
    ASMCMD [+sparse/tm_standby/429CE0836E0166ACE05382C8E50A1154] > mkdir datafile
9.7.7.2 Task 2: Configure the Sparse Test Master and Sparse Files on the Standby Site

In this task you convert the standby into a test master and create sparse files to receive and apply redo from the primary database.

Note:

During this process, you will not create a full snapshot database. Instead, you will use portions of the existing standby database and add sparse data files to the standby. The standby database controlfile will be modified to use the sparse files that are added. Going forward, the same standby instance is used, but redo apply will use the sparse files to store changes, leaving the original standby data files to serve as sparse test master files for snapshots.

The existing data files are used to support full database snapshots with data as of the point in time the process was run.

  1. Shutdown all instances of the TM_STANDBY database.
    $ srvctl stop db -d tm_standby -o abort
  2. Using SQL*Plus, start one of the TM_STANDBY instances in mount mode.
    SQL> startup mount
  3. Change the DB_CREATE_FILE_DEST setting in the standby instance to point to the SPARSE disk group.
    This ensures that all new data files that are created will reside in the SPARSE disk group. To perform this step, you must first disable standby file management.
    SQL> alter system set standby_file_management='MANUAL';
    SQL> alter system set db_create_file_dest='+SPARSE';
  4. Run the rename_files.sql script created in Task 1 against the standby database.
    Running the script renames the data files in the TM_STANDBY controlfile and creates the sparse files for the snapshot.
    SQL> @rename_files
  5. Re-enable standby file management.
    Completing this step ensures that all new data files added to the primary will automatically be created by the standby when it receives the redo to create the data file.
    SQL> alter system set standby_file_management='AUTO';
  6. Enable redo apply on TM_STANDBY.
    Completing this step applies redo to the snapshot, keeping it current and preparing for the next round of snapshot creation.
    DGMGRL> edit database tm_standby set state='APPLY-ON';
  7. Restart the remaining instances in mount mode.
    $ srvctl start db -d tm_standby -o mount

    Note:

    If you plan to clone a local pluggable database (PDB) at the primary database, then enable Active Data Guard mode at the standby. This requires an Active Data Guard license.

Figure 9-15 Configuration with Test Master Files and Sparse Files for Redo Apply

Description of Figure 9-15 follows
Description of "Figure 9-15 Configuration with Test Master Files and Sparse Files for Redo Apply"
9.7.7.3 Task 3: Create Full Database Snapshots Using the New Sparse Test Master
At this time you can create full snapshots against the original files of the standby database as described in Creating a Snapshot of a Full Database.

You must use the backup controlfile created in step 3 of Task 1: Prepare the Standby Database to Be Used as a Sparse Test Master to build the CREATE CONTROLFILE statement. To use the file, you can create a temporary database instance to mount the controlfile and run the 'backup controlfile to trace' command.

  1. Create a small PFILE file for the instance to use.

    At a minimum, the PFILE file should contain the following parameters:

    control_files='/home/oracle/snap_tm/control_tm.ctl'  # This should be the control file name created above
    db_name=primary             # This should be the db_name used in the Data Guard configuration
    db_unique_name=temp         # This should be a unique name for a database instance on this host
    sga_target=5g               # Provide enough memory to start the instance
    
  2. Set your environment to point to a unique ORACLE_SID.
    $ export ORACLE_SID=temp
  3. Using SQL*Plus, start the instance in mount mode using the PFILE created in Step 1.
    SQL> startup mount pfile='/home/oracle/snap_tm/pfile.ora'
  4. Build the create controlfile statement and the rename files script.
    Use steps 1 and 2 in Creating a Snapshot of a Full Database to build the CREATE CONTROLFILE statement and the rename files script. The rename files script created in step 4 of Task 1: Prepare the Standby Database to Be Used as a Sparse Test Master can be used, but you must modify the directory structure of the sparse files to be created.
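    For example, with the temporary instance mounted, a command similar to the following generates the CREATE CONTROLFILE script; the output path shown here is hypothetical:

    SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/home/oracle/snap_tm/create_cf.sql';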

Figure 9-16 Configuration After Creating Snapshots

Description of Figure 9-16 follows
Description of "Figure 9-16 Configuration After Creating Snapshots"
9.7.7.4 Task 4: Create a New Sparse Test Master Using a Previously Created Sparse Test Master

Create a new set of snapshots to provide a new test master and new sparse files to contain changes from the primary.

It is possible that at periodic intervals you will want to use the standby database to create additional snapshots without having to build a complete copy of the test master. You can repeat the process performed in the previous three tasks to do just that, taking advantage of the hierarchical snapshot functionality. The new test master is built on top of the latest existing snapshot that is applying redo. This snapshot becomes read-only and a new snapshot is built to continue the redo apply processing.

Do the following to configure the standby for the new test master timeline:

  1. Perform the steps from Task 1: Prepare the Standby Database to Be Used as a Sparse Test Master with the following changes:

    The procedure is generally the same; however, in this case the test master already uses sparse files.

    1. In Step 2, change all the commands that alter the DATA disk group to instead alter the SPARSE disk group.

      If the SPARSE disk group is already appropriately configured, you can skip the corresponding ALTER DISKGROUP ... SET ATTRIBUTE or ALTER DISKGROUP ... ADD USER command.

    2. In Step 4, supply a different name for the snapshot.

      You are creating a new snapshot, so the files need names different from those previously used. As a suggestion, you can append an identifier to the end of the file name to help associate it with the snapshot to be built. For example, if this was the original command:

      EXECUTE dbms_dnfs.clonedb_renamefile ('+SPARSE/TM_STANDBY/DATAFILE/system_515
      _930475939','+SPARSE/TM_STANDBY/DATAFILE/system_515_930475939');

      You can add an identifier to the end of the file name to create a unique file name, as shown here:

      EXECUTE dbms_dnfs.clonedb_renamefile ('+SPARSE/TM_STANDBY/DATAFILE/system_515
      _930475939','+SPARSE/TM_STANDBY/DATAFILE/system_515_930475939_Dec_15_16');

      This step must be repeated for each statement in the rename_files.sql script.

  2. Perform the steps as described in Task 2: Configure the Sparse Test Master and Sparse Files on the Standby Site.

Figure 9-17 Configuration After Repeating the Process


Description of Figure 9-17 follows
Description of "Figure 9-17 Configuration After Repeating the Process"

The process to create a new sparse test master can be repeated up to 9 times, which would create an environment 10 levels deep with the original standby data files and 9 hierarchical snapshots.  When repeating the process for the ninth time, do not create a new snapshot to receive the redo from the primary database.

Figure 9-18 Potential Future Configuration


Description of Figure 9-18 follows
Description of "Figure 9-18 Potential Future Configuration"

If you reach the maximum 10 levels, you have multiple options:

  • If you have enough space to maintain multiple copies of the standby database and snapshots, then create a new tree of hierarchical snapshots based on another standby database. The original standby data files and snapshots can remain as long as required.

  • If you do not have enough space to maintain multiple copies of the standby database and snapshots, then:

    1. Delete the snapshots and their associated data files, including any snapshots used as test masters.

    2. Refresh the standby.

    3. Create a new tree of hierarchical snapshots.

  • Create a new standby database on a different environment and create a new tree of hierarchical snapshots.

9.7.8 Creating Sparse Test Masters for PDBs

This procedure manually creates a hierarchical snapshot tree of sparse test masters for a pluggable database (PDB) in an Oracle Multitenant database.

The test master must be closed while making the daily reference snapshot. The downtime is very short (less than 5 minutes). You can use a replication mechanism, such as Oracle GoldenGate, to keep the sparse test master current with the production PDB. For more information about configuring Oracle GoldenGate with PDBs, see Configuring Oracle GoldenGate in a Multitenant Container Database in Oracle GoldenGate Oracle Installation and Setup Guide. The following example assumes you are using Oracle GoldenGate.

STEP 1: Create First Test Master PDB From the PROD PDB

This is a traditional PDB clone operation to instantiate the test master PDB. Once the clone completes, you configure Oracle GoldenGate to extract changes from the PRODPDB1 PDB in production and replicate these changes to the test master TMPDB1 PDB.

  1. Run the following commands on the PROD container database (CDB) root:

    PRODCDB> alter pluggable database prodpdb1 close;
    
    PRODCDB> alter pluggable database prodpdb1 open read only;
  2. Run the following commands from the test master CDB root:

    TMCDB> create database link PROD_DBLINK
    connect to system identified by password using 'PROD_CDB';
    
    TMCDB> create pluggable database TMPDB1
    from PRODPDB1@PROD_DBLINK;
    
    TMCDB> alter pluggable database TMPDB1 open;
    
  3. Configure Oracle GoldenGate so that the changes made at the PRODPDB1 PDB will be extracted, replicated, and applied to the TMPDB1 PDB. After configuring the extract and replicat processes and starting the extract process, open the PRODPDB1 PDB in read/write mode.

    Note:

    The PRODPDB1 PDB cannot be opened for changes until after Oracle GoldenGate has been configured and the extract process started.
    PRODCDB> alter pluggable database PRODPDB1 close;
    PRODCDB> alter pluggable database PRODPDB1 open;

At this point, you have a full copy of PRODPDB1 on the test master as TMPDB1, receiving all data changes made at PRODPDB1.

Note:

Oracle GoldenGate does not replicate data dictionary changes such as CREATE TABLESPACE or ADD DATAFILE. Only schema changes are replicated from PRODPDB1 to TMPDB1.

Figure 9-19 TMPDB1 Created From the PRODPDB1 Pluggable Database

Description of Figure 9-19 follows
Description of "Figure 9-19 TMPDB1 Created From the PRODPDB1 Pluggable Database"

Although TMPDB1 can be opened in read/write mode, you should leave it in read-only mode because the only changes it should receive are from PRODPDB1 through Oracle GoldenGate.

To create a snapshot from TMPDB1, the test master PDB must be opened in read-only mode. To provide a test master PDB from which you can create snapshots, and a test master PDB that is kept current with its source, you need two PDBs. The next step shows how you can accomplish this.

STEP 2: Create Daily Read-Only Snapshot and Move TMPDB1 PDB to a New Sparse Test Master PDB

This step creates a read-only snapshot PDB that acts as a test master. You can then create read/write snapshot PDBs from this read-only snapshot PDB every day. The main steps are:

  • Create a (daily) read-only snapshot PDB that you can make available to private read/write clients.

  • Create a new sparse TMPDB1 PDB pointing back to the read-only daily snapshot PDB. The new TMPDB1 PDB also accepts and applies changes from PRODPDB1.

Connect to the TMPDB1 PDB, then run the following commands:

TMCDB> alter session set container = CDB$ROOT;

# Stop the Oracle GoldenGate replicat process at the Test Master database.  This allows 
# all changes made at PRODPDB1 to continue to be extracted and then applied to  
# TMPDB1 when the replicat process is restarted.

# Close the test master PDB.
TMCDB> alter pluggable database TMPDB1 close;

# Write the test master PDB metadata to an XML file.
TMCDB> alter pluggable database TMPDB1 unplug into 
       '/home/oracle/snapshot/TMPDB1_monday.XML';

# Drop the test master PDB, but keep the data files.
TMCDB> drop pluggable database TMPDB1 keep datafiles;

# Create a TMPDB1_MONDAY PDB using the XML file you just created.
# Use the NOCOPY clause to reuse the original data files.

TMCDB> create pluggable database TMPDB1_MONDAY using 
       '/home/oracle/snapshot/TMPDB1_monday.XML' nocopy;

# Open the new TMPDB1_MONDAY PDB. The PDB must be opened
# once in read/write mode to complete the creation process.

TMCDB> alter pluggable database TMPDB1_MONDAY open;
TMCDB> alter pluggable database TMPDB1_MONDAY close;
TMCDB> alter pluggable database TMPDB1_MONDAY open read only;

# Create the new TMPDB1 PDB to receive changes from PRODPDB1. This PDB
# must have the same name as the original test master PDB to ensure no
# changes are required to the Oracle GoldenGate configuration.

TMCDB> create pluggable database TMPDB1 from TMPDB1_MONDAY
        create_file_dest='+SPARSE'
        snapshot copy;

# Open the new TMPDB1 PDB. The PDB must be opened once in read/write 
# mode to complete the PDB creation process.

TMCDB> alter pluggable database TMPDB1 open;
TMCDB> alter pluggable database TMPDB1 close;
TMCDB> alter pluggable database TMPDB1 open read only;


# Restart the Oracle GoldenGate replicat process to the new TMPDB1
# PDB. The Oracle GoldenGate replicat process now applies changes from  
# PRODPDB1 to the TMPDB1 snapshot and all changes are written to 
# sparse files.

The following figure shows the TMPDB1 created from TMPDB1_MONDAY. The original TMPDB1 has been renamed to TMPDB1_MONDAY as part of the DROP PLUGGABLE DATABASE/CREATE PLUGGABLE DATABASE steps listed above. The new TMPDB1 is a sparse snapshot pluggable database that, until any changes are made to TMPDB1, looks exactly like TMPDB1_MONDAY. Oracle GoldenGate applies redo to the new TMPDB1 snapshot without requiring any changes to the replicat configuration.

Figure 9-20 TMPDB1 Created From TMPDB1_MONDAY

Description of Figure 9-20 follows
Description of "Figure 9-20 TMPDB1 Created From TMPDB1_MONDAY"

STEP 3: Create Read/Write Snapshot From TMPDB1_MONDAY

You create the snapshots from TMPDB1_MONDAY, not from TMPDB1. This allows TMPDB1 to continue receiving and applying changes from PRODPDB1.

Connect to the TMPDB1_MONDAY PDB, then run the following commands:

TMCDB> alter session set container = CDB$ROOT;

TMCDB> create pluggable database TEST_MONDAY_JIM from TMPDB1_MONDAY
   create_file_dest='+SPARSE'
   snapshot copy;

TMCDB> alter pluggable database TEST_MONDAY_JIM open;

The following figure shows the TEST_MONDAY_JIM snapshot PDB created from TMPDB1_MONDAY. TEST_MONDAY_JIM uses TMPDB1_MONDAY as its parent, so all data in TEST_MONDAY_JIM is the same as the data in TMPDB1_MONDAY until changes are made to the snapshot PDB. Oracle GoldenGate continues to receive and apply redo to TMPDB1.

Figure 9-21 TEST_MONDAY_JIM Created From TMPDB1_MONDAY

Description of Figure 9-21 follows
Description of "Figure 9-21 TEST_MONDAY_JIM Created From TMPDB1_MONDAY"

When you need to create another test master and snapshot, you just need to repeat Step 2. For example, to create a test master on Tuesday, you can do the following:

Start a SQL*Plus session for the TMPDB1 PDB.

TMCDB> alter session set container = CDB$ROOT;

# Stop the Oracle GoldenGate replicat process from applying changes to
# TMPDB1

# Close the test master PDB
TMCDB> alter pluggable database TMPDB1 close;

# Write the test master PDB metadata to an XML file
TMCDB> alter pluggable database TMPDB1 unplug into 
'/home/oracle/snapshot/TMPDB1_tuesday.XML';

# Drop the test master PDB, but keep the data files
TMCDB> drop pluggable database TMPDB1 keep datafiles;

# Create a TMPDB1_TUESDAY PDB from the XML file
TMCDB> create pluggable database TMPDB1_TUESDAY using 
'/home/oracle/snapshot/TMPDB1_tuesday.XML' nocopy;

# Open the new TMPDB1_TUESDAY PDB
TMCDB> alter pluggable database TMPDB1_TUESDAY open;
TMCDB> alter pluggable database TMPDB1_TUESDAY close;
TMCDB> alter pluggable database TMPDB1_TUESDAY open read only;

# Create the new TMPDB1 PDB as a snapshot PDB
TMCDB> create pluggable database TMPDB1 from TMPDB1_TUESDAY
   create_file_dest='+SPARSE'
   snapshot copy;

# Open the TMPDB1 PDB
TMCDB> alter pluggable database TMPDB1 open;
TMCDB> alter pluggable database TMPDB1 close;
TMCDB> alter pluggable database TMPDB1 open read only;

# Restart the Oracle GoldenGate replicat process to apply changes to
# the new TMPDB1

You can now create read/write snapshot PDBs from TMPDB1_TUESDAY, similar to Step 3 above. As with full database sparse test masters, you can repeat this process up to 9 times in total before needing to either create a new TMPDB1 test master or drop and recreate the original TMPDB1 to begin building a new hierarchical snapshot tree.

9.7.9 Determining All Snapshots Associated with a Test Master

Use this query to discover all of the children associated with a test master.

Consider the following configuration for a test master with multiple children.



You can use a query of the associated SYSTEM data files for each database to list all the children within the same tree. The query selects just the SYSTEM data file for each database or PDB. The data file must exist for all clones and parents, and there should be only one data file for each. The START WITH clause provides a starting point of a file that is not a cloned file, which is the original test master parent.

Connect to the Oracle ASM instance and run this command as the SYSASM user.

SELECT clonefilename "Child", snapshotfilename "Parent"
FROM v$clonedfile
WHERE LOWER(snapshotfilename) LIKE '%system.%'  
START WITH snapshotfilename NOT IN (SELECT clonefilename FROM v$clonedfile)  
CONNECT BY LOWER(snapshotfilename) = PRIOR LOWER(clonefilename);

The results of this query for database-based snapshots would be similar to the following:

Child                                      
  Parent
-----------------------------------------------------------
  -----------------------------------------------------------------
+SPARSE/SNAP001/DATAFILE/SYSTEM.256.1011532891           
  +DATA/TESTMASTER/DATAFILE/system.270.1011530981 
+SPARSE/SNAP002/DATAFILE/SYSTEM.265.1011532969           
  +DATA/TESTMASTER/DATAFILE/system.270.1011530981
+SPARSE/SNAP1011/DATAFILE/SYSTEM.270.1011533005           
  +SPARSE/SNAP001/DATAFILE/system.256.1011532891
+SPARSE/SNAP1012/DATAFILE/SYSTEM.275.1011780925           
  +SPARSE/SNAP001/DATAFILE/system.256.1011532891
+SPARSE/SNAP2011/DATAFILE/SYSTEM.281.1011781103           
  +SPARSE/SNAP1011/DATAFILE/system.270.1011533005

If you created folders in Oracle ASM that contained the database name, as shown in the above result, then the database name in the CLONEFILENAME string is the snapshot, and the database name in the SNAPSHOTFILENAME string is the master for that snapshot.

The results of this query for PDB-based snapshots would be similar to the following:

CLONEFILENAME                                                           
  SNAPSHOTFILENAME
---------------------------------------------------------------------------------
  ---------------------------------------------------------------------------------
+SPARSEC1/CDB001/8BDBC355D43721F5E053412E850AB5D1/DATAFILE/SYSTEM.256.1011532891  
  +DATAC1/CDB001/8BDBC355D42D21F5E053412E850AB5D1/DATAFILE/system.270.1011530981 
+SPARSEC1/CDB001/8BDBC355D43E21F5E053412E850AB5D1/DATAFILE/SYSTEM.265.1011532969  
  +DATAC1/CDB001/8BDBC355D42D21F5E053412E850AB5D1/DATAFILE/system.270.1011530981 
+SPARSEC1/CDB001/8BDBC355D44021F5E053412E850AB5D1/DATAFILE/SYSTEM.270.1011533005   
  +SPARSEC1/CDB001/8BDBC355D43721F5E053412E850AB5D1/DATAFILE/system.256.1011532891 
+SPARSEC1/CDB001/8BDBC355D44821F5E053412E850AB5D1/DATAFILE/SYSTEM.275.1011780925   
  +SPARSEC1/CDB001/8BDBC355D43721F5E053412E850AB5D1/DATAFILE/system.256.1011532891 
+SPARSEC1/CDB001/8BDBC355D44D21F5E053412E850AB5D1/DATAFILE/SYSTEM.281.1011781103   
  +SPARSEC1/CDB001/8BDBC355D44021F5E053412E850AB5D1/DATAFILE/system.270.1011533005 

In this case, the folder name in Oracle ASM is the GUID associated with the PDB. To determine the name of each snapshot PDB and its master, you must do the following:

  1. Log in to the CDB that has the name shown in the results, for example, CDB001.

  2. Run a query against the CDB_PDBS view to translate the GUIDs into the PDB names, as shown below:

    SELECT pdb_name, guid FROM CDB_PDBS 
    WHERE guid IN ('8BDBC355D42D21F5E053412E850AB5D1','8BDBC355D43721F5E053412E850AB5D1',
    '8BDBC355D44821F5E053412E850AB5D1','8BDBC355D43E21F5E053412E850AB5D1', 
    '8BDBC355D44021F5E053412E850AB5D1','8BDBC355D44D21F5E053412E850AB5D1');
    
    PDB_NAME                GUID   
    ----------------------- -----------------------------------
    TESTMASTER              8BDBC355D42D21F5E053412E850AB5D1
    SNAP001                 8BDBC355D43721F5E053412E850AB5D1
    SNAP1012                8BDBC355D44821F5E053412E850AB5D1
    SNAP002                 8BDBC355D43E21F5E053412E850AB5D1
    SNAP1011                8BDBC355D44021F5E053412E850AB5D1
    SNAP2011                8BDBC355D44D21F5E053412E850AB5D1

Then use this information to determine the parent/child relationship among the PDBs in the original query results.

9.7.10 Doing a Sparse Copy

The ASM cp command copies a sparse file to a new destination. However, that operation copies the sparse file with all of its blocks instantiated from the parent. The sparse copy feature enables you to copy a sparse file while preserving its sparseness.

You can have multiple ASM instances running at the same time. If an operation involves a source or a destination on an ASM instance other than the one where the command runs, that instance is treated as a remote ASM instance. You can do a sparse copy on a local ASM instance, or between a local and a remote ASM instance. However, sparse copy does not work between two remote ASM instances.

To do a sparse copy, you use the new --sparse option in the existing ASM cp command. The syntax looks like the following:

ASMCMD> cp --sparse <src_sparse_file> <tgt_file>

A new ASM command called setsparseparent enables you to set the parent of a sparse file. If you do a sparse copy of a file to a sparse destination on a local ASM instance, its parent is set as part of the sparse copy operation. However, if the destination is on a remote ASM instance, you have to set its parent explicitly using the setsparseparent command.

The setsparseparent command requires sparse child file and parent file as parameters. It sets the parent of the sparse child file to the new parent file. The syntax looks like the following:

ASMCMD> setsparseparent <sparse_file> <parent_file>

The cp ASM command performs the following validations before doing a sparse copy operation. The operation is allowed only if it satisfies the following criteria:

  • The source file must exist and must be a sparse file.

  • If you specify multiple source sparse files, all of them must be on the same ASM instance.

  • Copying multiple sparse files on a remote ASM instance to a destination on a local ASM instance and vice versa is allowed provided all source files are on the same ASM instance.

  • The destination file should be backed by a sparse disk group. However, it can be a non-sparse file if the event “KFTST_KFPKG_CP_SPARSE” is set. This event is required to validate the sparse copy operation by merging and copying the files to a non-sparse destination.

  • Both source and destination cannot be on a remote ASM instance. However, either source or destination can be on a remote ASM instance.

  • If the destination is on a remote ASM instance, its file type cannot be validated and you have to ensure that it is backed by a sparse disk group. You also have to set the parent explicitly using the ASM setsparseparent command.

  • If the destination is a non-sparse file and you run the setsparseparent command, the command will fail because the child file should be sparse. This is a second-level validation if the destination is a non-sparse file.

The setsparseparent ASM command performs the following validations before it sets the parent. The operation is allowed only if it satisfies the following criteria:

  • The child file must exist and must be a sparse file.

  • The parent file must exist. It can be a sparse or a non-sparse file.

  • Parent and child files must be present on the same ASM instance.

Note:

You have to ensure that the files you specify in the setsparseparent ASM command have a valid parent-child relationship. The command cannot perform this check for files on remote ASM instances. If the files do not have a valid parent-child relationship, then data integrity and corruption issues are likely to occur.

Example 1: The following ASM command copies sparse file “TBS_1.264.908376549” to the destination “+SPARSEDG/child_1”.

ASMCMD> cp --sparse +SPARSEDG/MERGE/DATAFILE/TBS_1.264.908376549 +SPARSEDG/child_1

Example 2: The following ASM command sets parent “tbs_1.269.908374993” for the sparse file “remote_child_10”.

ASMCMD> setsparseparent +SPARSEDG/remote_child_10 +DATAFILE/DATAFILE/tbs_1.269.908374993

Example 3: The following command copies sparse child files child_1, child_2 and child_3 to the destination directory +SPARSEDG.

ASMCMD> cp --sparse +SPARSEDG/DATAFILE/child_1 +SPARSEDG/DATAFILE/child_2 +SPARSEDG/DATAFILE/child_3 +SPARSEDG/

9.8 Managing Sparse Griddisks

You can resize, recreate, or monitor the activity of sparse griddisks.

9.8.1 Resizing the Virtual Space

When V$ASM_DISKGROUP.FREE_MB or V$ASM_DISK.FREE_MB is running low, you need to increase the virtual address space.
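For example, the following illustrative query shows the remaining virtual free space; the disk group name SPARSE is an assumption consistent with the surrounding examples:

SQL> SELECT name, free_mb, total_mb FROM v$asm_diskgroup WHERE name = 'SPARSE';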

  1. To increase the size of the virtual space:
    1. Run the following command on the cells, specifying all the grid disks for the SPARSE disk group:
      CellCLI> alter griddisk SPARSE_CD_00_CELL01,SPARSE_CD_01_CELL01,....,SPARSE_CD_11_CELL01 virtualSize=newSize
      

      For example, on the first cell:

      CellCLI> alter griddisk SPARSE_CD_00_CELL01,SPARSE_CD_01_CELL01,SPARSE_CD_02_
      CELL01,SPARSE_CD_03_CELL01,SPARSE_CD_04_CELL01,SPARSE_CD_05_CELL01,SPARSE_CD_
      06_CELL01,SPARSE_CD_07_CELL01,SPARSE_CD_08_CELL01,SPARSE_CD_09_CELL01,SPARSE_
      CD_10_CELL01,SPARSE_CD_11_CELL01 virtualSize=12000G
      
      GridDisk SPARSE_CD_00_CELL01 successfully altered
      GridDisk SPARSE_CD_01_CELL01 successfully altered
      ...
      

      For example, on the next cell:

      CellCLI> alter griddisk SPARSE_CD_00_CELL02,SPARSE_CD_01_CELL02,SPARSE_CD_02_
      CELL02,SPARSE_CD_03_CELL02,SPARSE_CD_04_CELL02,SPARSE_CD_05_CELL02,SPARSE_CD_
      06_CELL02,SPARSE_CD_07_CELL02,SPARSE_CD_08_CELL02,SPARSE_CD_09_CELL02,SPARSE_
      CD_10_CELL02,SPARSE_CD_11_CELL02 virtualSize=12000G
      
      GridDisk SPARSE_CD_00_CELL02 successfully altered
      GridDisk SPARSE_CD_01_CELL02 successfully altered
      ...
      

      Note:

      You must change the size of the grid disks on all cells before making any changes in Oracle ASM.
    2. On an ASM instance, resize the disk group to this new size:
      SQL> alter diskgroup SPARSE resize all size newSize;

      For example:

      SQL> alter diskgroup SPARSE resize all size 12000G;
  2. To decrease the size of the virtual space:
    1. On an ASM instance, resize the disk group to this new size:
      SQL> alter diskgroup SPARSE resize all size newSize;
      

      For example:

      SQL> alter diskgroup SPARSE resize all size 8000G;
    2. Run the following command on the cells, specifying all the grid disks for the SPARSE disk group:
      CellCLI> alter griddisk SPARSE_CD_00_CELL01,SPARSE_CD_01_CELL01,....,SPARSE_CD_11_CELL01 virtualSize=newSize
      

      For example, on the first cell:

      CellCLI> alter griddisk SPARSE_CD_00_CELL01,SPARSE_CD_01_CELL01,SPARSE_CD_02_
      CELL01,SPARSE_CD_03_CELL01,SPARSE_CD_04_CELL01,SPARSE_CD_05_CELL01,SPARSE_CD_
      06_CELL01,SPARSE_CD_07_CELL01,SPARSE_CD_08_CELL01,SPARSE_CD_09_CELL01,SPARSE_
      CD_10_CELL01,SPARSE_CD_11_CELL01 virtualSize=8000G
      
      GridDisk SPARSE_CD_00_CELL01 successfully altered
      GridDisk SPARSE_CD_01_CELL01 successfully altered
      ...
      

      For example, on the next cell:

      CellCLI> alter griddisk SPARSE_CD_00_CELL02,SPARSE_CD_01_CELL02,SPARSE_CD_02_
      CELL02,SPARSE_CD_03_CELL02,SPARSE_CD_04_CELL02,SPARSE_CD_05_CELL02,SPARSE_CD_
      06_CELL02,SPARSE_CD_07_CELL02,SPARSE_CD_08_CELL02,SPARSE_CD_09_CELL02,SPARSE_
      CD_10_CELL02,SPARSE_CD_11_CELL02 virtualSize=8000G
      
      GridDisk SPARSE_CD_00_CELL02 successfully altered
      GridDisk SPARSE_CD_01_CELL02 successfully altered
      ...
      

9.8.2 Resizing the Physical Space

Commencing with Oracle Exadata System Software 22.1.0, you can use the sizeAllocated grid disk attribute to determine the total size of materialized space used by data in a sparse grid disk. When sizeAllocated approaches the physical grid disk size, you need to increase the physical grid disk size to support further data growth.

Otherwise, you can determine the amount of available physical space in the sparse grid disks by examining the difference between the TOTAL_MAT_MB and ALLOCATED_MAT_MB column values in V$ASM_DISK_SPARSE.
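For example, the following illustrative query computes the unmaterialized physical space remaining on each disk; the group number is an assumption and must match your sparse disk group:

SQL> SELECT disk_number, total_mat_mb - allocated_mat_mb free_mat_mb
     FROM v$asm_disk_sparse
     WHERE group_number = 5;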

To increase the physical size of the grid disks:

  1. Ensure that there is free space available on the respective cell disks.

    For example:

    # dcli -g cell_group -l root "cellcli -e list celldisk attributes name,freespace" 
    exa01celadm01: CD_00_exa01celadm01 0 
    exa01celadm01: CD_01_exa01celadm01 0 
    exa01celadm01: CD_02_exa01celadm01 0 
    exa01celadm01: CD_03_exa01celadm01 0 
    exa01celadm01: CD_04_exa01celadm01 0 
    exa01celadm01: CD_05_exa01celadm01 0 
    exa01celadm01: CD_06_exa01celadm01 0 
    exa01celadm01: CD_07_exa01celadm01 0 
    exa01celadm01: CD_08_exa01celadm01 0 
    exa01celadm01: CD_09_exa01celadm01 0 
    exa01celadm01: CD_10_exa01celadm01 0 
    exa01celadm01: CD_11_exa01celadm01 0 
    ...

    If there is no available free space, then you have to free up disk space being used by other grid disks, either by reducing the size of other grid disks or by removing unnecessary grid disks.

  2. Run the ALTER GRIDDISK command on the cells, specifying the grid disks to resize and the new physical size for each grid disk:

    The command syntax is:

    CellCLI> alter griddisk gridDisk1,gridDisk2,...,gridDiskN size=newPhysicalSize

    Run the command on each cell.

    For example, on the first cell:

    CellCLI> alter griddisk data01_CD_00_exa01celadm01,data01_CD_01_exa01celadm01,
    data01_CD_02_exa01celadm01,data01_CD_03_exa01celadm01,data01_CD_04_exa01celadm01,
    data01_CD_05_exa01celadm01,data01_CD_06_exa01celadm01,data01_CD_07_exa01celadm01,
    data01_CD_08_exa01celadm01,data01_CD_09_exa01celadm01,data01_CD_10_exa01celadm01,
    data01_CD_11_exa01celadm01 size=12000G

    Then on the next cell:

    CellCLI> alter griddisk data01_CD_00_exa01celadm02,data01_CD_01_exa01celadm02,
    data01_CD_02_exa01celadm02,data01_CD_03_exa01celadm02,data01_CD_04_exa01celadm02,
    data01_CD_05_exa01celadm02,data01_CD_06_exa01celadm02,data01_CD_07_exa01celadm02,
    data01_CD_08_exa01celadm02,data01_CD_09_exa01celadm02,data01_CD_10_exa01celadm02,
    data01_CD_11_exa01celadm02 size=12000G

    And so on.

    After you increase the physical size of the grid disks, the sparse disk group automatically consumes the additional space as required.

To shrink the physical size of the grid disks:

  1. Check the amount of materialized space used by data in the sparse grid disks. You cannot resize a grid disk to be smaller than the current amount of materialized data.

    Commencing with Oracle Exadata System Software 22.1.0, check the sizeAllocated grid disk attribute.

    For example:

    # dcli -g cell_group -l root "cellcli -e list griddisk attributes name,sizeAllocated" 
    exa01celadm01: data01_CD_00_exa01celadm01 1023.9375M 
    exa01celadm01: data01_CD_01_exa01celadm01 1024.4375M 
    exa01celadm01: data01_CD_02_exa01celadm01 1023.4375M 
    exa01celadm01: data01_CD_03_exa01celadm01 1024.9375M 
    exa01celadm01: data01_CD_04_exa01celadm01 1023.9375M 
    exa01celadm01: data01_CD_05_exa01celadm01 1024.4375M 
    exa01celadm01: data01_CD_06_exa01celadm01 1023.4375M 
    exa01celadm01: data01_CD_07_exa01celadm01 1024.9375M 
    exa01celadm01: data01_CD_08_exa01celadm01 1023.9375M 
    exa01celadm01: data01_CD_09_exa01celadm01 1024.4375M 
    exa01celadm01: data01_CD_10_exa01celadm01 1023.4375M 
    exa01celadm01: data01_CD_11_exa01celadm01 1024.9375M 
    ...

    Otherwise, examine the ALLOCATED_MAT_MB column value in V$ASM_DISK_SPARSE. For example:

    SQL> SELECT allocated_mat_mb FROM v$asm_disk_sparse
          WHERE group_number = sparse_disk_group_number;

    If you need to shrink the grid disks to be smaller than the amount of currently materialized data, you must drop objects from the sparse disk group.

  2. Run the ALTER GRIDDISK command on the cells, specifying the grid disks to shrink and the new physical size for each grid disk:

    The command syntax is the same as for increasing the grid disk size:

    CellCLI> alter griddisk gridDisk1,gridDisk2,...,gridDiskN size=newPhysicalSize

    Run the command on each cell.

    For example, on the first cell:

    CellCLI> alter griddisk data01_CD_00_exa01celadm01,data01_CD_01_exa01celadm01,
    data01_CD_02_exa01celadm01,data01_CD_03_exa01celadm01,data01_CD_04_exa01celadm01,
    data01_CD_05_exa01celadm01,data01_CD_06_exa01celadm01,data01_CD_07_exa01celadm01,
    data01_CD_08_exa01celadm01,data01_CD_09_exa01celadm01,data01_CD_10_exa01celadm01,
    data01_CD_11_exa01celadm01 size=4000G

    Then on the next cell:

    CellCLI> alter griddisk data01_CD_00_exa01celadm02,data01_CD_01_exa01celadm02,
    data01_CD_02_exa01celadm02,data01_CD_03_exa01celadm02,data01_CD_04_exa01celadm02,
    data01_CD_05_exa01celadm02,data01_CD_06_exa01celadm02,data01_CD_07_exa01celadm02,
    data01_CD_08_exa01celadm02,data01_CD_09_exa01celadm02,data01_CD_10_exa01celadm02,
    data01_CD_11_exa01celadm02 size=4000G

    And so on.

9.8.3 Monitoring Sparse Disk Group Utilization

The V$ASM_DISK and V$ASM_DISKGROUP views contain information about the virtual size and utilization of the sparse ASM disk group. New views V$ASM_DISK_SPARSE and V$ASM_DISKGROUP_SPARSE contain information about the actual size and utilization of the sparse ASM disk group. V$ASM_DISK_SPARSE also contains performance and usage metrics.

The following table describes the columns in V$ASM_DISK_SPARSE:

Table 9-1 V$ASM_DISK_SPARSE Columns and Descriptions

Column Description

GROUP_NUMBER

The number of the disk group containing the disk.

DISK_NUMBER

The number assigned to the disk within this disk group.

INCARNATION

The incarnation number for the disk.

ALLOCATED_MAT_MB

The total used physical and materialized capacity on the disk.

TOTAL_MAT_MB

The total physical capacity on the disk.

SPARSE_READS

The total number of I/O read requests on non-materialized regions of the disk.

SPARSE_BYTES_READ

The total number of bytes read from non-materialized regions of the disk.

SPARSE_READ_TIME

The time taken by sparse read I/O operations.

The following table describes the columns in V$ASM_DISKGROUP_SPARSE:

Table 9-2 V$ASM_DISKGROUP_SPARSE Columns and Descriptions

Column Description

GROUP_NUMBER

The cluster-wide number assigned to the disk group.

ALLOCATED_MAT_MB

The total used physical and materialized capacity of the disk group.

TOTAL_MAT_MB

The total physical capacity of the disk group.

The following example shows the used (allocated) space and the total space for disks in a specific disk group:

SQL> select 
     DISK_NUMBER          dsk_num,
     ALLOCATED_MAT_MB     alloc,
     TOTAL_MAT_MB         total
from V$ASM_DISK_SPARSE
where GROUP_NUMBER = 5;
 
DSK_NUM    ALLOC      TOTAL
---------- ---------- ----------
         0       5536      57336
         1       5424      57336
         2       5532      57336
         3       5424      57336
         4       5424      57336

In the following example, sparse ASM grid disks were created with an actual size of 56 GB and a virtual size of 560 GB. When you query V$ASM_DISK's OS_MB and TOTAL_MB columns, you can see the virtual size of 573440 MB (573440 MB / 1024 = 560 GB).

SQL> select os_mb, total_mb from v$asm_disk where group_number=4;

     OS_MB   TOTAL_MB 
---------- ---------- 
    573440     573440 
    573440     573440 
    573440     573440 

Querying V$ASM_DISK_SPARSE for TOTAL_MB, you can see the actual size of the ASM grid disk available for use. Note that each ASM grid disk stores metadata of approximately 2 MB per 16 GB of space allocated to the sparse ASM grid disk. For the 56 GB allocated per grid disk in this example, 8 MB of space is reserved for sparse disk metadata (57336 MB + 8 MB = 57344 MB; 57344 MB / 1024 = 56 GB).

SQL> select total_mb from v$asm_disk_sparse where group_number=4; 

 TOTAL_MB 
--------- 
 57336 
 57336 
 57336 

9.8.4 Repurposing Sparse Griddisks

You can change sparse griddisks back to normal griddisks.

If you previously created sparse griddisks, but now want to use them as normal griddisks, you can drop and recreate the disks.
  1. Drop the snapshot database that is currently using the sparse griddisks.
    RMAN> startup mount force;
    RMAN> delete database;
  2. Using SQL*Plus or ASMCMD, drop the Oracle ASM disk group that contains the sparse griddisks.
    SQL> DROP DISKGROUP sparse INCLUDING CONTENTS force;
  3. Using CellCLI, drop the griddisks in the storage cells.
    cellcli -e drop griddisk all harddisk prefix=SPARSEC1
  4. Recreate the griddisks.

    When you create the griddisks, use a size similar to that of the other disks, and add them back to the disk group of your choice. Do not specify the sparse attribute. See CREATE GRIDDISK for the command syntax.

  5. Add the recreated griddisks to an Oracle ASM disk group.

    Use the SQL ALTER DISKGROUP command with the ADD DISK clause to add the disk to the Oracle ASM disk group using syntax similar to the following:

    SQL> ALTER DISKGROUP disk_group_name ADD DISK 'o/cell_IPaddress/data*';

9.9 Monitoring Exadata Snapshots Using Database Statistics and Wait Events

The following table describes database-specific statistics that are useful for monitoring Exadata Snapshots. The statistics are available in various dynamic performance views, including V$SYSSTAT, and may be displayed in the Global Activity Statistics or Instance Activity Statistics section of an AWR report.

Statistic Description

physical read snap IO requests base

The number of physical I/Os on the parent or base file.

physical read snap IO requests copy

The number of physical I/Os on the snapshot file.

physical read snap bytes base

The number of bytes read from the parent or base file.

physical read snap bytes copy

The number of bytes read from the snapshot file.

physical read snap IO requests new allocation

The number of new allocations on the snapshot file.

physical read snap IO requests no data

The number of physical read I/O requests for which no physical I/O is done on the child file level.
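To view the current system-wide values for these statistics, you can run a simple query like the following against V$SYSSTAT:

SQL> SELECT name, value FROM v$sysstat WHERE name LIKE 'physical read snap%';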

The following table describes specific database wait events that are useful for monitoring Exadata Snapshots. The wait events are visible in various dynamic performance views, including V$SESSION, V$SYSTEM_EVENT and V$SESSION_EVENT, and may be displayed in the Wait Event sections of the AWR report.

Wait Event Description
cell physical read no I/O

This wait event occurs when a read from a sparse disk group does not return any data.
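Similarly, you can check how often this wait event has occurred with a query like the following:

SQL> SELECT event, total_waits, time_waited_micro
     FROM v$system_event
     WHERE event = 'cell physical read no I/O';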

The availability of a specific statistic or wait event is subject to the version of Oracle Database being used.